The linear programming bound for binary linear codes
Brouwer, A.E.
1993-01-01
Combining Delsarte's (1973) linear programming bound with the information that certain weights cannot occur, new upper bounds for d_min(n, k), the maximum possible minimum distance of a binary linear code with given word length n and dimension k, are derived.
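As a hedged illustration of the machinery behind Delsarte's bound (a sketch, not taken from the paper), the code below computes Krawtchouk polynomials, which supply the coefficients of the linear constraints in the LP, and checks that the distance distribution of the [7,4] Hamming code satisfies the resulting non-negativity conditions.

```python
from math import comb

def krawtchouk(n, k, x):
    # Krawtchouk polynomial K_k(x) = sum_j (-1)^j C(x, j) C(n - x, k - j);
    # these are the eigenvalues of the Hamming association scheme.
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

# Distance distribution of the [7,4] Hamming code: A_0 = 1, A_3 = 7, A_4 = 7, A_7 = 1.
n, A = 7, {0: 1, 3: 7, 4: 7, 7: 1}

# Delsarte's conditions: the Krawtchouk transform of a valid distance
# distribution must be non-negative. The LP bound maximizes sum(A_i) subject
# to these constraints, with A_i forced to 0 for weights known not to occur.
B = [sum(a * krawtchouk(n, k, i) for i, a in A.items()) for k in range(n + 1)]
assert all(b >= 0 for b in B)
```

For a linear code the transform recovers the dual code's weight distribution scaled by the code size: here B is [16, 0, 0, 0, 112, 0, 0, 0], reflecting that the dual [7,3] simplex code has all nonzero weights equal to 4.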
Binary Linear-Time Erasure Decoding for Non-Binary LDPC codes
Savin, Valentin
2009-01-01
In this paper, we first introduce the extended binary representation of non-binary codes, which corresponds to a covering graph of the bipartite graph associated with the non-binary code. Then we show that non-binary codewords correspond to binary codewords of the extended representation that further satisfy some simplex-constraint: that is, bits lying over the same symbol-node of the non-binary graph must form a codeword of a simplex code. Applied to the binary erasure channel, this descript...
New binary linear codes which are dual transforms of good codes
Jaffe, D.B.; Simonis, J.
1999-01-01
If C is a binary linear code, one may choose a subset S of C, and form a new code CST which is the row space of the matrix having the elements of S as its columns. One way of picking S is to choose a subgroup H of Aut(C) and let S be some H-stable subset of C. Using (primarily) this method for
Isometries and binary images of linear block codes over ℤ4 + uℤ4 and ℤ8 + uℤ8
Sison, Virgilio; Remillion, Monica
2017-10-01
Let F_2 be the binary field and ℤ_{2^r} the residue class ring of integers modulo 2^r, where r is a positive integer. For the finite 16-element commutative local Frobenius non-chain ring ℤ4 + uℤ4, where u is nilpotent of index 2, two weight functions are considered, namely the Lee weight and the homogeneous weight. With the appropriate application of these weights, isometric maps from ℤ4 + uℤ4 to the binary spaces F_2^4 and F_2^8, respectively, are established via the composition of other weight-based isometries. The classical Hamming weight is used on the binary space. The resulting isometries are then applied to linear block codes over ℤ4 + uℤ4 whose images are binary codes of predicted length, which may or may not be linear. Certain lower and upper bounds on the minimum distances of the binary images are also derived in terms of the parameters of the ℤ4 + uℤ4 codes. Several new codes and their images are constructed as illustrative examples. An analogous procedure is performed successfully on the ring ℤ8 + uℤ8, where u² = 0, which is a commutative local Frobenius non-chain ring of order 64. It turns out that the method is possible in general for the class of rings ℤ_{2^r} + uℤ_{2^r}, where u² = 0, for any positive integer r, using the generalized Gray map from ℤ_{2^r} to F_2^{2^{r-1}}.
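For context, here is a minimal Python sketch (not from the paper) of the classical Gray map from ℤ4 to F_2², the basic weight-based isometry that such constructions compose: the Lee weight of a word over ℤ4 equals the Hamming weight of its binary image.

```python
# Classical Gray map Z4 -> F2^2: 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def lee_weight(a):
    # Lee weight on Z4: min(a, 4 - a).
    return min(a % 4, 4 - a % 4)

def gray_image(word):
    # Extend the map coordinate-wise; a length-n word maps to a length-2n word.
    return [bit for a in word for bit in GRAY[a % 4]]

word = [0, 1, 2, 3]
image = gray_image(word)                                  # length doubles: 4 -> 8
assert sum(lee_weight(a) for a in word) == sum(image)     # the isometry property
```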
Detecting Malicious Code by Binary File Checking
Directory of Open Access Journals (Sweden)
Marius POPA
2014-01-01
The object, library and executable code is stored in binary files. The functionality of a binary file is altered when its content or program source code is changed, causing undesired effects. A direct content change is possible when the intruder knows the structural information of the binary file. The paper describes the structural properties of binary object files, how their content can be controlled by a possible intruder, and ways to identify malicious code in such files. Because object files are inputs to linking processes, early detection of malicious content is crucial to avoid infection of the binary executable files.
Random linear codes in steganography
Directory of Open Access Journals (Sweden)
Kamil Kaczyński
2016-12-01
Syndrome coding using linear codes is a technique that improves the parameters of steganographic algorithms. The use of random linear codes gives great flexibility in choosing the parameters of the linear code, and in parallel it offers easy generation of the parity check matrix. In this paper, a modification of the LSB algorithm is presented. A random linear [8, 2] code was used as the base for the algorithm modification. The proposed algorithm was implemented and its parameters were evaluated in practice on test images. Keywords: steganography, random linear codes, RLC, LSB
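To illustrate the syndrome-coding idea (a sketch using the well-known [7,4] Hamming code rather than the paper's [8,2] random code), the snippet below embeds 3 message bits into 7 cover LSBs while changing at most one bit; extraction is just a syndrome computation.

```python
def embed(cover_lsbs, msg_bits):
    # Flip at most one of 7 cover LSBs so their Hamming syndrome equals the
    # 3-bit message. Column i of the parity-check matrix is the binary
    # representation of i, so the syndrome directly names the bit to flip.
    x = list(cover_lsbs)
    s = 0
    for i, b in enumerate(x, start=1):   # syndrome of the cover: XOR of set positions
        if b:
            s ^= i
    t = msg_bits[0] * 4 + msg_bits[1] * 2 + msg_bits[2]
    pos = s ^ t                          # the single position whose flip fixes the syndrome
    if pos:                              # pos == 0 means the cover already carries the message
        x[pos - 1] ^= 1
    return x

def extract(stego_lsbs):
    # The receiver needs no key material beyond the code: recompute the syndrome.
    s = 0
    for i, b in enumerate(stego_lsbs, start=1):
        if b:
            s ^= i
    return [(s >> 2) & 1, (s >> 1) & 1, s & 1]

cover, msg = [1, 0, 1, 1, 0, 0, 1], [1, 0, 1]
stego = embed(cover, msg)
assert extract(stego) == msg
assert sum(c != s for c, s in zip(cover, stego)) <= 1
```

This is the general matrix-embedding principle: a random [8, 2] code as in the paper trades a different payload per cover bit and requires coset-leader search instead of the direct position lookup available for Hamming codes.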
Directory of Open Access Journals (Sweden)
Rumen Daskalov
2017-07-01
Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
Decoding Algorithms for Random Linear Network Codes
DEFF Research Database (Denmark)
Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank
2011-01-01
We consider the problem of efficient decoding of a random linear code over a finite field. In particular we are interested in the case where the code is random, relatively sparse, and use the binary finite field as an example. The goal is to decode the data using fewer operations to potentially achieve a high coding throughput and reduce energy consumption. We use an on-the-fly version of the Gauss-Jordan algorithm as a baseline, and provide several simple improvements to reduce the number of operations needed to perform decoding. Our tests show that the improvements can reduce the number…
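The baseline mentioned in the abstract can be sketched as follows (an illustrative on-the-fly Gauss-Jordan over GF(2), not the authors' optimized implementation): coding vectors are kept as Python integer bitmasks and each incoming packet is reduced as it arrives.

```python
def rref_insert(pivots, row):
    """On-the-fly Gauss-Jordan over GF(2). Rows are Python ints (bitmasks).
    Reduce an incoming coded row against stored pivot rows; keep it if innovative."""
    for r in pivots.values():            # forward-reduce against existing pivots
        p = r.bit_length() - 1
        if row >> p & 1:
            row ^= r
    if row == 0:
        return False                     # linearly dependent: discard the packet
    p = row.bit_length() - 1             # leading (pivot) bit position
    for q in list(pivots):               # back-substitute to keep stored rows reduced
        if pivots[q] >> p & 1:
            pivots[q] ^= row
    pivots[p] = row
    return True

pivots = {}
received = [0b1011, 0b0110, 0b1101, 0b0111]   # coding vectors of received packets
innovative = [rref_insert(pivots, r) for r in received]
# decoding completes once the number of pivots equals the generation size
```

Storing rows in reduced echelon form is what makes the per-packet work small; the paper's improvements further cut the number of XOR row operations this baseline performs.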
A Fast Optimization Method for General Binary Code Learning.
Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng
2016-09-22
Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term with a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both a supervised and an unsupervised hashing loss, together with the bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
Optimized reversible binary-coded decimal adders
DEFF Research Database (Denmark)
Thomsen, Michael Kirkedal; Glück, Robert
2008-01-01
Babu and Chowdhury [H.M.H. Babu, A.R. Chowdhury, Design of a compact reversible binary coded decimal adder circuit, Journal of Systems Architecture 52 (5) (2006) 272-282] recently proposed, in this journal, a reversible adder for binary-coded decimals. This paper corrects and optimizes their design. The optimized 1-decimal BCD full-adder, a 13 × 13 reversible logic circuit, is faster, and has lower circuit cost and fewer garbage bits. It can be used to build a fast reversible m-decimal BCD full-adder that has a delay of only m + 17 low-power reversible CMOS gates. For a 32-decimal (128-bit)… Keywords: Reversible logic circuit; Full-adder; Half-adder; Parallel adder; Binary-coded decimal; Application of reversible logic synthesis
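The decimal-correction step that any BCD adder, reversible or not, must realize can be sketched in plain Python (this illustrates the arithmetic only; the paper's contribution is the reversible circuit realization of it):

```python
def bcd_digit_add(a, b, carry_in=0):
    """One-decimal BCD full-adder: plain binary add, then correct when the
    4-bit sum exceeds 9 (equivalently, add 6 mod 16) and emit a carry."""
    s = a + b + carry_in          # range 0..19 for valid BCD digits
    if s > 9:
        return s - 10, 1          # corrected digit, carry-out
    return s, 0

def bcd_add(x, y):
    """Multi-decimal ripple BCD adder on digit lists, least significant first."""
    out, carry = [], 0
    for a, b in zip(x, y):
        d, carry = bcd_digit_add(a, b, carry)
        out.append(d)
    if carry:
        out.append(carry)
    return out

# 478 + 256 = 734, digits least significant first
assert bcd_add([8, 7, 4], [6, 5, 2]) == [4, 3, 7]
```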
Squares of Random Linear Codes
DEFF Research Database (Denmark)
Cascudo Pueyo, Ignacio; Cramer, Ronald; Mirandola, Diego
2015-01-01
Given a linear code $C$, one can define the $d$-th power of $C$ as the span of all componentwise products of $d$ elements of $C$. A power of $C$ may quickly fill the whole space. Our purpose is to answer the following question: does the square of a code ``typically'' fill the whole space? We give a positive answer for codes of dimension $k$ and length roughly $\frac{1}{2}k^2$ or smaller. Moreover, the convergence speed is exponential if the difference $k(k+1)/2-n$ is at least linear in $k$. The proof uses random coding and combinatorial arguments, together with algebraic tools involving the precise…
Linear network error correction coding
Guang, Xuan
2014-01-01
There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction, representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an…
An Optimal Linear Coding for Index Coding Problem
Pezeshkpour, Pouya
2015-01-01
An optimal linear coding solution for the index coding problem is established. Instead of a network coding approach focusing on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can easily be utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding…
Forms and Linear Network Codes
DEFF Research Database (Denmark)
Hansen, Johan P.
We present a general theory to obtain linear network codes utilizing forms and obtain explicit families of equidimensional vector spaces, in which any pair of distinct vector spaces intersect in the same small dimension. The theory is inspired by the methods of the author utilizing the osculating spaces of Veronese varieties. Linear network coding transmits information in terms of a basis of a vector space and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang introduced a metric on the set of vector spaces and showed that a minimal distance decoder for this metric achieves correct decoding if the dimension of the intersection of the transmitted and received vector space is sufficiently large. The vector spaces in our construction are equidistant in the above metric and the distance between any pair of vector spaces is large, making…
Adaptable recursive binary entropy coding technique
Kiely, Aaron B.; Klimesh, Matthew A.
2002-07-01
We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.
Non-binary Entanglement-assisted Stabilizer Quantum Codes
Riguang, Leng; Zhi, Ma
2011-01-01
In this paper, we show how to construct non-binary entanglement-assisted stabilizer quantum codes by using pre-shared entanglement between the sender and receiver. We also give an algorithm to determine the circuit for non-binary entanglement-assisted stabilizer quantum codes and some illustrated examples. The codes we constructed do not require the dual-containing constraint, and many non-binary classical codes, like non-binary LDPC codes, which do not satisfy the condition, can be used to c...
Time development of cascades by the binary collision approximation code
International Nuclear Information System (INIS)
Fukumura, A.; Ishino, S.; Sekimura, N.
1991-01-01
To link molecular dynamics calculations to binary collision approximation codes to explore high energy cascade damage, the time between consecutive collisions is introduced into the binary collision code MARLOWE. Calculated results for gold with the modified code show the formation of sub-cascades and their spatial and temporal overlap, which can affect the formation of defect clusters. (orig.)
Non-binary Hybrid LDPC Codes: Structure, Decoding and Optimization
Sassatelli, Lucile; Declercq, David
2007-01-01
In this paper, we propose to study and optimize a very general class of LDPC codes whose variable nodes belong to finite sets with different orders. We named this class of codes Hybrid LDPC codes. Although efficient optimization techniques exist for binary LDPC codes and more recently for non-binary LDPC codes, they both exhibit drawbacks due to different reasons. Our goal is to capitalize on the advantages of both families by building codes with binary (or small finite set order) and non-bin...
On the linear programming bound for linear Lee codes.
Astola, Helena; Tabus, Ioan
2016-01-01
Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to a fast execution, which allows the bounds to be computed efficiently for large parameter values of the linear codes.
Analysis of Non-binary Hybrid LDPC Codes
Sassatelli, Lucile; Declercq, David
2008-01-01
In this paper, we analyse asymptotically a new class of LDPC codes called Non-binary Hybrid LDPC codes, which has been recently introduced. We use density evolution techniques to derive a stability condition for hybrid LDPC codes, and prove their threshold behavior. We study this stability condition to conclude on asymptotic advantages of hybrid LDPC codes compared to their non-hybrid counterparts.
Weight Distribution for Non-binary Cluster LDPC Code Ensemble
Nozaki, Takayuki; Maehara, Masaki; Kasai, Kenta; Sakaniwa, Kohichi
In this paper, we derive the average weight distributions for the irregular non-binary cluster low-density parity-check (LDPC) code ensembles. Moreover, we give the exponential growth rate of the average weight distribution in the limit of large code length. We show that there exist $(2,d_c)$-regular non-binary cluster LDPC code ensembles whose normalized typical minimum distances are strictly positive.
Speech coding code- excited linear prediction
Bäckström, Tom
2017-01-01
This book provides scientific understanding of the most central techniques used in speech coding, both for advanced students as well as professionals with a background in speech, audio and/or digital signal processing. It provides a clear connection between the whys, hows and whats, thus enabling a clear view of the necessity, purpose and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: What do we want to achieve, and especially why is this goal important? Resource Information: What information is available and how can it be useful? And Resource Platform: What kind of platforms are we working with and what are their capabilities and restrictions? This includes computational, memory and acoustic properties and the transmission capacity of devices used. The book goes on to address Solutions: Which solutions have been proposed and how can they be used to reach the stated goals, and …
BHDD: Primordial black hole binaries code
Kavanagh, Bradley J.; Gaggero, Daniele; Bertone, Gianfranco
2018-06-01
BHDD (BlackHolesDarkDress) simulates primordial black hole (PBH) binaries that are clothed in dark matter (DM) halos. The software uses N-body simulations and analytical estimates to follow the evolution of PBH binaries formed in the early Universe.
Linear codes associated to determinantal varieties
DEFF Research Database (Denmark)
Beelen, Peter; Ghorpade, Sudhir R.; Hasan, Sartaj Ul
2015-01-01
We consider a class of linear codes associated to projective algebraic varieties defined by the vanishing of minors of a fixed size of a generic matrix. It is seen that the resulting code has only a small number of distinct weights. The case of varieties defined by the vanishing of 2×2 minors is ...
A symmetric Roos bound for linear codes
Duursma, I.M.; Pellikaan, G.R.
2006-01-01
The van Lint–Wilson AB-method yields a short proof of the Roos bound for the minimum distance of a cyclic code. We use the AB-method to obtain a different bound for the weights of a linear code. In contrast to the Roos bound, the role of the codes A and B in our bound is symmetric. We use the bound
Computer codes for designing proton linear accelerators
International Nuclear Information System (INIS)
Kato, Takao
1992-01-01
Computer codes for designing proton linear accelerators are discussed from the viewpoint of not only designing but also construction and operation of the linac. The codes are divided into three categories according to their purposes: 1) design code, 2) generation and simulation code, and 3) electric and magnetic fields calculation code. The role of each category is discussed on the basis of experience at KEK (the design of the 40-MeV proton linac and its construction and operation, and the design of the 1-GeV proton linac). We introduce our recent work relevant to three-dimensional calculation and supercomputer calculation: 1) tuning of MAFIA (three-dimensional electric and magnetic fields calculation code) for supercomputer, 2) examples of three-dimensional calculation of accelerating structures by MAFIA, 3) development of a beam transport code including space charge effects. (author)
Binary Systematic Network Coding for Progressive Packet Decoding
Jones, Andrew L.; Chatzigeorgiou, Ioannis; Tassi, Andrea
2015-01-01
We consider binary systematic network codes and investigate their capability of decoding a source message either in full or in part. We carry out a probability analysis, derive closed-form expressions for the decoding probability and show that systematic network coding outperforms conventional network coding. We also develop an algorithm based on Gaussian elimination that allows progressive decoding of source packets. Simulation results show that the proposed decoding algorithm can achieve …
Binary codes with impulse autocorrelation functions for dynamic experiments
International Nuclear Information System (INIS)
Corran, E.R.; Cummins, J.D.
1962-09-01
A series of binary codes exists which have autocorrelation functions approximating an impulse function. Signals whose behaviour in time can be expressed by such codes have spectra which are 'whiter' over a limited bandwidth and for a finite time than signals from a white noise generator. These codes are used to determine system dynamic responses using the correlation technique. Programmes have been written to compute codes of arbitrary length and to compute 'cyclic' autocorrelation and cross-correlation functions. Complete listings of these programmes are given, and a code of 1019 bits is presented. (author)
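A standard family with this property is the m-sequences produced by primitive LFSRs. The sketch below (illustrative, not the report's programmes) generates a period-15 sequence and verifies its two-valued cyclic autocorrelation: 15 at zero lag and -1 everywhere else, i.e. approximately an impulse.

```python
def lfsr_msequence(taps, m):
    # Maximal-length sequence from an m-stage Fibonacci LFSR with primitive
    # feedback taps (1-based stage numbers); the period is 2^m - 1.
    state = [1] * m
    seq = []
    for _ in range(2 ** m - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

def cyclic_autocorr(seq, lag):
    # Cyclic autocorrelation of a 0/1 sequence mapped to +/-1 chips.
    n = len(seq)
    s = [1 - 2 * b for b in seq]
    return sum(s[i] * s[(i + lag) % n] for i in range(n))

seq = lfsr_msequence((4, 3), 4)   # 4-stage register with primitive feedback, period 15
assert cyclic_autocorr(seq, 0) == 15
assert all(cyclic_autocorr(seq, k) == -1 for k in range(1, 15))
```

Any m-sequence has this impulse-like autocorrelation, which is what makes such codes usable as approximately white test inputs for correlation-based measurement of system dynamic responses.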
Min-Max decoding for non binary LDPC codes
Savin, Valentin
2008-01-01
Iterative decoding of non-binary LDPC codes is currently performed using either the Sum-Product or the Min-Sum algorithms or slightly different versions of them. In this paper, several low-complexity quasi-optimal iterative algorithms are proposed for decoding non-binary codes. The Min-Max algorithm is one of them and it has the benefit of two possible LLR domain implementations: a standard implementation, whose complexity scales as the square of the Galois field's cardinality and a reduced c...
Spread-spectrum communication using binary spatiotemporal chaotic codes
International Nuclear Information System (INIS)
Wang Xingang; Zhan Meng; Gong Xiaofeng; Lai, C.H.; Lai, Y.-C.
2005-01-01
We propose a scheme to generate binary code for baseband spread-spectrum communication by using a chain of coupled chaotic maps. We compare the performances of this type of spatiotemporal chaotic code with those of a conventional code used frequently in digital communication, the Gold code, and demonstrate that our code is comparable or even superior to the Gold code in several key aspects: security, bit error rate, code generation speed, and the number of possible code sequences. As the field of communicating with chaos faces doubts in terms of performance comparison with conventional digital communication schemes, our work gives a clear message that communicating with chaos can be advantageous and it deserves further attention from the nonlinear science community
Simulations of linear and Hamming codes using SageMath
Timur, Tahta D.; Adzkiya, Dieky; Soleha
2018-03-01
Digital data transmission over a noisy channel can distort the message being transmitted. The goal of coding theory is to ensure data integrity, that is, to find out if and where noise has distorted the message and what the original message was. Data transmission consists of three stages: encoding, transmission, and decoding. Linear and Hamming codes are the codes discussed in this work; the encoding algorithms are parity check and generator matrix, and the decoding algorithms are nearest neighbor and syndrome decoding. We aim to show that we can simulate these processes using the SageMath software, which has built-in classes for coding theory in general and linear codes in particular. First we consider the message as a binary vector of size k. This message will then be encoded to a vector of size n using the given algorithms. Then a noisy channel with a particular error probability will be created, over which the transmission will take place. The last task is decoding, which will correct and revert the received message back to the original message whenever possible, that is, if the number of errors that occurred is smaller than or equal to the correcting radius of the code. In this paper we use two types of data for simulations, namely vector and text data.
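The three stages can be sketched in plain Python with the [7,4] Hamming code (illustrative only; the paper itself uses SageMath's built-in coding-theory classes):

```python
import random

# Systematic generator and parity-check matrices of the [7,4] Hamming code.
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def encode(msg):
    # Stage 1: multiply the 4-bit message by G over GF(2) -> 7-bit codeword.
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def transmit(word, p, rng=random):
    # Stage 2: binary symmetric channel flipping each bit with probability p.
    return [b ^ (rng.random() < p) for b in word]

def decode(word):
    # Stage 3: syndrome decoding; corrects any single bit error
    # (the correcting radius of this code is 1).
    s = tuple(sum(h * w for h, w in zip(row, word)) % 2 for row in H)
    w = list(word)
    if any(s):
        w[list(zip(*H)).index(s)] ^= 1  # flip the position whose H-column equals s
    return w[:4]                        # systematic code: message is the first 4 bits

cw = encode([1, 0, 1, 1])
assert decode(transmit(cw, 0.05)) in ([1, 0, 1, 1], decode(transmit(cw, 0.05)))
```

SageMath wraps exactly these pieces as `codes.HammingCode`, channel objects, and decoder classes, so the simulation reduces to a few library calls there.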
Resettable binary latch mechanism for use with paraffin linear motors
Maus, Daryl; Tibbitts, Scott
1991-01-01
A new resettable Binary Latch Mechanism was developed utilizing a paraffin actuator as the motor. This linear actuator alternately latches between extended and retracted positions, maintaining either position with zero power consumption. The design evolution and kinematics of the latch mechanism are presented, as well as the development problems and lessons that were learned.
Sparsity in Linear Predictive Coding of Speech
DEFF Research Database (Denmark)
Giacobello, Daniele
… of the effectiveness of their application in audio processing. … one with direct applications to coding, but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also bring to develop the concept … The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given …
Non-binary unitary error bases and quantum codes
Energy Technology Data Exchange (ETDEWEB)
Knill, E.
1996-06-01
Error operator bases for systems of any dimension are defined and natural generalizations of the bit-flip/sign-change error basis for qubits are given. These bases allow generalizing the construction of quantum codes based on eigenspaces of Abelian groups. As a consequence, quantum codes can be constructed from linear codes over Z_n for any n. The generalization of the punctured code construction leads to many codes which permit transversal (i.e. fault tolerant) implementations of certain operations compatible with the error basis.
Blind Recognition of Binary BCH Codes for Cognitive Radios
Directory of Open Access Journals (Sweden)
Jing Zhou
2016-01-01
A novel algorithm for blind recognition of Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed to solve the problem of Adaptive Coding and Modulation (ACM) in cognitive radio systems. The recognition algorithm is based on soft decision situations. The code length is first estimated by comparing the Log-Likelihood Ratios (LLRs) of the syndromes, which are obtained according to the minimum binary parity check matrices of different primitive polynomials. After that, by comparing the LLRs of different minimum polynomials, the code roots and generator polynomial are reconstructed. Compared with some previous approaches, our algorithm yields better performance even at very low Signal-to-Noise Ratios (SNRs) with lower calculation complexity. Simulation results show the efficiency of the proposed algorithm.
New extremal binary self-dual codes of lengths 64 and 66 from bicubic planar graphs
Kaya, Abidin
2016-01-01
In this work, connected cubic planar bipartite graphs and related binary self-dual codes are studied. Binary self-dual codes of length 16 are obtained from the face-vertex incidence matrices of these graphs. By considering their lifts to the ring R_2, new extremal binary self-dual codes of length 64 are constructed as Gray images. More precisely, we construct 15 new codes of length 64. Moreover, 10 new codes of length 66 were obtained by applying a building-up construction to the binary codes. Code…
Non-Binary Protograph-Based LDPC Codes: Analysis,Enumerators and Designs
Sun, Yizeng
2013-01-01
Non-binary LDPC codes can outperform binary LDPC codes under the sum-product algorithm, at the cost of higher computational complexity. Non-binary LDPC codes based on protographs have the advantage of simple hardware architecture. In the first part of this thesis, we will use EXIT chart analysis to compute the thresholds of different protographs over GF(q). Based on threshold computation, some non-binary protograph-based LDPC codes are designed and their frame error rates are compared with binary LDPC codes. …
Linear Stability of Binary Alloy Solidification for Unsteady Growth Rates
Mazuruk, K.; Volz, M. P.
2010-01-01
An extension of the Mullins and Sekerka (MS) linear stability analysis to the unsteady growth rate case is considered for dilute binary alloys. In particular, the stability of the planar interface during the initial solidification transient is studied in detail numerically. The rapid solidification case, when the system is traversing through the unstable region defined by the MS criterion, has also been treated. It has been observed that the onset of instability is quite accurately defined by the "quasi-stationary MS criterion", when the growth rate and other process parameters are taken as constants at a particular time of the growth process. A singular behavior of the governing equations for the perturbed quantities at the constitutional supercooling demarcation line has been observed. However, when the solidification process, during its transient, crosses this demarcation line, a planar interface is stable according to the linear analysis performed.
Some new quasi-twisted ternary linear codes
Directory of Open Access Journals (Sweden)
Rumen Daskalov
2015-09-01
Let an [n, k, d]_q code be a linear code of length n, dimension k and minimum Hamming distance d over GF(q). One of the basic and most important problems in coding theory is to construct codes with the best possible minimum distances. In this paper seven quasi-twisted ternary linear codes are constructed. These codes are new and improve the best known lower bounds on the minimum distance in [6].
SYMBOL LEVEL DECODING FOR DUO-BINARY TURBO CODES
Directory of Open Access Journals (Sweden)
Yogesh Beeharry
2017-05-01
This paper investigates the performance of three different symbol level decoding algorithms for Duo-Binary Turbo codes. Explicit details of the computations involved in the three decoding techniques, and a computational complexity analysis, are given. Simulation results with different couple lengths, code-rates, and QPSK modulation reveal that symbol level decoding with bit-level information outperforms symbol level decoding by 0.1 dB on average in the error floor region. Moreover, a complexity analysis reveals that symbol level decoding with bit-level information reduces the decoding complexity by 19.6% in terms of the total number of computations required for each half-iteration, as compared to symbol level decoding.
Performance Analysis of New Binary User Codes for DS-CDMA Communication
Usha, Kamle; Jaya Sankar, Kottareddygari
2016-03-01
This paper analyzes new binary spreading codes through their correlation properties and also presents their performance over the additive white Gaussian noise (AWGN) channel. The proposed codes are constructed using gray and inverse gray codes. In this paper, an n-bit gray code appended with its n-bit inverse gray code to construct 2n-length binary user codes is discussed. Like Walsh codes, these binary user codes are available in sizes of powers of two; additionally, code sets of length 6 and its even multiples are also available. The simple construction technique and the generation of code sets of different sizes are the salient features of the proposed codes. Walsh codes and gold codes are considered for comparison in this paper, as these are popularly used for synchronous and asynchronous multi-user communications respectively. In the current work the auto- and cross-correlation properties of the proposed codes are compared with those of Walsh codes and gold codes. Performance of the proposed binary user codes for both synchronous and asynchronous direct sequence CDMA communication over the AWGN channel is also discussed in this paper. The proposed binary user codes are found to be suitable for both synchronous and asynchronous DS-CDMA communication.
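One plausible reading of the construction can be sketched as follows (hedged: the exact bit ordering and the meaning of "inverse gray code" are assumptions here, since the abstract does not fully specify them; "inverse gray" is taken as the inverse of the Gray map). The 2n-chip code for index i is the n-bit Gray code of i followed by the n bits of the inverse Gray map of i, mapped to ±1 chips.

```python
def gray(i):
    # Binary-reflected Gray code.
    return i ^ (i >> 1)

def inverse_gray(g):
    # Inverse map: recover i such that gray(i) == g.
    i = 0
    while g:
        i ^= g
        g >>= 1
    return i

def user_code(i, n):
    """2n-chip user code: n-bit Gray code of i followed by the n-bit inverse
    Gray map of i (one plausible reading of the construction), as +/-1 chips."""
    bits = [(gray(i) >> b) & 1 for b in reversed(range(n))]
    bits += [(inverse_gray(i) >> b) & 1 for b in reversed(range(n))]
    return [1 - 2 * b for b in bits]

# With n = 3 this yields a size-8 family of length-6 codes, matching the
# abstract's remark that length-6 code sets exist alongside power-of-two sizes.
codes = [user_code(i, 3) for i in range(8)]
assert len({tuple(c) for c in codes}) == 8   # Gray is a bijection, so codes are distinct
```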
The Technique of Binary Code Decompilation and Its Application in Information Security Sphere
Directory of Open Access Journals (Sweden)
M. O. Shudrak
2012-12-01
The authors describe a new technique of binary code decompilation and its possible applications in information security, such as software protection against reverse engineering and the analysis of code obfuscation in malware.
International Nuclear Information System (INIS)
Matijevič, Gal; Prša, Andrej; Orosz, Jerome A.; Welsh, William F.; Bloemen, Steven; Barclay, Thomas
2012-01-01
We present an automated classification of 2165 Kepler eclipsing binary (EB) light curves that accompanied the second Kepler data release. The light curves are classified using locally linear embedding, a general nonlinear dimensionality reduction tool, into morphology types (detached, semi-detached, overcontact, ellipsoidal). The method, related to a more widely used principal component analysis, produces a lower-dimensional representation of the input data while preserving local geometry and, consequently, the similarity between neighboring data points. We use this property to reduce the dimensionality in a series of steps to a one-dimensional manifold and classify light curves with a single parameter that is a measure of 'detachedness' of the system. This fully automated classification correlates well with the manual determination of morphology from the data release, and also efficiently highlights any misclassified objects. Once a lower-dimensional projection space is defined, the classification of additional light curves runs in a negligible time and the method can therefore be used as a fully automated classifier in pipeline structures. The classifier forms a tier of the Kepler EB pipeline that pre-processes light curves for the artificial intelligence based parameter estimator.
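The abstract reduces each light curve to a single morphology parameter. As a simplified stand-in (using plain PCA, which the abstract notes is related to LLE, and synthetic two-dip light curves rather than Kepler data), such a one-parameter projection can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)
phase = np.linspace(0, 1, 200, endpoint=False)

def synthetic_lc(width):
    # toy eclipsing-binary light curve: two Gaussian dips; wide dips
    # loosely mimic overcontact systems, narrow dips detached ones
    flux = 1.0 - 0.5 * np.exp(-((phase - 0.25) / width) ** 2) \
               - 0.3 * np.exp(-((phase - 0.75) / width) ** 2)
    return flux + rng.normal(0, 0.005, phase.size)

widths = np.linspace(0.01, 0.15, 50)
X = np.array([synthetic_lc(w) for w in widths])

# 1-D projection onto the leading principal component, playing the role
# of the single "detachedness" parameter from the abstract
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
morph = Xc @ Vt[0]
```

The projected parameter varies systematically with the dip width, illustrating how a one-dimensional embedding can order light curves by morphology.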
Riemann-Roch Spaces and Linear Network Codes
DEFF Research Database (Denmark)
Hansen, Johan P.
We construct linear network codes utilizing algebraic curves over finite fields and certain associated Riemann-Roch spaces, and present methods to obtain their parameters. In particular, we treat the Hermitian curve and the curves associated with the Suzuki and Ree groups, all having the maximal number of points for curves of their respective genera. Linear network coding transmits information in terms of a basis of a vector space, and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang introduced … in the above metric, making them suitable for linear network coding.
Non-linearity parameter of binary liquid mixtures at elevated pressures
Indian Academy of Sciences (India)
Ultrasonic studies in liquid mixtures provide valuable information about structure and interaction in such systems. The present investigation comprises a theoretical evaluation of the acoustic non-linearity parameter B/A of four binary liquid …
Superlattice configurations in linear chain hydrocarbon binary mixtures
Indian Academy of Sciences (India)
Unknown
Long-chain alkanes; binary mixtures; superlattices; discrete orientational changes. … a model of superlattice configuration was proposed, in terms of … In the C18 system, an angle of 3.3° was seen to play a …
Construction of Protograph LDPC Codes with Linear Minimum Distance
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
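The check-splitting step described above can be sketched on a base protomatrix: one check row is split into two rows whose halves are joined by a new degree-2 variable node, lowering the code rate. The splitting rule below is a plausible reading of the construction, not the authors' exact procedure.

```python
import numpy as np

def split_check(B, row, left_cols):
    """Split check `row` of protomatrix B into two checks joined by a new
    degree-2 variable node. Edges on columns in `left_cols` stay with the
    first new check; the remaining edges move to the second."""
    m, n = B.shape
    top = np.zeros(n, dtype=int)
    bot = B[row].copy()
    for c in left_cols:
        top[c] = B[row, c]
        bot[c] = 0
    B2 = np.vstack([np.delete(B, row, axis=0), top, bot])
    # new degree-2 column connecting the two new checks
    col = np.zeros((B2.shape[0], 1), dtype=int)
    col[-2] = col[-1] = 1
    return np.hstack([B2, col])

# hypothetical 2x3 base protograph (rate (3-2)/3 = 1/3)
B = np.array([[1, 2, 1],
              [2, 1, 3]])
B_low = split_check(B, 0, left_cols=[0])   # rate drops to (4-3)/4 = 1/4
```

Splitting preserves every original edge (column sums on the original variable nodes are unchanged) while adding exactly one degree-2 node, which is the structural ingredient the abstract ties to linear minimum distance.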
Context based Coding of Binary Shapes by Object Boundary Straightness Analysis
DEFF Research Database (Denmark)
Aghito, Shankar Manuel; Forchhammer, Søren
2004-01-01
A new lossless compression scheme for bilevel images targeted at binary shapes of image and video objects is presented. The scheme is based on a local analysis of the digital straightness of the causal part of the object boundary, which is used in the context definition for arithmetic encoding. … Tested on individual images of binary shapes and on binary layers of digital maps, the algorithm outperforms PWC, JBIG, and MPEG-4 CAE. On the binary shapes the code lengths are reduced by 21%, 25%, and 42%, respectively. On the maps the reductions are 34%, 32%, and 59%, respectively. The algorithm is also …
Further results on binary convolutional codes with an optimum distance profile
DEFF Research Database (Denmark)
Johannesson, Rolf; Paaske, Erik
1978-01-01
Fixed binary convolutional codes are considered which are simultaneously optimal or near-optimal according to three criteria: namely, distance profile d, free distance d_∞, and minimum number of weight-d_∞ paths. It is shown how the optimum distance profile criterion can be used to limit … codes. As a counterpart to quick-look-in (QLI) codes, which are not "transparent," we introduce rate R = 1/2 easy-look-in-transparent (ELIT) codes with a feedforward inverse (1 + D, D). In general, ELIT codes have d_∞ superior to that of QLI codes.
Osculating Spaces of Varieties and Linear Network Codes
DEFF Research Database (Denmark)
Hansen, Johan P.
2013-01-01
We present a general theory to obtain good linear network codes utilizing the osculating nature of algebraic varieties. In particular, we obtain from the osculating spaces of Veronese varieties explicit families of equidimensional vector spaces, in which any pair of distinct vector spaces intersects in the same dimension. Linear network coding transmits information in terms of a basis of a vector space, and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang introduced a metric on the set of vector spaces and showed that a minimal-distance decoder for this metric achieves correct decoding if the dimension of the intersection of the transmitted and received vector spaces is sufficiently large. The obtained osculating spaces of Veronese varieties are equidistant in the above metric. The parameters of the resulting linear network codes …
ICAROG: a computer code that converts a WIMSD/4 format library of BCD code to binary and vice versa
International Nuclear Information System (INIS)
Caldeira, A.D.
1991-09-01
A program called ICAROG, developed for the CYBER 170/750 system, is presented; it converts a nuclear data library in WIMSD/4 format from BCD to binary code and vice versa. ICAROG also has the capability of separating from the library the isotopes specified by the user. (author)
Directory of Open Access Journals (Sweden)
Yimeng Zhang
2013-05-01
A method for blind recognition of the coding parameters of binary Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed in this paper. We consider an intelligent communication receiver which can blindly recognize the coding parameters of the received data stream. The only prior knowledge is that the stream is encoded using binary BCH codes, while the coding parameters themselves are unknown. The problem arises in the context of non-cooperative communications and of adaptive coding and modulation (ACM) for cognitive radio networks. The recognition process includes two major procedures: code-length estimation and generator-polynomial reconstruction. A hard-decision method has been proposed in earlier literature. In this paper we propose a recognition approach for soft-decision situations with binary phase-shift keying (BPSK) modulation and additive white Gaussian noise (AWGN) channels. The code length is estimated by maximizing the root information dispersion entropy function. We then search for the code roots to reconstruct the primitive and generator polynomials. By utilizing the soft output of the channel, the recognition performance is improved, and simulations show the efficiency of the proposed algorithm.
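A hard-decision flavor of the generator-polynomial test can be sketched as follows: every codeword polynomial is divisible by the true generator, so the fraction of received blocks leaving zero remainder discriminates candidate generators. The (7,4) cyclic code with g(x) = x^3 + x + 1 is used as a small stand-in; the paper's soft-decision, entropy-based estimator is not reproduced here.

```python
def gf2_mod(dividend: int, divisor: int) -> int:
    # polynomial remainder over GF(2); polynomials encoded as bit masks
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def encode(msg: int, g: int, r: int) -> int:
    # systematic cyclic encoding: c(x) = m(x)*x^r + (m(x)*x^r mod g(x))
    shifted = msg << r
    return shifted | gf2_mod(shifted, g)

G = 0b1011                                   # g(x) = x^3 + x + 1
stream = [encode(m, G, 3) for m in range(16)]  # all (7,4) codewords

def fraction_divisible(words, g):
    """Fraction of blocks with zero remainder under candidate generator g."""
    return sum(gf2_mod(w, g) == 0 for w in words) / len(words)
```

The true generator leaves every block with zero remainder, while a wrong candidate of the same degree only divides a small fraction of blocks by coincidence.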
Random linear network coding for streams with unequally sized packets
DEFF Research Database (Denmark)
Taghouti, Maroua; Roetter, Daniel Enrique Lucani; Pedersen, Morten Videbæk
2016-01-01
State of the art Random Linear Network Coding (RLNC) schemes assume that data streams generate packets with equal sizes. This is an assumption that results in the highest efficiency gains for RLNC. A typical solution for managing unequal packet sizes is to zero-pad the smallest packets. However, ...
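The zero-padding step mentioned above, followed by GF(2) random linear coding, can be sketched as follows. For determinism the sketch enumerates every nonzero coefficient vector instead of drawing them at random, which guarantees a full-rank system; a real RLNC encoder draws random coefficients and adds a little overhead to make full rank likely.

```python
def pad(packets):
    """Zero-pad all packets to the size of the largest one."""
    L = max(len(p) for p in packets)
    return [p + bytes(L - len(p)) for p in packets]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def combine(packets, coeffs):
    """One coded packet: GF(2) linear combination selected by `coeffs`."""
    out = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            out = xor(out, p)
    return out

k = 3
packets = pad([b"hello", b"hi", b"network!"])
all_coeffs = [[(v >> i) & 1 for i in range(k)] for v in range(1, 2 ** k)]
coded = [(c, combine(packets, c)) for c in all_coeffs]

def decode(coded, k):
    """Gaussian elimination over GF(2) on the coefficient vectors,
    applying the same row operations to the payloads."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(k):
        piv = next(r for r in range(col, len(rows)) if rows[r][0][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r][0][:] = [a ^ b for a, b in zip(rows[r][0], rows[col][0])]
                rows[r][1][:] = xor(rows[r][1], rows[col][1])
    return [bytes(p) for _, p in rows[:k]]

decoded = decode(coded, k)
```

After decoding, the receiver recovers the padded packets; signalling the original lengths (so the padding can be stripped) is exactly the overhead the paper's schemes aim to manage more efficiently.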
Linear and nonlinear verification of gyrokinetic microstability codes
Bravenec, R. V.; Candy, J.; Barnes, M.; Holland, C.
2011-12-01
Verification of nonlinear microstability codes is a necessary step before comparisons or predictions of turbulent transport in toroidal devices can be justified. By verification we mean demonstrating that a code correctly solves the mathematical model upon which it is based. Some degree of verification can be accomplished indirectly from analytical instability threshold conditions, nonlinear saturation estimates, etc., for relatively simple plasmas. However, verification for experimentally relevant plasma conditions and physics is beyond the realm of analytical treatment and must rely on code-to-code comparisons, i.e., benchmarking. The premise is that the codes are verified for a given problem or set of parameters if they all agree within a specified tolerance. True verification requires comparisons for a number of plasma conditions, e.g., different devices, discharges, times, and radii. Running the codes and keeping track of linear and nonlinear inputs and results for all conditions could be prohibitive unless there was some degree of automation. We have written software to do just this and have formulated a metric for assessing agreement of nonlinear simulations. We present comparisons, both linear and nonlinear, between the gyrokinetic codes GYRO [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] and GS2 [W. Dorland, F. Jenko, M. Kotschenreuther, and B. N. Rogers, Phys. Rev. Lett. 85, 5579 (2000)]. We do so at the mid-radius for the same discharge as in earlier work [C. Holland, A. E. White, G. R. McKee, M. W. Shafer, J. Candy, R. E. Waltz, L. Schmitz, and G. R. Tynan, Phys. Plasmas 16, 052301 (2009)]. The comparisons include electromagnetic fluctuations, passing and trapped electrons, plasma shaping, one kinetic impurity, and finite Debye-length effects. Results neglecting and including electron collisions (Lorentz model) are presented. We find that the linear frequencies with or without collisions agree well between codes, as do the time averages of …
PopCORN: Hunting down the differences between binary population synthesis codes
Toonen, S.; Claeys, J. S. W.; Mennekens, N.; Ruiter, A. J.
2014-02-01
Context. Binary population synthesis (BPS) modelling is a very effective tool to study the evolution and properties of various types of close binary systems. The uncertainty in the parameters of the model and their effect on a population can be tested in a statistical way, which then leads to a deeper understanding of the underlying (sometimes poorly understood) physical processes involved. Several BPS codes exist that have been developed with different philosophies and aims. Although BPS has been very successful for studies of many populations of binary stars, in the particular case of the study of the progenitors of supernovae Type Ia, the predicted rates and ZAMS progenitors vary substantially between different BPS codes. Aims: To understand the predictive power of BPS codes, we study the similarities and differences in the predictions of four different BPS codes for low- and intermediate-mass binaries. We investigate the differences in the characteristics of the predicted populations, and whether they are caused by different assumptions made in the BPS codes or by numerical effects, e.g. a lack of accuracy in BPS codes. Methods: We compare a large number of evolutionary sequences for binary stars, starting with the same initial conditions following the evolution until the first (and when applicable, the second) white dwarf (WD) is formed. To simplify the complex problem of comparing BPS codes that are based on many (often different) assumptions, we equalise the assumptions as much as possible to examine the inherent differences of the four BPS codes. Results: We find that the simulated populations are similar between the codes. Regarding the population of binaries with one WD, there is very good agreement between the physical characteristics, the evolutionary channels that lead to the birth of these systems, and their birthrates. Regarding the double WD population, there is a good agreement on which evolutionary channels exist to create double WDs and a rough …
Gallager error-correcting codes for binary asymmetric channels
International Nuclear Information System (INIS)
Neri, I; Skantzos, N S; Bollé, D
2008-01-01
We derive critical noise levels for Gallager codes on asymmetric channels as a function of the input bias and the temperature. Using a statistical mechanics approach we study the space of codewords and the entropy in the various decoding regimes. We further discuss the relation of the convergence of the message passing algorithm with the endogenous property and complexity, characterizing solutions of the recursive equations of distributions for cavity fields.
Power Allocation Optimization: Linear Precoding Adapted to NB-LDPC Coded MIMO Transmission
Directory of Open Access Journals (Sweden)
Tarek Chehade
2015-01-01
In multiple-input multiple-output (MIMO) transmission systems, the channel state information (CSI) at the transmitter can be used to add linear precoding to the transmitted signals in order to improve the performance and the reliability of the transmission system. This paper investigates how to properly join precoded closed-loop MIMO systems and nonbinary low-density parity-check (NB-LDPC) codes. The q elements in the Galois field, GF(q), are directly mapped to q transmit symbol vectors. This allows NB-LDPC codes to fit perfectly with a MIMO precoding scheme, unlike binary LDPC codes. The new transmission model is detailed and studied for several linear precoders and various designed LDPC codes. We show that NB-LDPC codes are particularly well suited to be jointly used with precoding schemes based on the maximization of the minimum Euclidean distance (max-dmin criterion). These results are theoretically supported by extrinsic information transfer (EXIT) analysis and are confirmed by numerical simulations.
Reliability of Broadcast Communications Under Sparse Random Linear Network Coding
Brown, Suzie; Johnson, Oliver; Tassi, Andrea
2018-01-01
Ultra-reliable Point-to-Multipoint (PtM) communications are expected to become pivotal in networks offering future dependable services for smart cities. In this regard, sparse Random Linear Network Coding (RLNC) techniques have been widely employed to provide an efficient way to improve the reliability of broadcast and multicast data streams. This paper addresses the pressing concern of providing a tight approximation to the probability of a user recovering a data stream protected by this kind of coding …
A Tough Call : Mitigating Advanced Code-Reuse Attacks at the Binary Level
Veen, Victor Van Der; Goktas, Enes; Contag, Moritz; Pawoloski, Andre; Chen, Xi; Rawat, Sanjay; Bos, Herbert; Holz, Thorsten; Athanasopoulos, Ilias; Giuffrida, Cristiano
2016-01-01
Current binary-level Control-Flow Integrity (CFI) techniques are weak in determining the set of valid targets for indirect control flow transfers on the forward edge. In particular, the lack of source code forces existing techniques to resort to a conservative address-taken policy that …
Deep Learning Methods for Improved Decoding of Linear Codes
Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair
2018-02-01
The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly less parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
Linear dispersion codes in space-frequency domain for SCFDE
DEFF Research Database (Denmark)
Marchetti, Nicola; Cianca, Ernestina; Prasad, Ramjee
2007-01-01
This paper presents a general framework for applying Linear Dispersion Codes (LDC) in the space and frequency domains to Single Carrier - Frequency Domain Equalization (SCFDE) systems. Space-Frequency (SF-)LDC are more suitable than Space-Time (ST-)LDC in high-mobility environments. However, the application of LDC in the space-frequency domain in SCFDE systems is not as straightforward as in Orthogonal Frequency Division Multiplexing (OFDM), since there is no direct access to the subcarriers at the transmitter. This paper describes how to build the space-time dispersion matrices to be used …
Gao, Jian; Wang, Yongkang
2018-01-01
Structural properties of u-constacyclic codes over the ring F_p + uF_p are given, where p is an odd prime and u^2 = 1. Under a special Gray map from F_p + uF_p to F_p^2, some new non-binary quantum codes are obtained by this class of constacyclic codes.
An upper bound for codes for the noisy two-access binary adder channel
Tilborg, van H.C.A.
1986-01-01
Using earlier methods, a combinatorial upper bound is derived for |C| · |D|, where (C, D) is a δ-decodable code pair for the noisy two-access binary adder channel. Asymptotically, this bound reduces to R_1 = R_2 ≤ 3/2 + e log_2 e - (1/2 + e) log_2(1 + 2e) = 1/2 - e + …
Joint beam design and user selection over non-binary coded MIMO interference channel
Li, Haitao; Yuan, Haiying
2013-03-01
In this paper, we discuss the problem of sum-rate improvement for coded MIMO interference systems, and propose joint beam design and user selection over the interference channel. First, we formulate the non-binary LDPC coded MIMO interference network model. Then, the least-squares beam design for the MIMO interference system is derived, and a low-complexity user selection is presented. Simulation results confirm that the sum rate can be improved by joint user selection and beam design compared with a single interference-aligning beamformer.
Binary codes storage and data encryption in substrates with single proton beam writing technology
International Nuclear Information System (INIS)
Zhang Jun; Zhan Furu; Hu Zhiwen; Chen Lianyun; Yu Zengliang
2006-01-01
It has been demonstrated that characters can be written by proton beams in various materials. Contributing to the rapid development of proton beam writing technology, we introduce a new method for binary code storage and data encryption by writing binary codes of characters (BCC) in substrates with single proton beam writing technology. In this study, two kinds of BCC (ASCII BCC and long-bit encrypted BCC) were written in CR-39 by a 2.6 MeV single proton beam. Our results show that, in comparison to directly writing character shapes, writing ASCII BCC turned out to be about six times faster and required about one fourth of the area in the substrate. The approach of writing long-bit encrypted BCC by single proton beams supports preserving confidential information in substrates. Additionally, binary codes fabricated by MeV single proton beams in substrates are more robust than those formed by lasers, since MeV single proton beams can make much deeper pits in the substrates.
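The difference between writing character shapes and writing their binary codes can be illustrated by converting text to ASCII bit strings. The dot-matrix figure in the comment is an assumption used only to convey the intuition behind the reported speed-up, not a measurement from the paper.

```python
def ascii_bcc(text: str) -> str:
    """Binary codes of characters (BCC): 8-bit ASCII code per character."""
    return "".join(format(ord(ch), "08b") for ch in text)

# Rough area intuition (assumed figures, for illustration only): writing a
# character shape needs on the order of a 5x7 dot matrix (35 pit positions),
# while its ASCII code needs only 8 bit positions, consistent with the
# several-fold speed-up and smaller substrate area reported above.
code = ascii_bcc("CR39")
```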
The COBAIN (COntact Binary Atmospheres with INterpolation) Code for Radiative Transfer
Kochoska, Angela; Prša, Andrej; Horvat, Martin
2018-01-01
Standard binary star modeling codes make use of pre-existing solutions of the radiative transfer equation in stellar atmospheres. The various model atmospheres available today are consistently computed for single stars, under different assumptions - plane-parallel or spherical atmosphere approximation, local thermodynamical equilibrium (LTE) or non-LTE (NLTE), etc. However, they are nonetheless being applied to contact binary atmospheres by populating the surface corresponding to each component separately and neglecting any mixing that would typically occur at the contact boundary. In addition, single stellar atmosphere models do not take into account irradiance from a companion star, which can pose a serious problem when modeling close binaries. 1D atmosphere models are also solved under the assumption of an atmosphere in hydrodynamical equilibrium, which is not necessarily the case for contact atmospheres, as the potentially different densities and temperatures can give rise to flows that play a key role in the heat and radiation transfer. To resolve the issue of erroneous modeling of contact binary atmospheres using single-star atmosphere tables, we have developed a generalized radiative transfer code for computation of the normal emergent intensity of a stellar surface, given its geometry and internal structure. The code uses a regular mesh of equipotential surfaces in a discrete set of spherical coordinates, which are then used to interpolate the values of the structural quantities (density, temperature, opacity) at any given point inside the mesh. The radiative transfer equation is numerically integrated in a set of directions spanning the unit sphere around each point and iterated until the intensity values for all directions and all mesh points converge within a given tolerance. We have found that this approach, albeit computationally expensive, is the only one that can reproduce the intensity distribution of the non-symmetric contact binary atmosphere and …
Construction of Fixed Rate Non-Binary WOM Codes Based on Integer Programming
Fujino, Yoju; Wadayama, Tadashi
In this paper, we propose a construction of non-binary WOM (write-once memory) codes for WOM storages such as flash memories. The WOM codes discussed in this paper are fixed-rate WOM codes, where messages in a fixed alphabet of size M can be sequentially written to the WOM storage at least t*-times. A WOM storage is modeled by a state transition graph. The proposed construction has two features. First, it includes a systematic method to determine the encoding regions in the state transition graph. Second, it includes a labeling method for states by using integer programming. Several novel WOM codes for q-level flash memories with 2 cells are constructed by the proposed construction. They achieve worst-case numbers of writes t* that meet the known upper bound in many cases. In addition, we construct fixed-rate non-binary WOM codes with the capability to reduce ICI (inter-cell interference) of flash cells. One of the advantages of the proposed construction is its flexibility: it can be applied to various storage devices, to various dimensions (i.e., numbers of cells), and to various kinds of additional constraints.
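As a minimal illustration of the write-once constraint (cells may only change 0 to 1), the classic binary Rivest-Shamir WOM code writes a 2-bit message twice into 3 cells. This sketch is far simpler than the paper's fixed-rate non-binary codes obtained by integer programming; it only shows the kind of constraint those codes satisfy.

```python
# First-generation codewords: weight <= 1; second generation: their complements.
FIRST = {0: (0, 0, 0), 1: (0, 0, 1), 2: (0, 1, 0), 3: (1, 0, 0)}
SECOND = {m: tuple(1 - b for b in FIRST[m]) for m in FIRST}

def write(state, msg):
    """Write a 2-bit message into 3 write-once cells (bits flip 0 -> 1 only)."""
    if state == FIRST[msg]:
        return state                      # same message again: change nothing
    target = FIRST[msg] if state == (0, 0, 0) else SECOND[msg]
    # WOM constraint: no cell may go from 1 back to 0
    assert all(t >= s for s, t in zip(state, target))
    return target

def read(state):
    table = FIRST if sum(state) <= 1 else SECOND
    return next(m for m, c in table.items() if c == state)
```

The decoder only needs the cell weights to know which write generation it is reading, so no extra bookkeeping is stored.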
Hu, J H; Wang, Y; Cahill, P T
1997-01-01
This paper reports a multispectral code-excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward-adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256×256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet-transform-based embedded zerotree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
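The forward-adaptive AR prediction at the heart of such a scheme can be sketched on a single one-dimensional block (toy data and prediction order 2 are assumptions here; the actual MFCELP operates on 3-D macroblocks with analysis-by-synthesis excitation search):

```python
import numpy as np

# Toy forward-adaptive linear prediction: fit AR(2) coefficients to one
# block of samples and form the prediction residual (the part a CELP-style
# coder would then represent with an excitation codebook).
rng = np.random.default_rng(3)
x = np.sin(0.2 * np.arange(200)) + 0.01 * rng.normal(size=200)

p = 2
A = np.column_stack([x[p - 1:-1], x[p - 2:-2]])   # predictors x[n-1], x[n-2]
y = x[p:]                                          # targets x[n]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = y - A @ coef
```

The residual energy is far below the signal energy, which is what makes predictive coding worthwhile: only the small residual needs to be quantized.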
Random Linear Network Coding for 5G Mobile Video Delivery
Directory of Open Access Journals (Sweden)
Dejan Vukobratovic
2018-03-01
An exponential increase in mobile video delivery will continue with the demand for higher resolution, multi-view and large-scale multicast video services. The novel fifth generation (5G) 3GPP New Radio (NR) standard will bring a number of new opportunities for optimizing video delivery across both the 5G core and radio access networks. One of the promising approaches for video quality adaptation, throughput enhancement and erasure protection is the use of packet-level random linear network coding (RLNC). In this review paper, we discuss the integration of RLNC into the 5G NR standard, building upon the ideas and opportunities identified in 4G LTE. We explicitly identify and discuss in detail novel 5G NR features that provide support for RLNC-based video delivery in 5G, thus pointing out promising avenues for future research.
Monte Carlo simulation of medical linear accelerator using primo code
International Nuclear Information System (INIS)
Omer, Mohamed Osman Mohamed Elhasan
2014-12-01
The use of Monte Carlo simulation has become very important in the medical field, especially for calculations in radiotherapy. Various Monte Carlo codes have been developed to simulate the interactions of particles and photons with matter. One of these codes is PRIMO, which simulates radiation transport from the primary electron source of a linac to estimate the absorbed dose in a water phantom or a computerized tomography (CT) geometry. PRIMO is based on the PENELOPE Monte Carlo code. Measurements of the 6 MV photon beam PDD and profile were done for the Elekta Precise linear accelerator at the Radiation and Isotopes Centre Khartoum using a computerized Blue water phantom and a CC13 ionization chamber. The Accept software was used to control the phantom and to measure and verify dose distributions. An Elekta linac from the list of available linacs in PRIMO was tuned to model the Elekta Precise linear accelerator. Beam parameters of 6.0 MeV initial electron energy, 0.20 MeV FWHM, and 0.20 cm focal-spot FWHM were used, and an error of 4% between calculated and measured curves was found. The depth of maximum dose z_max was 1.40 cm, and homogeneous profiles in crossline and inline were acquired. A number of studies were done to verify the usability of the model; one of them concerned the effect of the number of histories on the accuracy of the simulation and the resulting profile for the same beam parameters. The effect was noticeable, and inaccuracies in the profile were reduced by increasing the number of histories. Another study concerned the effect of side-step errors on the calculated dose, which was compared with the measured dose for the same setting. It was in the range of 2% for a 5 cm shift, but it was higher in the calculated dose because of the small difference between the tuned model and the measured dose curves. Future developments include simulating asymmetrical fields, calculating the dose distribution in a computerized tomography (CT) volume, and studying the effect of beam modifiers on beam profiles for both electron and photon beams. (Author)
Casero-Alonso, V; López-Fidalgo, J; Torsney, B
2017-01-01
Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal-variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one, the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example, a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Coding Local and Global Binary Visual Features Extracted From Video Sequences
Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2015-11-01
Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks, while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the Bag-of-Visual-Word (BoVW) model. Several applications, including for example visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget, while attaining a target level of efficiency. In this paper we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can be conveniently adopted to support the Analyze-Then-Compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the Compress-Then-Analyze (CTA) paradigm. In this paper we experimentally compare ATC and CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: homography estimation and content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with CTA, especially in bandwidth limited scenarios.
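Binary descriptors of the kind discussed in this abstract are typically matched by Hamming distance, computed with XOR and popcount. A minimal illustration of that matching primitive (our own sketch, not the authors' code):

```python
def hamming_distance(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length binary descriptors."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

# Two 1-byte toy "descriptors": 0b1010 and 0b0110 differ in two bit positions.
d = hamming_distance(bytes([0b1010]), bytes([0b0110]))
```

In practice descriptors are 32 to 64 bytes long and the XOR/popcount runs over machine words, which is what makes binary features so cheap to match.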
Binary Large Object-Based Approach for QR Code Detection in Uncontrolled Environments
Directory of Open Access Journals (Sweden)
Omar Lopez-Rincon
2017-01-01
Quick Response (QR) barcode detection in non-arbitrary environments is still a challenging task despite many existing applications for finding 2D symbols. The main disadvantage of recent applications for QR code detection is low performance on rotated and distorted single or multiple symbols in images with variable illumination and the presence of noise. In this paper, a particular solution for QR code detection in uncontrolled environments is presented. The proposal consists in recognizing geometrical features of the QR code using a binary large object (BLOB)-based algorithm with subsequent iterative filtering of QR symbol position detection patterns, which does not require complex processing or the training of classifiers frequently used for these purposes. High precision and speed are achieved by adaptive threshold binarization of integral images. In contrast to well-known scanners, which fail to detect QR codes with medium to strong blurring, significant nonuniform illumination, considerable symbol deformation, and noise, the proposed technique provides a high recognition rate of 80%–100% at a speed compatible with real-time applications. In particular, the speed varies from 200 ms to 800 ms per single or multiple QR codes detected simultaneously in images with resolutions from 640 × 480 to 4080 × 2720, respectively.
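Adaptive threshold binarization with an integral image, as mentioned in the abstract above, compares each pixel with the mean of a local window; the integral image makes every window sum O(1). A minimal sketch (window size and bias factor are illustrative assumptions, not the paper's values):

```python
import numpy as np

def adaptive_binarize(img, win=15, bias=0.98):
    """Binarize a grayscale image against the local mean from an integral image."""
    h, w = img.shape
    ii = np.zeros((h + 1, w + 1), dtype=np.float64)
    ii[1:, 1:] = img.cumsum(0).cumsum(1)      # integral image with zero border
    r = win // 2
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        for x in range(w):
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            # window sum in O(1) from four corners of the integral image
            s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
            mean = s / ((y1 - y0) * (x1 - x0))
            out[y, x] = 255 if img[y, x] > mean * bias else 0
    return out
```

A real detector would vectorize the inner loops, but the corner arithmetic on `ii` is the part that scales to large images.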
FBC: a flat binary code scheme for fast Manhattan hash retrieval
Kong, Yan; Wu, Fuzhang; Gao, Lifa; Wu, Yanjun
2018-04-01
Hash coding is a widely used technique in approximate nearest neighbor (ANN) search, especially in document search and multimedia (such as image and video) retrieval. Based on the difference in distance measurement, hash methods are generally classified into two categories: Hamming hashing and Manhattan hashing. Benefitting from better neighborhood structure preservation, Manhattan hashing methods outperform earlier methods in search effectiveness. However, because they use decimal arithmetic operations instead of bit operations, Manhattan hashing is more time-consuming, which significantly decreases overall search efficiency. To solve this problem, we present an intuitive hash scheme which uses a Flat Binary Code (FBC) to encode the data points. As a result, the decimal arithmetic used in previous Manhattan hashing can be replaced by the more efficient XOR operator. Experiments show that, at the cost of a reasonable growth in memory, our FBC achieves an average speedup of more than 80% with no loss of search accuracy compared to state-of-the-art Manhattan hashing methods.
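The key property a flat binary code exploits is that the Hamming distance between unary ("thermometer") codes reproduces the Manhattan distance between the underlying integers, so XOR plus popcount replaces decimal subtraction. A few lines check the idea (a sketch of the principle, not the paper's exact construction):

```python
def flat_code(v: int, q: int) -> int:
    """Unary ('flat') code: integer v in [0, q] -> v one-bits, e.g. 3 -> 0b0111."""
    return (1 << v) - 1

def fbc_distance(a, b, q):
    """Hamming distance of flat codes, computed with XOR + popcount."""
    return sum(bin(flat_code(x, q) ^ flat_code(y, q)).count("1")
               for x, y in zip(a, b))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))
```

For any two quantized vectors the two distances coincide, because `flat_code(x) ^ flat_code(y)` has exactly `|x - y|` one-bits.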
Hariharan, M; Sindhu, R; Vijean, Vikneswaran; Yazid, Haniza; Nadarajaw, Thiyagar; Yaacob, Sazali; Polat, Kemal
2018-03-01
Infant cry signals carry several levels of information about the reason for crying (hunger, pain, sleepiness and discomfort) or the pathological status (asphyxia, deafness, jaundice, prematurity, autism, etc.) of an infant and are therefore suited for early diagnosis. In this work, a combination of wavelet packet based features and an Improved Binary Dragonfly Optimization based feature selection method was proposed to classify the different types of infant cry signals. Cry signals from two different databases were utilized. The first database contains 507 cry samples of normal (N), 340 of asphyxia (A), 879 of deaf (D), 350 of hungry (H) and 192 of pain (P). The second database contains 513 cry samples of jaundice (J), 531 of premature (Prem) and 45 of normal (N). Wavelet packet transform based energy and non-linear entropies (496 features), Linear Predictive Coding (LPC) based cepstral features (56 features) and Mel-frequency Cepstral Coefficients (MFCCs, 16 features) were extracted. The combined feature set consists of 568 features. To overcome the curse of dimensionality, the improved binary dragonfly optimization algorithm (IBDFO) was proposed to select the most salient features. Finally, an Extreme Learning Machine (ELM) kernel classifier was used to classify the different types of infant cry signals using all the features as well as only the highly informative ones. Several two-class and multi-class classification experiments were conducted. In the binary (two-class) experiments, maximum accuracies of 90.18% for H vs. P, 100% for A vs. N, 100% for D vs. N and 97.61% for J vs. Prem were achieved using the features selected by IBDFO (only 204 of the 568). For the classification of multiple cry signals (multi-class problem), the selected features could differentiate between three classes (N, A and D) with an accuracy of 100% and seven classes with an accuracy of 97.62%. The experimental
Fast Binary Coding for the Scene Classification of High-Resolution Remote Sensing Imagery
Directory of Open Access Journals (Sweden)
Fan Hu
2016-06-01
Scene classification of high-resolution remote sensing (HRRS) imagery is an important task in the intelligent processing of remote sensing images and has attracted much attention in recent years. Although the existing scene classification methods, e.g., the bag-of-words (BOW) model and its variants, can achieve acceptable performance, these approaches strongly rely on the extraction of local features and complicated coding strategies, which are usually time-consuming and demand much expert effort. In this paper, we propose a fast binary coding (FBC) method to effectively generate efficient discriminative scene representations of HRRS images. The main idea is inspired by unsupervised feature learning techniques and binary feature descriptions. More precisely, equipped with an unsupervised feature learning technique, we first learn a set of optimal "filters" from large quantities of randomly sampled image patches and then obtain feature maps by convolving the image scene with the learned filters. After binarizing the feature maps, we perform a simple hashing step to convert the binary-valued feature maps into an integer-valued feature map. Finally, statistical histograms computed on the integer-valued feature map are used as the global feature representation of an HRRS image scene, similar to the conventional BOW model. The analysis of the algorithm complexity and experiments on HRRS image datasets demonstrate that, in contrast with existing scene classification approaches, the proposed FBC has much faster computational speed and achieves comparable classification performance. In addition, we also propose two extensions to FBC, i.e., the spatial co-occurrence matrix and different visual saliency maps, for further improving its final classification accuracy.
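The binarize-then-hash step described above, turning k binary feature maps into a single integer-valued map whose histogram becomes the scene descriptor, can be sketched as follows (our own minimal version, with the binarization threshold assumed to be zero):

```python
import numpy as np

def binary_maps_to_integer_map(feature_maps):
    """Pack k binarized feature maps of shape (k, H, W) into one (H, W) integer map.

    Each pixel's k binary responses are read as the bits of one integer
    in [0, 2**k), so a histogram over the map has 2**k bins.
    """
    bits = (feature_maps > 0).astype(np.int64)   # binarize at 0
    k = bits.shape[0]
    return sum(bits[i] << i for i in range(k))   # bit i from map i

def scene_histogram(feature_maps):
    """BOW-style global descriptor: histogram of the integer-valued map."""
    k = feature_maps.shape[0]
    codes = binary_maps_to_integer_map(feature_maps)
    return np.bincount(codes.ravel(), minlength=2 ** k)
```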
International Nuclear Information System (INIS)
Vallee, R.L.
1968-01-01
The study of binary groups in their mathematical aspects constitutes the subject of binary analysis, the purpose of which is to develop simple, rigorous and practical methods needed by technicians, engineers and all those chiefly concerned with digital processing. This fast-expanding, if not decisive, subject is in fact coming to play a major part in nuclear electronics as well as in several other research areas. (authors) [fr]
FFT Algorithm for Binary Extension Finite Fields and Its Application to Reed–Solomon Codes
Lin, Sian Jheng
2016-08-15
Recently, a new polynomial basis over binary extension fields was proposed, such that the fast Fourier transform (FFT) over such fields can be computed with complexity of order O(n lg(n)), where n is the number of points evaluated in the FFT. In this paper, we reformulate this FFT algorithm so that it can be more easily understood and extended to develop frequency-domain decoding algorithms for (n = 2^m, k) systematic Reed-Solomon (RS) codes over F_{2^m}, m ∈ Z^+, with n − k a power of two. First, the basis of syndrome polynomials is reformulated in the decoding procedure so that the new transforms can be applied to it. A fast extended Euclidean algorithm is developed to determine the error locator polynomial. The computational complexity of the proposed decoding algorithm is O(n lg(n−k) + (n−k) lg²(n−k)), improving upon the best currently available decoding complexity O(n lg²(n) lg lg(n)), and reaching the best known complexity bound, established by Justesen in 1976. However, Justesen's approach applies only to codes over certain specific fields that admit Cooley-Tukey FFTs. As revealed by computer simulations, the proposed decoding algorithm is 50 times faster than the conventional one for the (2^16, 2^15) RS code over F_{2^16}.
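The RS codes in this abstract live in the binary extension field F_{2^m}, whose arithmetic reduces to shifts and XORs. A generic GF(2^8) multiply with the common reduction polynomial 0x11d (x^8 + x^4 + x^3 + x^2 + 1) illustrates this, as a background sketch rather than the paper's FFT algorithm:

```python
def gf_mul(a: int, b: int, poly: int = 0x11d, m: int = 8) -> int:
    """Multiply two elements of GF(2^m), shown for GF(2^8) with poly 0x11d.

    Shift-and-add where 'add' is XOR; overflow past bit m is reduced
    by XORing with the field's irreducible polynomial.
    """
    r = 0
    while b:
        if b & 1:
            r ^= a          # add current shifted copy of a
        b >>= 1
        a <<= 1
        if a >> m:
            a ^= poly       # reduce modulo the field polynomial
    return r
```

For example, multiplying by 2 is a single shift plus conditional reduction, which is why GF(2^m) codecs are so amenable to bit-level optimization.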
International Nuclear Information System (INIS)
Watanabe, Nishio; Shimomura, Yoshiharu
1985-01-01
The derivation of the basic equations of the computer simulation code MARLOWE, which uses the binary collision approximation developed by Robinson and Torrens, was examined in detail. The MARLOWE program was used to simulate the three-dimensional structure of displacement cascade damage in Au, Cu and Al generated by primary knock-on atoms (PKA) of 1 keV to 40 keV. Results were strongly affected by the choice of the Frenkel defect formation parameter E_disp and the ion movement parameter E_quit. With the close Frenkel defect recombination criteria and E_disp = E_quit, it was found that E_disp values of 11 eV, 5 eV and 5 eV are reasonable for the simulation of Au, Cu and Al, respectively. Cascades seem to have subcascade structures even for 40 keV PKA. (author)
Adaptation of Zerotrees Using Signed Binary Digit Representations for 3D Image Coding
Directory of Open Access Journals (Sweden)
Mailhes Corinne
2007-01-01
Zerotrees of wavelet coefficients have shown good adaptability for the compression of three-dimensional images. EZW, the original algorithm using zerotrees, shows good performance and was successfully adapted to 3D image compression. This paper focuses on the adaptation of EZW to the compression of hyperspectral images. The subordinate pass is suppressed to remove the need to keep the significant pixels in memory. To compensate for the loss due to this removal, signed binary digit representations are used to increase the efficiency of zerotrees. Contextual arithmetic coding with very limited contexts is also used. Finally, we show that this simplified version of 3D-EZW performs almost as well as the original one.
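Signed binary digit representations write an integer with digits in {-1, 0, 1} instead of {0, 1}. One standard such representation is the non-adjacent form (NAF), which minimizes the number of nonzero digits; the paper's exact representation may differ, so treat this as an illustration of the general idea:

```python
def naf(n: int):
    """Non-adjacent form of n >= 0: digits in {-1, 0, 1}, least significant
    first, with no two adjacent nonzero digits."""
    digits = []
    while n:
        if n & 1:
            d = 2 - (n & 3)   # +1 if n % 4 == 1, else -1
            n -= d            # makes n divisible by 4 when d = -1
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits
```

For example 7 = 8 - 1 becomes the digit string [-1, 0, 0, 1], three fewer nonzero digits than the plain binary 111; fewer significant digits is exactly what helps a zerotree coder.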
SPORTS - a simple non-linear thermalhydraulic stability code
International Nuclear Information System (INIS)
Chatoorgoon, V.
1986-01-01
A simple code, called SPORTS, has been developed for two-phase stability studies. A novel method of solution of the finite difference equations was devised and incorporated, and many of the approximations that are common in other stability codes are avoided. SPORTS is believed to be accurate and efficient, as small and large time-steps are permitted, and it is hence suitable for micro-computers. (orig.)
Linear tree codes and the problem of explicit constructions
Czech Academy of Sciences Publication Activity Database
Pudlák, Pavel
2016-01-01
Roč. 490, February 1 (2016), s. 124-144 ISSN 0024-3795 R&D Projects: GA ČR GBP202/12/G061 Institutional support: RVO:67985840 Keywords : tree code * error correcting code * triangular totally nonsingular matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016 http://www.sciencedirect.com/science/article/pii/S002437951500645X
Vectorized Matlab Codes for Linear Two-Dimensional Elasticity
Directory of Open Access Journals (Sweden)
Jonas Koko
2007-01-01
A vectorized Matlab implementation of the linear finite element method is provided for two-dimensional linear elasticity with mixed boundary conditions. Vectorization means that there is no loop over triangles. Numerical experiments show that our implementation is more efficient than the standard implementation with a loop over all triangles.
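The no-loop-over-triangles idea can be sketched in NumPy (Python here in place of Matlab; a minimal fragment of our own, not the authors' implementation): all element areas are computed at once from the node coordinate array `p` and the triangle connectivity array `t`.

```python
import numpy as np

def triangle_areas(p, t):
    """Areas of all triangles at once: p is (n_nodes, 2) coordinates,
    t is (n_tri, 3) vertex indices. No loop over triangles."""
    d1 = p[t[:, 1]] - p[t[:, 0]]          # edge vectors, all elements at once
    d2 = p[t[:, 2]] - p[t[:, 0]]
    return 0.5 * np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
```

The same fancy-indexing pattern extends to assembling element stiffness contributions, which is where the paper's speedup over a per-triangle loop comes from.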
Cook, James P; Mahajan, Anubha; Morris, Andrew P
2017-02-01
Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
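Scheme (i) above, effective-sample-size weighting of Z-scores, is the standard METAL-style formula; a compact sketch (function names are ours):

```python
import math

def n_effective(n_cases: float, n_controls: float) -> float:
    """Effective sample size of a case-control study: 4 / (1/Ncase + 1/Nctrl)."""
    return 4.0 / (1.0 / n_cases + 1.0 / n_controls)

def meta_z(z_scores, n_eff):
    """Sample-size-weighted meta-analysis Z: sum(w_i z_i) / sqrt(sum(w_i^2)),
    with w_i = sqrt(N_eff,i)."""
    w = [math.sqrt(n) for n in n_eff]
    num = sum(wi * zi for wi, zi in zip(w, z_scores))
    den = math.sqrt(sum(wi * wi for wi in w))
    return num / den
```

The effective-sample-size term is what absorbs case-control imbalance: a 50/150 study counts the same as a balanced 75/75 study of 150 total.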
Two "dual" families of Nearly-Linear Codes over ℤp, p odd
Asch, van A.G.; Tilborg, van H.C.A.
2001-01-01
Since the paper by Hammons et al. [1], various authors have shown an enormous interest in linear codes over the ring Z4. A special weight function on Z4 was introduced, and by means of the so-called Gray map φ : Z4 → Z2² a relation was established between linear codes over Z4 and certain interesting
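The Gray map mentioned above sends each element of Z4 to a pair of bits in such a way that the Lee weight on Z4 equals the Hamming weight of the image, making φ an isometry. A quick check:

```python
# Gray map phi : Z4 -> F2^2 from Hammons et al.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def lee_weight(x: int) -> int:
    """Lee weight on Z4: distance to 0 around the 4-cycle."""
    x %= 4
    return min(x, 4 - x)

def phi(word):
    """Apply the Gray map coordinatewise to a Z4 word."""
    return tuple(bit for x in word for bit in GRAY[x % 4])
```

Since the Hamming weight of `GRAY[x]` equals `lee_weight(x)` for every x, a length-n Z4 code maps to a length-2n binary code with the same weight distribution (in the Lee/Hamming sense), which is the engine behind the head entry on ℤ4 + uℤ4 images as well.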
Performance analysis of linear codes under maximum-likelihood decoding: a tutorial
National Research Council Canada - National Science Library
Sason, Igal; Shamai, Shlomo
2006-01-01
..., upper and lower bounds on the error probability of linear codes under ML decoding are surveyed and applied to codes and ensembles of codes on graphs. For upper bounds, we discuss various bounds where focus is put on Gallager bounding techniques and their relation to a variety of other reported bounds. Within the class of lower bounds, we ad...
Rate-Compatible LDPC Codes with Linear Minimum Distance
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel
2009-01-01
A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first mentioned submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-blocksize submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation
Iterative solution of linear equations in ODE codes. [Krylov subspaces
Energy Technology Data Exchange (ETDEWEB)
Gear, C. W.; Saad, Y.
1981-01-01
Each integration step of a stiff equation involves the solution of a nonlinear equation, usually by a quasi-Newton method that leads to a set of linear problems. Iterative methods for these linear equations are studied. Of particular interest are methods that do not require an explicit Jacobian, but can work directly with differences of function values, using Jv ≅ [f(x + δv) − f(x)]/δ. Some numerical experiments using a modification of LSODE are reported. 1 figure, 2 tables.
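The Jacobian-free directional difference described above can be sketched in a few lines (our notation; the step size and test function are illustrative):

```python
import numpy as np

def jacobian_vector(f, x, v, delta=1e-7):
    """Approximate J(x) @ v with one extra function evaluation:
    (f(x + delta*v) - f(x)) / delta."""
    return (f(x + delta * v) - f(x)) / delta
```

This is the primitive a Krylov method needs: it only ever asks for matrix-vector products J v, so the Jacobian never has to be formed or stored.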
Determination of recombination radius in Si for binary collision approximation codes
International Nuclear Information System (INIS)
Vizkelethy, Gyorgy; Foiles, Stephen M.
2016-01-01
Displacement damage caused by ions or neutrons in microelectronic devices can have a significant effect on the performance of these devices. It is therefore important to predict precisely not only the displacement damage profile but also its magnitude. Analytical methods and binary collision approximation (BCA) codes working with amorphous targets use the concept of displacement energy, the energy that a lattice atom has to receive to create a permanent replacement. This "displacement energy" was found to be direction dependent; it can range from 12 to 32 eV in silicon. Obviously, this model fails in BCA codes that work with crystalline targets, such as Marlowe. Marlowe does not use a displacement energy; instead, it uses only a lattice binding energy and then pairs the interstitial atoms with vacancies. Based on the configuration of the Frenkel pairs, it classifies them as close, near, or distant pairs, and considers the distant pairs to be the permanent replacements. Unfortunately, this separation is an ad hoc assumption, and the results do not agree with molecular dynamics (MD) calculations. After irradiation, there is prompt recombination of interstitials and vacancies that lie within a recombination radius of each other. To determine this recombination radius for Marlowe, we compared MD and Marlowe calculations over a range of ion energies in a single-crystal silicon target. The calculations showed that a single recombination radius of ∼7.4 Å in Marlowe gives excellent agreement with MD over the range of ion energies.
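The recombination step the abstract describes, removing interstitial-vacancy pairs closer than the recombination radius, can be sketched with a greedy pairing (our own toy version; Marlowe's actual pairing logic is more involved, and the 7.4 Å default is the value quoted above):

```python
import math

def prompt_recombination(interstitials, vacancies, radius=7.4):
    """Greedily annihilate each interstitial with the first vacancy found
    within `radius` (angstroms); return the surviving defects."""
    surviving_i = []
    vac = list(vacancies)
    for i_pos in interstitials:
        hit = None
        for idx, v_pos in enumerate(vac):
            if math.dist(i_pos, v_pos) < radius:
                hit = idx
                break
        if hit is None:
            surviving_i.append(i_pos)   # no partner: stable Frenkel defect
        else:
            vac.pop(hit)                # pair recombines, both removed
    return surviving_i, vac
```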
Super-linear Precision in Simple Neural Population Codes
Schwab, David; Fiete, Ila
2015-03-01
A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher Information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arises because it gives, through the Cramer-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well-known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly when considering the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
Development of non-linear vibration analysis code for CANDU fuelling machine
International Nuclear Information System (INIS)
Murakami, Hajime; Hirai, Takeshi; Horikoshi, Kiyomi; Mizukoshi, Kaoru; Takenaka, Yasuo; Suzuki, Norio.
1988-01-01
This paper describes the development of a non-linear, dynamic analysis code for the CANDU 600 fuelling machine (F-M), which includes a number of non-linearities such as gap with or without Coulomb friction, special multi-linear spring connections, etc. The capabilities and features of the code and the mathematical treatment for the non-linearities are explained. The modeling and numerical methodology for the non-linearities employed in the code are verified experimentally. Finally, the simulation analyses for the full-scale F-M vibration testing are carried out, and the applicability of the code to such multi-degree of freedom systems as F-M is demonstrated. (author)
Dynamics of High-Order Spin-Orbit Couplings about Linear Momenta in Compact Binary Systems*
International Nuclear Information System (INIS)
Huang Li; Wu Xin; Huang Guo-Qing; Mei Li-Jie
2017-01-01
This paper concerns the post-Newtonian Hamiltonian dynamics of spinning compact binaries, consisting of the Newtonian Kepler problem and the leading, next-to-leading and next-to-next-to-leading order spin-orbit couplings as linear functions of spins and momenta. When this Hamiltonian form is transformed to a Lagrangian form, besides the terms corresponding to the same-order terms in the Hamiltonian, several additional spin-spin coupling terms of third post-Newtonian (3PN), 4PN, 5PN, 6PN and 7PN order arise in the Lagrangian. This means that the Hamiltonian is not equivalent to the Lagrangian at the same PN order, but is exactly equivalent to the full Lagrangian without any truncations. The full Lagrangian, with the spin-spin couplings untruncated, is integrable and regular, whereas it is non-integrable and possibly chaotic when any one of the spin-spin terms is dropped. These results are also supported numerically. (paper)
Short binary convolutional codes with maximal free distance for rates 2/3 and 3/4
DEFF Research Database (Denmark)
Paaske, Erik
1974-01-01
A search procedure is developed to find good short binary (N, N−1) convolutional codes. It uses simple rules to discard from the complete ensemble of codes a large fraction whose free distance d_free either cannot achieve the maximum value or is equal to the d_free of some code in the remaining set. Further, the search among the remaining codes is started in a subset in which we expect a good chance of finding codes with large values of d_free. A number of short, optimum (in the sense of maximizing d_free) rate-2/3 and 3/4 codes found by the search procedure are listed.
Equidistant Linear Network Codes with maximal Error-protection from Veronese Varieties
DEFF Research Database (Denmark)
Hansen, Johan P.
2012-01-01
Linear network coding transmits information in terms of a basis of a vector space, and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang, in Coding for errors and erasures in random network coding (IEEE Transactions on Information Theory ...). We construct explicit families of vector spaces of constant dimension where any pair of distinct vector spaces is equidistant in the above metric. The parameters of the resulting linear network codes, which have maximal error-protection, are determined.
Linear-time general decoding algorithm for the surface code
Darmawan, Andrew S.; Poulin, David
2018-05-01
A quantum error correcting protocol can be substantially improved by taking into account features of the physical noise process. We present an efficient decoder for the surface code which can account for general noise features, including coherences and correlations. We demonstrate that the decoder significantly outperforms the conventional matching algorithm on a variety of noise models, including non-Pauli noise and spatially correlated noise. The algorithm is based on an approximate calculation of the logical channel using a tensor-network description of the noisy state.
Ma, Fanghui; Gao, Jian; Fu, Fang-Wei
2018-06-01
Let R = F_q + vF_q + v²F_q be a finite non-chain ring, where q is an odd prime power and v³ = v. In this paper, we propose two methods of constructing quantum codes from (α + βv + γv²)-constacyclic codes over R. The first is obtained via the Gray map and the Calderbank-Shor-Steane construction from Euclidean dual-containing (α + βv + γv²)-constacyclic codes over R. The second is obtained via the Gray map and the Hermitian construction from Hermitian dual-containing (α + βv + γv²)-constacyclic codes over R. As an application, some new non-binary quantum codes are obtained.
Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos
Xu, Dawen; Wang, Rangding
2015-05-01
A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.
Learning binary code via PCA of angle projection for image retrieval
Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong
2018-01-01
With the benefits of low storage cost and high query speed, binary code representations are widely researched for efficient retrieval of large-scale data. In image hashing methods, learning a hash function that embeds high-dimensional features into Hamming space is the key step for accurate retrieval. The principal component analysis (PCA) technique is widely used in compact hashing methods; most of these methods use PCA projection functions to project the original data onto several real-valued dimensions and then quantize each projected dimension into one bit by thresholding. The variances of the projected dimensions differ, and real-valued projection introduces quantization error. To avoid the large quantization error of real-valued projection, in this paper we propose to use a cosine similarity (angle) projection for each dimension; the angle projection preserves the original structure and is more compact with cosine values. We combined our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
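The PCA-projection-then-threshold pipeline this abstract builds on can be sketched as follows (plain sign thresholding only, without the ITQ rotation or the proposed angle projection; a baseline sketch of ours, not the authors' method):

```python
import numpy as np

def pca_binary_codes(X, nbits):
    """Hash rows of X to nbits-bit codes: center, project onto the top
    principal directions, then threshold each projection at zero."""
    Xc = X - X.mean(axis=0)                       # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    projected = Xc @ Vt[:nbits].T                 # top-nbits PCA projections
    return (projected > 0).astype(np.uint8)       # one bit per dimension
```

The quantization error the paper targets comes from this last thresholding step: projections with very different variances are all collapsed to a single bit, which ITQ-style rotations and angle projections try to mitigate.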
Protograph based LDPC codes with minimum distance linearly growing with block size
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly in block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
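The copy-and-permute (lifting) construction underlying protograph LDPC codes can be sketched as follows (a toy lifting of our own with random permutation blocks; base graphs with parallel edges, which real protographs often have, are not handled here):

```python
import numpy as np

def lift_protograph(base, Z, seed=0):
    """Lift a 0/1 base parity-check matrix by factor Z: every base edge
    becomes a random Z x Z permutation block, every zero a Z x Z zero block."""
    rng = np.random.default_rng(seed)
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                perm = rng.permutation(Z)
                H[i * Z:(i + 1) * Z, j * Z:(j + 1) * Z] = np.eye(Z, dtype=np.uint8)[perm]
    return H
```

The lifted matrix inherits the degree profile of the protograph (each lifted row/column has the same number of ones as its base row/column), which is why threshold analysis on the small base graph carries over to the full code.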
International Nuclear Information System (INIS)
Binh, Do Quang; Huy, Ngo Quang; Hai, Nguyen Hoang
2014-01-01
This paper presents a new approach based on a binary mixed integer coded genetic algorithm in conjunction with the weighted sum method for multi-objective optimization of fuel loading patterns for nuclear research reactors. The proposed genetic algorithm works with two types of chromosomes: binary and integer chromosomes, and consists of two types of genetic operators: one working on binary chromosomes and the other working on integer chromosomes. The algorithm automatically searches for the most suitable weighting factors of the weighting function and the optimal fuel loading patterns in the search process. Illustrative calculations are implemented for a research reactor type TRIGA MARK II loaded with the Russian VVR-M2 fuels. Results show that the proposed genetic algorithm can successfully search for both the best weighting factors and a set of approximate optimal loading patterns that maximize the effective multiplication factor and minimize the power peaking factor while satisfying operational and safety constraints for the research reactor.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement maximum likelihood decoding (MLD) of a code with reduced decoding complexity. The best-known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. The trellis structure of block codes, by contrast, went largely unstudied for many years. There are two major reasons for this inactive period of research. First, most coding theorists at that time believed that block codes did not have a simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes were inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG) so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J(2)) statistics can be applied directly. In a simulation study, TG, HL, and J(2) were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J(2) were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J(2). © 2015 John Wiley & Sons Ltd/London School of Economics.
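For reference, the inverse link functions named in this abstract (logit, probit, complementary log-log, and log) can be written down directly. This is a stdlib-only sketch of the links themselves, not of the TG/HL/J(2) statistics.

```python
# Inverse link functions for the GLM families discussed in the abstract:
# each maps a linear predictor eta to a mean/probability mu.
import math

def inv_logit(eta):            # canonical link for binary outcomes
    return 1.0 / (1.0 + math.exp(-eta))

def inv_probit(eta):           # standard normal CDF via erf
    return 0.5 * (1.0 + math.erf(eta / math.sqrt(2.0)))

def inv_cloglog(eta):          # complementary log-log
    return 1.0 - math.exp(-math.exp(eta))

def inv_log(eta):              # log link: mu = exp(eta), models relative risk
    return math.exp(eta)
```

The binary-outcome links agree at the extremes but differ in shape: at eta = 0, logit and probit both give mu = 0.5, while the asymmetric complementary log-log gives about 0.632.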
Integrating genomics and proteomics data to predict drug effects using binary linear programming.
Ji, Zhiwei; Su, Jing; Liu, Chenglin; Wang, Hongyan; Huang, Deshuang; Zhou, Xiaobo
2014-01-01
The Library of Integrated Network-Based Cellular Signatures (LINCS) project aims to create a network-based understanding of biology by cataloging changes in gene expression and signal transduction that occur when cells are exposed to a variety of perturbations. It is helpful for understanding cell pathways and facilitating drug discovery. Here, we developed a novel approach to infer cell-specific pathways and identify a compound's effects using gene expression and phosphoproteomics data under treatments with different compounds. Gene expression data were employed to infer potential targets of compounds and create a generic pathway map. Binary linear programming (BLP) was then developed to optimize the generic pathway topology based on the mid-stage signaling response of phosphorylation. To demonstrate effectiveness of this approach, we built a generic pathway map for the MCF7 breast cancer cell line and inferred the cell-specific pathways by BLP. The first group of 11 compounds was utilized to optimize the generic pathways, and then 4 compounds were used to identify effects based on the inferred cell-specific pathways. Cross-validation indicated that the cell-specific pathways reliably predicted a compound's effects. Finally, we applied BLP to re-optimize the cell-specific pathways to predict the effects of 4 compounds (trichostatin A, MS-275, staurosporine, and digoxigenin) according to compound-induced topological alterations. Trichostatin A and MS-275 (both HDAC inhibitors) inhibited the downstream pathway of HDAC1 and caused cell growth arrest via activation of p53 and p21; the effects of digoxigenin were totally opposite. Staurosporine blocked the cell cycle via p53 and p21, but also promoted cell growth via activated HDAC1 and its downstream pathway. Our approach was also applied to the PC3 prostate cancer cell line, and the cross-validation analysis showed very good accuracy in predicting effects of 4 compounds. In summary, our computational model can be
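As an illustration of the binary linear programming idea used above (choosing 0/1 variables to optimize a linear objective under linear constraints), here is a toy instance solved by exhaustive enumeration. The objective and constraints are arbitrary; the paper's actual formulation encodes pathway topology and uses a real ILP solver, neither of which is modeled here.

```python
# Toy binary linear program: maximize c.x over binary x subject to A.x <= b,
# solved by brute force (feasible only for a handful of variables).
from itertools import product

def solve_blp(c, A, b):
    n = len(c)
    best, best_x = None, None
    for x in product((0, 1), repeat=n):
        if all(sum(a_i * x_i for a_i, x_i in zip(row, x)) <= bound
               for row, bound in zip(A, b)):
            val = sum(c_i * x_i for c_i, x_i in zip(c, x))
            if best is None or val > best:
                best, best_x = val, x
    return best, best_x

# Maximize 3*x0 + 2*x1 + 2*x2 subject to x0 + x1 <= 1 and x1 + x2 <= 1.
value, x = solve_blp([3, 2, 2], [[1, 1, 0], [0, 1, 1]], [1, 1])
```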
Directory of Open Access Journals (Sweden)
Mihai-Victor PRICOP
2010-09-01
The present paper introduces a numerical approach to the static linear elasticity equations for anisotropic materials. The domain and boundary conditions are simple, to enable an easy implementation of the finite difference scheme. SOR and gradient methods are used to solve the resulting linear system. The simplicity of the geometry is also useful for MPI parallelization of the code.
Directory of Open Access Journals (Sweden)
A. A. Kovylin
2013-01-01
The article describes the problem of searching for binary pseudo-random sequences with a quasi-ideal autocorrelation function, which are to be used in contemporary communication systems, including mobile and wireless data transfer interfaces. In synthesizing sets of binary sequences, the target is to form them based on the minimax criterion, by which a sequence is considered optimal according to the intended application. In the course of the research, optimal sequences of order up to 52 were obtained, and an analysis of run lengths was carried out. The analysis showed regularities in the distribution of the number of runs of different lengths in the codes that are optimal under the chosen criterion, which should make it possible to optimize the search process for such codes in the future.
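A minimal version of the minimax search described above: among all ±1 sequences of a small length, keep those minimizing the largest absolute off-peak aperiodic autocorrelation. The paper's exact criterion and its order-52 search are far beyond this sketch; at length 7 the enumeration recovers, among others, the Barker-7 sequence.

```python
# Brute-force minimax search over short binary (+1/-1) sequences:
# minimize the maximum absolute aperiodic autocorrelation sidelobe.
from itertools import product

def max_sidelobe(seq):
    """Largest |aperiodic autocorrelation| at any nonzero shift."""
    n = len(seq)
    return max(abs(sum(seq[i] * seq[i + k] for i in range(n - k)))
               for k in range(1, n))

def best_sequences(n):
    """All +/-1 sequences of length n achieving the minimal max sidelobe."""
    seqs = list(product((1, -1), repeat=n))
    m = min(max_sidelobe(s) for s in seqs)
    return m, [s for s in seqs if max_sidelobe(s) == m]

m, winners = best_sequences(7)   # sidelobe level 1 is optimal at length 7
```

Since the shift-(n-1) correlation is always ±1, the sidelobe level can never drop below 1; sequences reaching exactly 1 (Barker sequences) are optimal under this criterion.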
Proceedings of the conference on computer codes and the linear accelerator community
International Nuclear Information System (INIS)
Cooper, R.K.
1990-07-01
The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned.
Object-Oriented Parallel Particle-in-Cell Code for Beam Dynamics Simulation in Linear Accelerators
International Nuclear Information System (INIS)
Qiang, J.; Ryne, R.D.; Habib, S.; Decyk, V.
1999-01-01
In this paper, we present an object-oriented three-dimensional parallel particle-in-cell code for beam dynamics simulation in linear accelerators. A two-dimensional parallel domain decomposition approach is employed within a message passing programming paradigm along with dynamic load balancing. Implementing object-oriented software design provides the code with better maintainability, reusability, and extensibility compared with conventional structure-based code. This also helps to encapsulate the details of communications syntax. Performance tests on SGI/Cray T3E-900 and SGI Origin 2000 machines show good scalability of the object-oriented code. Some important features of this code also include employing symplectic integration with linear maps of external focusing elements and using z as the independent variable, typical in accelerators. A successful application was done to simulate beam transport through three superconducting sections in the APT linac design.
Nonlinear to Linear Elastic Code Coupling in 2-D Axisymmetric Media.
Energy Technology Data Exchange (ETDEWEB)
Preston, Leiph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-08-01
Explosions within the earth nonlinearly deform the local media, but at typical seismological observation distances, the seismic waves can be considered linear. Although nonlinear algorithms can simulate explosions in the very near field well, these codes are computationally expensive and inaccurate at propagating these signals to great distances. A linearized wave propagation code, coupled to a nonlinear code, provides an efficient mechanism to both accurately simulate the explosion itself and to propagate these signals to distant receivers. To this end we have coupled Sandia's nonlinear simulation algorithm CTH to a linearized elastic wave propagation code for 2-D axisymmetric media (axiElasti) by passing information from the nonlinear to the linear code via time-varying boundary conditions. In this report, we first develop the 2-D axisymmetric elastic wave equations in cylindrical coordinates. Next we show how we design the time-varying boundary conditions passing information from CTH to axiElasti, and finally we demonstrate the coupling code via a simple study of the elastic radius.
Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks
DEFF Research Database (Denmark)
Heide, J; Zhang, Qi; Fitzek, F H P
2013-01-01
This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant reduction in the number of transmitted packets can be achieved. However, NC introduces additional computations and potentially a non-negligible transmission overhead, both of which depend on the chosen coding parameters. Therefore it is necessary to consider the trade-off that these coding parameters present in order to obtain the lowest energy consumption per transmitted bit. This problem is analyzed and suitable coding parameters are determined for the popular Tmote Sky platform. Compared to the use of traditional RLNC, these parameters enable a reduction in the energy spent per bit which grows...
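For readers unfamiliar with RLNC, a minimal GF(2) sketch follows: coded packets are random XOR combinations of the source packets, and the receiver decodes by Gauss-Jordan elimination once it has collected enough linearly independent combinations. Field size, generation size, and the integer packet representation are simplifications, not the parameters studied in the paper.

```python
# Minimal RLNC over GF(2): packets are small integers, coefficient vectors
# are bit masks, and XOR plays the role of field addition.
import random

def rlnc_encode(packets, rng):
    """One coded packet: a random nonzero GF(2) combination of the packets."""
    n = len(packets)
    mask = 0
    while mask == 0:             # skip the useless all-zero combination
        mask = rng.getrandbits(n)
    payload = 0
    for i in range(n):
        if (mask >> i) & 1:
            payload ^= packets[i]
    return mask, payload

def rlnc_decode(coded, n):
    """Gauss-Jordan elimination on (coefficient mask, payload) rows."""
    rows = [list(cp) for cp in coded]
    used = set()
    for col in range(n):
        bit = 1 << col
        pi = next((i for i in range(len(rows))
                   if i not in used and rows[i][0] & bit), None)
        if pi is None:
            return None          # rank deficient: wait for more coded packets
        used.add(pi)
        for i, r in enumerate(rows):
            if i != pi and r[0] & bit:
                r[0] ^= rows[pi][0]
                r[1] ^= rows[pi][1]
    return [next(r[1] for r in rows if r[0] == 1 << i) for i in range(n)]

rng = random.Random(7)
packets = [0b1010, 0b0111, 0b1100]                     # three source packets
coded = [rlnc_encode(packets, rng) for _ in range(6)]  # redundancy vs. loss
decoded = rlnc_decode(coded, len(packets))
```

Any n linearly independent coded packets suffice, regardless of which ones were lost, which is what makes RLNC attractive for lossy OAP dissemination.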
STACK DECODING OF LINEAR BLOCK CODES FOR DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM
Directory of Open Access Journals (Sweden)
H. Prashantha Kumar
2012-03-01
The boundaries between block and convolutional codes have become diffused after recent advances in the understanding of the trellis structure of block codes and the tail-biting structure of some convolutional codes. Therefore, decoding algorithms traditionally proposed for decoding convolutional codes have been applied for decoding certain classes of block codes. This paper presents the decoding of block codes using a tree structure. Many good block codes are presently known. Several of them have been used in applications ranging from deep space communication to error control in storage systems. But the primary difficulty with applying the Viterbi or BCJR algorithms to the decoding of block codes is that, even though they are optimum decoding methods, the promised bit error rates are not achieved in practice at data rates close to capacity. This is because the decoding effort grows with block length, and thus only short block length codes can be used. Therefore, an important practical question is whether a suboptimal realizable soft-decision decoding method can be found for block codes. A noteworthy result which provides a partial answer to this question is described in the following sections. This result of near-optimum decoding will be used as motivation for the investigation of different soft-decision decoding methods for linear block codes which can lead to the development of efficient decoding algorithms. The code tree can be treated as an expanded version of the trellis, where every path is totally distinct from every other path. We have derived the tree structure for the (8, 4) and (16, 11) extended Hamming codes and have succeeded in implementing the soft-decision stack algorithm to decode them. For the discrete memoryless channel, gains in excess of 1.5 dB at a bit error rate of 10^-5 with respect to conventional hard-decision decoding are demonstrated for these codes.
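As a point of reference for the (8, 4) extended Hamming code mentioned above, brute-force soft-decision ML decoding over all 16 codewords (the optimum that tree/stack decoding approximates) is small enough to write directly. The systematic generator equations and the BPSK observation values are illustrative, not taken from the paper.

```python
# (8, 4) extended Hamming code: systematic (7, 4) Hamming encoding plus an
# overall parity bit, and brute-force soft-decision ML decoding over the
# 16-word codebook (the benchmark a stack decoder approximates).
from itertools import product

def encode(bits):
    d1, d2, d3, d4 = bits
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    word = [d1, d2, d3, d4, p1, p2, p3]
    word.append(sum(word) % 2)       # overall parity: all codewords even weight
    return word

CODEBOOK = [encode(m) for m in product((0, 1), repeat=4)]

def ml_decode(received):
    """Soft-decision ML: nearest codeword in Euclidean distance, with the
    BPSK mapping bit 0 -> +1, bit 1 -> -1."""
    def dist(cw):
        return sum((r - (1 - 2 * b)) ** 2 for r, b in zip(received, cw))
    return min(CODEBOOK, key=dist)

# A noisy BPSK observation of the all-zero codeword (+1 in every position):
r = [0.9, 1.1, -0.2, 0.8, 1.0, 0.7, 0.3, 1.2]
best = ml_decode(r)
```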
Non-linearity parameter of binary liquid mixtures at elevated pressures
Indian Academy of Sciences (India)
parameter B/A of four binary liquid mixtures using the Tong and Dong equation at high pressures and ... in general as regular or ideal, as no recognized association takes place between the unlike molecules. In this case ... Using the definition and ...
Directory of Open Access Journals (Sweden)
Eric Z. Chen
2015-01-01
Error control codes have been widely used in data communications and storage systems. One central problem in coding theory is to optimize the parameters of a linear code and construct codes with the best possible parameters. There are tables of best-known linear codes over finite fields of sizes up to 9. Recently, there has been a growing interest in codes over $\mathbb{F}_{13}$ and other fields of size greater than 9. The main purpose of this work is to present a database of best-known linear codes over the field $\mathbb{F}_{13}$ together with upper bounds on the minimum distances. To find good linear codes to establish lower bounds on minimum distances, an iterative heuristic computer search algorithm is employed to construct quasi-twisted (QT) codes over the field $\mathbb{F}_{13}$ with high minimum distances. A large number of new linear codes have been found, improving previously best-known results. Tables of $[pm, m]$ QT codes over $\mathbb{F}_{13}$ with best-known minimum distances as well as a table of lower and upper bounds on the minimum distances for linear codes of length up to 150 and dimension up to 6 are presented.
Berdyugin, A.; Piirola, V.; Sakanoi, T.; Kagitani, M.; Yoneda, M.
2018-03-01
Aim. To study the binary geometry of the classic Algol-type triple system λ Tau, we have searched for polarization variations over the orbital cycle of the inner semi-detached binary, arising from light scattering in the circumstellar material formed from ongoing mass transfer. Phase-locked polarization curves provide an independent estimate for the inclination i, orientation Ω, and the direction of the rotation for the inner orbit. Methods: Linear polarization measurements of λ Tau in the B, V , and R passbands with the high-precision Dipol-2 polarimeter have been carried out. The data have been obtained on the 60 cm KVA (Observatory Roque de los Muchachos, La Palma, Spain) and Tohoku 60 cm (Haleakala, Hawaii, USA) remotely controlled telescopes over 69 observing nights. Analytic and numerical modelling codes are used to interpret the data. Results: Optical polarimetry revealed small intrinsic polarization in λ Tau with 0.05% peak-to-peak variation over the orbital period of 3.95 d. The variability pattern is typical for binary systems showing strong second harmonic of the orbital period. We apply a standard analytical method and our own light scattering models to derive parameters of the inner binary orbit from the fit to the observed variability of the normalized Stokes parameters. From the analytical method, the average for three passband values of orbit inclination i = 76° + 1°/-2° and orientation Ω = 15°(195°) ± 2° are obtained. Scattering models give similar inclination values i = 72-76° and orbit orientation ranging from Ω = 16°(196°) to Ω = 19°(199°), depending on the geometry of the scattering cloud. The rotation of the inner system, as seen on the plane of the sky, is clockwise. We have found that with the scattering model the best fit is obtained for the scattering cloud located between the primary and the secondary, near the inner Lagrangian point or along the Roche lobe surface of the secondary facing the primary. The inclination i
Linear-Time Non-Malleable Codes in the Bit-Wise Independent Tampering Model
DEFF Research Database (Denmark)
Cramer, Ronald; Damgård, Ivan Bjerre; Döttling, Nico
Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m' (eventually abort) completely unrelated with m. It is known that non-malleability is possible only for restricted classes of tampering functions. Since their introduction, a long line of works has established feasibility results of non-malleable codes against different families of tampering functions. However... This work builds on the non-malleable codes of Agrawal et al. (TCC 2015) and of Cheraghchi and Guruswami (TCC 2014) and improves the previous result in the bit-wise tampering model: it builds the first non-malleable codes with linear-time complexity and optimal rate (i.e. rate 1 - o(1)).
Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M
2010-02-01
In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol-level instead of bit-level processing but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper presents, the proposed NB-LDPC-CM scheme better addresses the needs of future OTNs than its prior-art binary counterpart, namely achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.
I-Ching, dyadic groups of binary numbers and the geno-logic coding in living bodies.
Hu, Zhengbing; Petoukhov, Sergey V; Petukhova, Elena S
2017-12-01
The ancient Chinese book I-Ching was written a few thousand years ago. It introduces the system of symbols Yin and Yang (equivalents of 0 and 1). It had a powerful impact on culture, medicine and science of ancient China and several other countries. From the modern standpoint, I-Ching declares the importance of dyadic groups of binary numbers for the Nature. The system of I-Ching is represented by the tables with dyadic groups of 4 bigrams, 8 trigrams and 64 hexagrams, which were declared as fundamental archetypes of the Nature. The ancient Chinese did not know about the genetic code of protein sequences of amino acids but this code is organized in accordance with the I-Ching: in particular, the genetic code is constructed on DNA molecules using 4 nitrogenous bases, 16 doublets, and 64 triplets. The article also describes the usage of dyadic groups as a foundation of the bio-mathematical doctrine of the geno-logic code, which exists in parallel with the known genetic code of amino acids but serves a different goal: to code the inherited algorithmic processes using the logical holography and the spectral logic of systems of genetic Boolean functions. Some relations of this doctrine with the I-Ching are discussed. In addition, the ratios of musical harmony that can be revealed in the parameters of DNA structure are also represented in the I-Ching book. Copyright © 2017 Elsevier Ltd. All rights reserved.
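The numerical correspondence cited in this abstract (4 bases, 16 doublets, 64 triplets vs. 4 bigrams, 8 trigrams, 64 hexagrams) is a counting fact that can be checked by enumeration; the sketch below verifies only the counts, not any biological claim.

```python
# Counting check: 4-letter alphabet vs. binary Yin/Yang alphabet.
from itertools import product

bases = "ACGT"                  # 4 nitrogenous bases
doublets = ["".join(p) for p in product(bases, repeat=2)]
triplets = ["".join(p) for p in product(bases, repeat=3)]

yin_yang = "01"                 # Yin = 0, Yang = 1
bigrams = ["".join(p) for p in product(yin_yang, repeat=2)]
trigrams = ["".join(p) for p in product(yin_yang, repeat=3)]
hexagrams = ["".join(p) for p in product(yin_yang, repeat=6)]
```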
ON-SKY DEMONSTRATION OF A LINEAR BAND-LIMITED MASK WITH APPLICATION TO VISUAL BINARY STARS
International Nuclear Information System (INIS)
Crepp, J.; Ge, J.; Kravchenko, I.; Serabyn, E.; Carson, J.
2010-01-01
We have designed and built the first band-limited coronagraphic mask used for ground-based high-contrast imaging observations. The mask resides in the focal plane of the near-infrared camera PHARO at the Palomar Hale telescope and receives a well-corrected beam from an extreme adaptive optics system. Its performance on-sky with single stars is comparable to current state-of-the-art instruments: contrast levels of ~10^-5 or better at 0.8 arcsec in the Ks band after post-processing, depending on how well non-common-path errors are calibrated. However, given the mask's linear geometry, we are able to conduct additional unique science observations. Since the mask does not suffer from pointing errors down its long axis, it can suppress the light from two different stars simultaneously, such as the individual components of a spatially resolved binary star system, and search for faint tertiary companions. In this paper, we present the design of the mask, the science motivation for targeting binary stars, and our preliminary results, including the detection of a candidate M-dwarf tertiary companion orbiting the visual binary star HIP 48337, which we are continuing to monitor with astrometry to determine its association.
Linear calculations of edge current driven kink modes with BOUT++ code
Energy Technology Data Exchange (ETDEWEB)
Li, G. Q., E-mail: ligq@ipp.ac.cn; Xia, T. Y. [Institute of Plasma Physics, CAS, Hefei, Anhui 230031 (China); Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)]; Xu, X. Q. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)]; Snyder, P. B.; Turnbull, A. D. [General Atomics, San Diego, California 92186 (United States)]; Ma, C. H.; Xi, P. W. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); FSC, School of Physics, Peking University, Beijing 100871 (China)]
2014-10-15
This work extends previous BOUT++ work to systematically study the impact of edge current density on edge localized modes, and to benchmark with the GATO and ELITE codes. Using the CORSICA code, a set of equilibria was generated with different edge current densities by keeping total current and pressure profile fixed. Based on these equilibria, the effects of the edge current density on the MHD instabilities were studied with the 3-field BOUT++ code. For the linear calculations, with increasing edge current density, the dominant modes are changed from intermediate-n and high-n ballooning modes to low-n kink modes, and the linear growth rate becomes smaller. The edge current provides stabilizing effects on ballooning modes due to the increase of local shear at the outer mid-plane with the edge current. For edge kink modes, however, the edge current does not always provide a destabilizing effect; with increasing edge current, the linear growth rate first increases, and then decreases. In benchmark calculations for BOUT++ against the linear results with the GATO and ELITE codes, the vacuum model has important effects on the edge kink mode calculations. By setting a realistic density profile and Spitzer resistivity profile in the vacuum region, the resistivity was found to have a destabilizing effect on both the kink mode and on the ballooning mode. With diamagnetic effects included, the intermediate-n and high-n ballooning modes can be totally stabilized for finite edge current density.
Bounded distance decoding of linear error-correcting codes with Gröbner bases
Bulygin, S.; Pellikaan, G.R.
2009-01-01
The problem of bounded distance decoding of arbitrary linear codes using Gröbner bases is addressed. A new method is proposed, which is based on reducing an initial decoding problem to solving a certain system of polynomial equations over a finite field. The peculiarity of this system is that, when
Throughput vs. Delay in Lossy Wireless Mesh Networks with Random Linear Network Coding
DEFF Research Database (Denmark)
Hundebøll, Martin; Pahlevani, Peyman; Roetter, Daniel Enrique Lucani
2014-01-01
This work proposes a new protocol applying on-the-fly random linear network coding in wireless mesh networks. The protocol provides increased reliability, low delay, and high throughput to the upper layers, while being oblivious to their specific requirements. These seemingly conflicting goals ...
Solving linear systems in FLICA-4, thermohydraulic code for 3-D transient computations
International Nuclear Information System (INIS)
Allaire, G.
1995-01-01
FLICA-4 is a computer code, developed at the CEA (France), devoted to steady state and transient thermal-hydraulic analysis of nuclear reactor cores, for small size problems (around 100 mesh cells) as well as for large ones (more than 100000), on either standard workstations or vector super-computers. As for time-implicit codes, the largest time- and memory-consuming part of FLICA-4 is the routine dedicated to solving the linear system (whose size is of the order of the number of cells). Therefore, the efficiency of the code is crucially influenced by the optimization of the algorithms used in assembling and solving linear systems: direct methods such as the Gauss (or LU) decomposition for moderate size problems, iterative methods such as the preconditioned conjugate gradient for large problems. 6 figs., 13 refs
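The solver split described above (direct factorization for moderate sizes, preconditioned conjugate gradient for large ones) can be illustrated with a minimal unpreconditioned CG. The matrix below is a toy symmetric positive-definite system, not a FLICA-4 discretization, and the size cutoff logic is left out.

```python
# Minimal conjugate gradient (CG) for a symmetric positive-definite system,
# the iterative half of the direct-vs-iterative strategy in the abstract.
# Pure Python, unpreconditioned; illustrative only.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def cg(A, b, tol=1e-10, max_iter=1000):
    x = [0.0] * len(b)
    r = b[:]                               # residual b - A x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol ** 2:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Small SPD test system (a 1-D Laplacian-like matrix):
A = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = cg(A, b)        # exact solution is [1, 1, 1]
```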
FFT Algorithm for Binary Extension Finite Fields and Its Application to Reed–Solomon Codes
Lin, Sian Jheng; Al-Naffouri, Tareq Y.; Han, Yunghsiang S.
2016-01-01
Recently, a new polynomial basis over binary extension fields was proposed, such that the fast Fourier transform (FFT) over such fields can be computed in the complexity of order O(n lg(n)), where n is the number of points evaluated in FFT
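FFTs over binary extension fields rest on GF(2^m) arithmetic. As a sketch of that basic ingredient, here is carry-less multiplication in GF(2^8) reduced modulo an irreducible polynomial; the AES polynomial 0x11B is an illustrative choice, and the FFT in the paper uses its own polynomial basis rather than this one.

```python
def gf256_mul(a, b, poly=0x11B):
    """Multiply in GF(2^8): shift-and-XOR (carry-less) product reduced mod an
    irreducible polynomial (0x11B = x^8+x^4+x^3+x+1, the AES choice)."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # addition in GF(2^m) is XOR
        b >>= 1
        a <<= 1
        if a & 0x100:            # reduce whenever the degree reaches 8
            a ^= poly
    return result
```

For instance, x * x^7 = x^8 reduces to x^4 + x^3 + x + 1 (0x1B), and 0x53 * 0xCA = 1, the well-known AES inverse pair.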
Non-binary coded modulation for FMF-based coherent optical transport networks
Lin, Changyu
The Internet has fundamentally changed modern communication, and current trends indicate that high-capacity demands will not be saturated anytime soon. From Shannon's theory, we know that information capacity is a logarithmic function of the signal-to-noise ratio (SNR) but a linear function of the number of dimensions. Ideally, capacity could be increased by raising the launch power; however, the nonlinear characteristics of silica optical fibers impose a constraint on the maximum achievable optical signal-to-noise ratio (OSNR), so there exists a nonlinear capacity limit on standard single-mode fiber (SSMF). To satisfy ever-growing capacity demands, several attempts have been made to employ additional degrees of freedom in the transmission system, such as few-mode fibers (FMFs), which can dramatically improve the spectral efficiency. On the other hand, for given physical links and network equipment, an effective solution to relax the OSNR requirement is forward error correction (FEC), as the response to the demand for high-speed reliable transmission. In this dissertation, we first discuss a model of the FMF with nonlinear effects considered. Secondly, we simulate the FMF-based OFDM system with various compensation and modulation schemes. Thirdly, we propose tandem-turbo-product non-binary byte-interleaved coded modulation (BICM) for next-generation high-speed optical transmission systems. Fourthly, we study the Q factor and mutual information as thresholds in the BICM scheme. Lastly, an experimental study of the limits of nonlinearity compensation with digital signal processing has been conducted.
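The log-in-SNR versus linear-in-dimensions trade-off invoked above can be made concrete with a small sketch (the SNR figures are illustrative assumptions, not values from the dissertation):

```python
import math

def capacity(snr_db, dims=1):
    """Shannon capacity per unit bandwidth: `dims` parallel channels,
    each contributing log2(1 + SNR) bits/s/Hz."""
    snr = 10 ** (snr_db / 10)
    return dims * math.log2(1 + snr)

# Doubling the launch power (+3 dB) buys only about one extra bit/s/Hz
# at high SNR (and in fiber is further limited by nonlinearity)...
gain_from_power = capacity(23) - capacity(20)

# ...whereas doubling the number of spatial modes doubles capacity outright,
# which is the appeal of few-mode fibers.
gain_from_modes = capacity(20, dims=2) - capacity(20)
```

This is why adding spatial dimensions (modes) scales capacity far more effectively than pushing launch power toward the nonlinear limit.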
DEFF Research Database (Denmark)
Fitzek, Frank; Toth, Tamas; Szabados, Áron
2014-01-01
This paper advocates the use of random linear network coding for storage in distributed clouds in order to reduce storage and traffic costs in dynamic settings, i.e. when adding and removing numerous storage devices/clouds on-the-fly and when the number of reachable clouds is limited. We introduce...... various network coding approaches that trade-off reliability, storage and traffic costs, and system complexity relying on probabilistic recoding for cloud regeneration. We compare these approaches with other approaches based on data replication and Reed-Solomon codes. A simulator has been developed...... to carry out a thorough performance evaluation of the various approaches when relying on different system settings, e.g., finite fields, and network/storage conditions, e.g., storage space used per cloud, limited network use, and limited recoding capabilities. In contrast to standard coding approaches, our...
On Rational Interpolation-Based List-Decoding and List-Decoding Binary Goppa Codes
DEFF Research Database (Denmark)
Beelen, Peter; Høholdt, Tom; Nielsen, Johan Sebastian Rosenkilde
2013-01-01
We derive the Wu list-decoding algorithm for generalized Reed–Solomon (GRS) codes by using Gröbner bases over modules and the Euclidean algorithm as the initial algorithm instead of the Berlekamp–Massey algorithm. We present a novel method for constructing the interpolation polynomial fast. We gi...... and a duality in the choice of parameters needed for decoding, both in the case of GRS codes and in the case of Goppa codes....
Directory of Open Access Journals (Sweden)
J. Mutwil
2009-07-01
Full Text Available Shrinkage phenomena during solidification and cooling of hypereutectic aluminium-silicon alloys (AlSi18, AlSi21) have been examined. A vertical shrinkage rod casting with a circular cross-section (constant or tapered) was used as the test sample. Two types of experiments were conducted: (1) on the development of linear dimension changes (linear expansion/contraction) of the test sample, and (2) on the development of shrinkage stresses in the test sample. In the linear contraction experiments, the linear dimension changes of the test sample and the metal test mould, as well as the temperature at six points of the test sample, were registered. In the shrinkage stress experiments, the shrinkage tension force, the linear dimension changes of the test sample, and the temperature at three points of the test sample were registered. The registered time dependences of the test bar and test mould dimension changes showed that the so-called pre-shrinkage extension was caused mainly by thermal expansion of the mould. The results showed that both the linear contraction and the development of shrinkage stresses depend clearly on the metal temperature in the warmest region of the sample (the thermal centre).
Comparison of the nuclear code systems LINEAR-RECENT-NJOY and NJOY
International Nuclear Information System (INIS)
Seehusen, J.
1983-07-01
The reconstructed cross sections of the code systems LINEAR-RECENT-GROUPIE (1982 version) and NJOY (1982 version) have been compared for several materials. Some fuel-cycle isotopes and structural materials of the ENDF/B-4 general purpose and ENDF/B-5 dosimetry files were chosen. The reconstructed total, capture and fission cross sections calculated by LINEAR-RECENT and NJOY were analyzed; the two sets of pointwise cross sections differ significantly. Another disagreement was found in the transformation of ENDF/B-4 and -5 files into data with a linear interpolation scheme. Unshielded multigroup constants at 0 K (620 groups, SAND-II) have been calculated by the three code systems LINEAR-RECENT-GROUPIE, NJOY and RESEND5-INTEND. The code system RESEND5-INTEND calculates wrong group constants and should no longer be used. The two sets of group constants obtained from ENDF/B-4 data using GROUPIE and NJOY differ for some group constants by more than 2%. Some disagreements at low energies (10^-3 eV) in the total cross section of Na-23 and Al-27 are difficult to understand. For ENDF/B-5 dosimetry data the capture group constants differ significantly. (Author) [pt
DEFF Research Database (Denmark)
Christensen, M. G.; Jensen, Søren Holdt
2006-01-01
A method for amplitude modulated sinusoidal audio coding is presented that has low complexity and low delay. It is based on a subband processing system where, in each subband, the signal is modeled as an amplitude modulated sum of sinusoids. The envelopes are estimated using frequency-domain linear prediction and the prediction coefficients are quantized. As a proof of concept, we evaluate different configurations in a subjective listening test, and this shows that the proposed method offers significant improvements in sinusoidal coding. Furthermore, the properties of the frequency...
A new approach of binary addition and subtraction by non-linear ...
Indian Academy of Sciences (India)
optical domain by exploitation of proper non-linear material-based switching technique. In this communication, the authors extend this technique for both adder and subtractor accommodating the spatial input encoding system.
International Nuclear Information System (INIS)
Kirillov, Igor R.; Obukhov, Denis M.; Ogorodnikov, Anatoly P.; Araseki, Hideo
2004-01-01
The paper describes and compares three computer codes that are able to estimate the double-supply-frequency (DSF) pulsations in annular linear induction pumps (ALIPs). The DSF pulsations result from the interaction of the magnetic field with the currents induced in the liquid metal, both varying at the supply frequency. They may be of some concern for the operation of electromagnetic pumps (EMPs) and need to be evaluated at the design stage. The results of computer simulation are compared with experimental ones for the annular linear induction pump ALIP-1.
Development of a 3D non-linear implicit MHD code
International Nuclear Information System (INIS)
Nicolas, T.; Ichiguchi, K.
2016-06-01
This paper details the on-going development of a 3D non-linear implicit MHD code, which aims at making possible large scale simulations of the non-linear phase of the interchange mode. The goal of the paper is to explain the rationale behind the choices made along the development, and the technical difficulties encountered. At the present stage, the development of the code has not been completed yet. Most of the discussion is concerned with the first approach, which utilizes cartesian coordinates in the poloidal plane. This approach shows serious difficulties in writing the preconditioner, closely related to the choice of coordinates. A second approach, based on curvilinear coordinates, also faced significant difficulties, which are detailed. The third and last approach explored involves unstructured tetrahedral grids, and indicates the possibility to solve the problem. The issue of domain meshing is addressed. (author)
New approach to derive linear power/burnup history input for CANDU fuel codes
International Nuclear Information System (INIS)
Lac Tang, T.; Richards, M.; Parent, G.
2003-01-01
The fuel element linear power/burnup history is a required input for the ELESTRES code in order to simulate CANDU fuel behavior during normal operating conditions, and also provides input for the accident analysis codes ELOCA and SOURCE. The purpose of this paper is to present a new approach to derive 'true', or at least more realistic, linear power/burnup histories. Such an approach can be used to recreate any typical bundle power history if only a single pair of instantaneous values of bundle power and burnup, together with the position in the channel, is known. The histories obtained could be useful for more realistic safety-analysis simulations in cases where the reference (overpower) history is not appropriate. (author)
Throughput vs. Delay in Lossy Wireless Mesh Networks with Random Linear Network Coding
Hundebøll, Martin; Pahlevani, Peyman; Roetter, Daniel Enrique Lucani; Fitzek, Frank
2014-01-01
This work proposes a new protocol applying on-the-fly random linear network coding in wireless mesh networks. The protocol provides increased reliability, low delay, and high throughput to the upper layers, while being oblivious to their specific requirements. These seemingly conflicting goals are achieved by design, using an on-the-fly network coding strategy. Our protocol also exploits relay nodes to increase the overall performance of individual links. Since our protocol naturally masks random p...
Rate-compatible protograph LDPC code families with linear minimum distance
Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.
Real time implementation of a linear predictive coding algorithm on digital signal processor DSP32C
International Nuclear Information System (INIS)
Sheikh, N.M.; Usman, S.R.; Fatima, S.
2002-01-01
Pulse code modulation (PCM) has been widely used in speech coding. However, due to its high bit rate, PCM has severe limitations in applications where high spectral efficiency is desired, for example in mobile communication and CD-quality broadcasting systems. These limitations have motivated research into bit-rate reduction techniques. Linear predictive coding (LPC) is one of the most powerful, though complex, techniques for bit-rate reduction. With the introduction of powerful digital signal processors (DSPs) it is possible to implement the complex LPC algorithm in real time. In this paper we present a real-time implementation of the LPC algorithm on AT&T's DSP32C at a sampling frequency of 8192 Hz. Application of the LPC algorithm to two speech signals is discussed. Using this implementation, a bit-rate reduction of 1:3 is achieved for better than toll-quality speech, while a reduction of 1:16 is possible for the speech quality required in military applications. (author)
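The core of an LPC coder of this kind is solving the autocorrelation normal equations, classically via the Levinson-Durbin recursion. A minimal sketch follows (the generic textbook method, not the authors' DSP32C implementation; the AR(2) test signal is an assumption for illustration):

```python
import numpy as np

def lpc(frame, order):
    """Levinson-Durbin recursion on the frame's autocorrelation.
    Returns the predictor polynomial a (a[0] = 1) and the residual energy;
    an LPC vocoder quantizes these instead of the raw samples."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]
        k = -acc / err                          # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
        err *= 1.0 - k * k
    return a, err

# Synthetic AR(2) "speech" frame: x[n] = 1.5 x[n-1] - 0.7 x[n-2] + noise.
rng = np.random.default_rng(0)
x = np.zeros(4000)
for n in range(2, 4000):
    x[n] = 1.5 * x[n - 1] - 0.7 * x[n - 2] + rng.normal(scale=0.1)
a, err = lpc(x, order=2)   # expect a close to [1, -1.5, 0.7]
```

Transmitting a handful of coefficients plus residual parameters per frame, rather than every PCM sample, is where the bit-rate reduction comes from.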
FEAST: a two-dimensional non-linear finite element code for calculating stresses
International Nuclear Information System (INIS)
Tayal, M.
1986-06-01
The computer code FEAST calculates stresses, strains, and displacements. The code is two-dimensional: either plane or axisymmetric calculations can be done. The code models elastic, plastic, creep, and thermal strains and stresses. Cracking can also be simulated. The finite element method is used to solve equations describing the following fundamental laws of mechanics: equilibrium; compatibility; constitutive relations; yield criterion; and flow rule. FEAST combines several unique features that permit large time-steps in even severely non-linear situations. The features include a special formulation permitting many finite elements to simultaneously cross the boundary from elastic to plastic behaviour; accommodation of large drops in yield strength due to changes in local temperature; and a three-step predictor-corrector method for plastic analyses. These features reduce computing costs. Comparisons against twenty analytical solutions and against experimental measurements show that predictions of FEAST are generally accurate to ±5%.
Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.
Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro
2011-09-26
The performance of a rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. The net effective coding gain of 10 dB is obtained at a BER of 10⁻⁶. With the aid of time-interleaving polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion-managed fiber is achieved without any countable errors. The proposed non-binary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10⁻². Potential issues such as phase ambiguity and coding length are also discussed with regard to implementing LDPC in current coherent optical systems. © 2011 Optical Society of America
Simulating the performance of a distance-3 surface code in a linear ion trap
Trout, Colin J.; Li, Muyuan; Gutiérrez, Mauricio; Wu, Yukai; Wang, Sheng-Tao; Duan, Luming; Brown, Kenneth R.
2018-04-01
We explore the feasibility of implementing a small surface code with 9 data qubits and 8 ancilla qubits, commonly referred to as surface-17, using a linear chain of 171Yb+ ions. Two-qubit gates can be performed between any two ions in the chain with gate time increasing linearly with ion distance. Measurement of the ion state by fluorescence requires that the ancilla qubits be physically separated from the data qubits to avoid errors on the data due to scattered photons. We minimize the time required to measure one round of stabilizers by optimizing the mapping of the two-dimensional surface code to the linear chain of ions. We develop a physically motivated Pauli error model that allows for fast simulation and captures the key sources of noise in an ion trap quantum computer including gate imperfections and ion heating. Our simulations showed a consistent requirement of a two-qubit gate fidelity of ≥99.9% for the logical memory to have a better fidelity than physical two-qubit operations. Finally, we perform an analysis of the error subsets from the importance sampling method used to bound the logical error rates to gain insight into which error sources are particularly detrimental to error correction.
A Linear Algebra Framework for Static High Performance Fortran Code Distribution
Directory of Open Access Journals (Sweden)
Corinne Ancourt
1997-01-01
Full Text Available High Performance Fortran (HPF) was developed to support data-parallel programming for single-instruction multiple-data (SIMD) and multiple-instruction multiple-data (MIMD) machines with distributed memory. The programmer is provided a familiar uniform logical address space and specifies the data distribution by directives. The compiler then exploits these directives to allocate arrays in the local memories, to assign computations to elementary processors, and to migrate data between processors when required. We show here that linear algebra is a powerful framework to encode HPF directives and to synthesize distributed code with space-efficient array allocation, tight loop bounds, and vectorized communications for INDEPENDENT loops. The generated code includes traditional optimizations such as guard elimination, message vectorization and aggregation, and overlap analysis. The systematic use of an affine framework makes it possible to prove the compilation scheme correct.
Analysis and Optimization of Sparse Random Linear Network Coding for Reliable Multicast Services
DEFF Research Database (Denmark)
Tassi, Andrea; Chatzigeorgiou, Ioannis; Roetter, Daniel Enrique Lucani
2016-01-01
Point-to-multipoint communications are expected to play a pivotal role in next-generation networks. This paper refers to a cellular system transmitting layered multicast services to a multicast group of users. Reliability of communications is ensured via different random linear network coding (RLNC......) techniques. We deal with a fundamental problem: the computational complexity of the RLNC decoder. The higher the number of decoding operations is, the more the user's computational overhead grows and, consequently, the faster the battery of mobile devices drains. By referring to several sparse RLNC...... techniques, and without any assumption on the implementation of the RLNC decoder in use, we provide an efficient way to characterize the performance of users targeted by ultra-reliable layered multicast services. The proposed modeling allows to efficiently derive the average number of coded packet...
Particle-in-Cell Code BEAMPATH for Beam Dynamics Simulations in Linear Accelerators and Beamlines
International Nuclear Information System (INIS)
Batygin, Y.
2004-01-01
A code library BEAMPATH for 2 - dimensional and 3 - dimensional space charge dominated beam dynamics study in linear particle accelerators and beam transport lines is developed. The program is used for particle-in-cell simulation of axial-symmetric, quadrupole-symmetric and z-uniform beams in a channel containing RF gaps, radio-frequency quadrupoles, multipole lenses, solenoids and bending magnets. The programming method includes hierarchical program design using program-independent modules and a flexible combination of modules to provide the most effective version of the structure for every specific case of simulation. Numerical techniques as well as the results of beam dynamics studies are presented
Particle-in-Cell Code BEAMPATH for Beam Dynamics Simulations in Linear Accelerators and Beamlines
Energy Technology Data Exchange (ETDEWEB)
Batygin, Y.
2004-10-28
A code library BEAMPATH for 2 - dimensional and 3 - dimensional space charge dominated beam dynamics study in linear particle accelerators and beam transport lines is developed. The program is used for particle-in-cell simulation of axial-symmetric, quadrupole-symmetric and z-uniform beams in a channel containing RF gaps, radio-frequency quadrupoles, multipole lenses, solenoids and bending magnets. The programming method includes hierarchical program design using program-independent modules and a flexible combination of modules to provide the most effective version of the structure for every specific case of simulation. Numerical techniques as well as the results of beam dynamics studies are presented.
Computer codes for three dimensional mass transport with non-linear sorption
International Nuclear Information System (INIS)
Noy, D.J.
1985-03-01
The report describes the mathematical background and data input to finite element programs for three dimensional mass transport in a porous medium. The transport equations are developed and sorption processes are included in a general way so that non-linear equilibrium relations can be introduced. The programs are described and a guide given to the construction of the required input data sets. Concluding remarks indicate that the calculations require substantial computer resources and suggest that comprehensive preliminary analysis with lower dimensional codes would be important in the assessment of field data. (author)
Further development of the V-code for recirculating linear accelerator simulations
Energy Technology Data Exchange (ETDEWEB)
Franke, Sylvain; Ackermann, Wolfgang; Weiland, Thomas [Institut fuer Theorie Elektromagnetischer Felder, Technische Universitaet Darmstadt (Germany); Eichhorn, Ralf; Hug, Florian; Kleinmann, Michaela; Platz, Markus [Institut fuer Kernphysik, Technische Universitaet Darmstadt (Germany)
2011-07-01
The Superconducting Darmstaedter LINear Accelerator (S-DALINAC) installed at the Institute of Nuclear Physics (IKP) at TU Darmstadt is designed as a recirculating linear accelerator. The beam is first accelerated up to 10 MeV in the injector beam line. It is then deflected by 180 degrees into the main linac. The linac section, with eight superconducting cavities, is passed up to three times, providing a maximal energy gain of 40 MeV on each passage. Due to this recirculating layout it is complicated to find an accurate setup for the various beam line elements. Fast online beam dynamics simulations can advantageously assist the operators because they provide a more detailed insight into the actual machine status. In this contribution, further developments of the moment-based simulation tool V-code, which enable the simulation of recirculating machines, are presented together with simulation results.
International Nuclear Information System (INIS)
Burke, B. J.; Kruger, S. E.; Hegna, C. C.; Zhu, P.; Snyder, P. B.; Sovinec, C. R.; Howell, E. C.
2010-01-01
A linear benchmark between the linear ideal MHD stability codes ELITE [H. R. Wilson et al., Phys. Plasmas 9, 1277 (2002)], GATO [L. Bernard et al., Comput. Phys. Commun. 24, 377 (1981)], and the extended nonlinear magnetohydrodynamic (MHD) code NIMROD [C. R. Sovinec et al., J. Comput. Phys. 195, 355 (2004)] is undertaken for edge-localized MHD instabilities. Two ballooning-unstable, shifted-circle tokamak equilibria are compared, where the stability characteristics are varied by changing the equilibrium plasma profiles. The equilibria model an H-mode plasma with a pedestal pressure profile and parallel edge currents. For both equilibria, NIMROD accurately reproduces the transition to instability (the marginally unstable mode), as well as the ideal growth spectrum for a large range of toroidal modes (n = 1-20). The results use the compressible MHD model and depend on a precise representation of 'ideal-like' and 'vacuum-like' or 'halo' regions within the code. The halo region is modeled by the introduction of a Lundquist-value profile that transitions from a large to a small value at a flux-surface location outside of the pedestal region. To model an ideal-like MHD response in the core and a vacuum-like response outside the transition, separate criteria on the plasma and halo Lundquist values are required. For the benchmarked equilibria the critical Lundquist values are 10^8 and 10^3 for the ideal-like and halo regions, respectively. Notably, this gives a ratio on the order of 10^5, which is much larger than experimentally measured values using Te values associated with the top of the pedestal and separatrix. Excellent agreement with ELITE and GATO calculations is obtained when sharp boundary transitions in the resistivity are used and a small amount of physical dissipation is added for conditions very near and below marginal ideal stability.
Iterative linear solvers in a 2D radiation-hydrodynamics code: Methods and performance
International Nuclear Information System (INIS)
Baldwin, C.; Brown, P.N.; Falgout, R.; Graziani, F.; Jones, J.
1999-01-01
Computer codes containing both hydrodynamics and radiation play a central role in simulating both astrophysical and inertial confinement fusion (ICF) phenomena. A crucial aspect of these codes is that they require an implicit solution of the radiation diffusion equations. The authors present in this paper the results of a comparison of five different linear solvers on a range of complex radiation and radiation-hydrodynamics problems. The linear solvers used are diagonally scaled conjugate gradient, GMRES with incomplete LU preconditioning, conjugate gradient with incomplete Cholesky preconditioning, multigrid, and multigrid-preconditioned conjugate gradient. These problems involve shock propagation, opacities varying over 5-6 orders of magnitude, tabular equations of state, and dynamic ALE (Arbitrary Lagrangian Eulerian) meshes. They perform a problem-size scalability study by comparing linear solver performance over a wide range of problem sizes, from 1,000 to 100,000 zones. The fundamental question addressed in this paper is: is it more efficient to invert the matrix in many inexpensive steps (like diagonally scaled conjugate gradient) or in fewer expensive steps (like multigrid)? In addition, what is the answer to this question as a function of problem size, and is the answer problem dependent? They find that the diagonally scaled conjugate gradient method performs poorly with the growth of problem size, increasing in both iteration count and overall CPU time with the size of the problem, and also increasing for larger time steps. For all problems considered, the multigrid algorithms scale almost perfectly (i.e., the iteration count is approximately independent of problem size and problem time step). For pure radiation flow problems (i.e., no hydrodynamics), they see speedups in CPU time of factors of ~15-30 for the largest problems when comparing the multigrid solvers to diagonally scaled conjugate gradient.
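The "many cheap steps versus few expensive steps" question can be made concrete with the cheapest of the five solvers, diagonally scaled (Jacobi-preconditioned) conjugate gradient. The sketch below uses a 1-D diffusion-like SPD matrix as an assumed stand-in, far simpler than the radiation-hydrodynamics systems in the paper:

```python
import numpy as np

def diag_scaled_cg(A, b, tol=1e-10, max_iter=2000):
    """Conjugate gradient preconditioned by the inverse diagonal of A."""
    Minv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv * r                 # apply the (trivially cheap) preconditioner
    p = z.copy()
    rz = r @ z
    for it in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Tridiagonal 1-D diffusion operator (SPD), as a toy diffusion matrix.
n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = diag_scaled_cg(A, b)
```

Each step costs only a matrix-vector product, but the iteration count grows with problem size; this is exactly the scaling weakness the paper reports for diagonally scaled CG, and the motivation for multigrid, whose per-step cost is higher but whose iteration count stays roughly constant.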
Xie, Xianhong; Xue, Xiaonan; Strickler, Howard D
2018-01-15
Longitudinal measurement of biomarkers is important in determining risk factors for binary endpoints such as infection or disease. However, biomarkers are subject to measurement error, and some are also subject to left-censoring due to a lower limit of detection. Statistical methods to address these issues are few. We herein propose a generalized linear mixed model and estimate the model parameters using the Monte Carlo Newton-Raphson (MCNR) method. Inferences regarding the parameters are made by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCNR method with existing methods, including the maximum likelihood (ML) method and the ad hoc approach of replacing the left-censored values with half of the detection limit (HDL). The results showed that the performance of the MCNR method is superior to ML and HDL with respect to the empirical standard error, as well as the coverage probability for the 95% confidence interval. The HDL method uses an incorrect imputation, and both the HDL and ML methods are constrained by the number of quadrature points; the MCNR method does not have this limitation and approximates the likelihood function better than the other methods. The improvement of the MCNR method is further illustrated with real-world data from a longitudinal study of local cervicovaginal HIV viral load and its effects on oncogenic HPV detection in HIV-positive women. Copyright © 2017 John Wiley & Sons, Ltd.
Koleva, Bojidarka B; Kolev, Tsonko M; Tsalev, Dimiter L; Spiteller, Michael
2008-01-22
A quantitative infrared (IR) and Raman spectroscopic approach for the determination of phenacetin (Phen) and salophen (Salo) in binary solid mixtures with caffeine, phenacetin/caffeine (System 1) and salophen/caffeine (System 2), is presented. Absorbance ratios of the 746 cm⁻¹ or 721 cm⁻¹ peaks (characteristic of each determined compound in Systems 1 and 2) to the 1509 cm⁻¹ and 1616 cm⁻¹ peaks (attributed to Phen and Salo, respectively) were used. The IR spectroscopy gives a confidence of 98.9% (System 1) and 98.3% (System 2), while the Raman spectroscopic data have a slightly higher confidence of 99.1% for both systems. The limits of detection for the compounds studied were 0.013 and 0.012 mole fraction for the IR and Raman methods, respectively. Solid-state linear dichroic infrared (IR-LD) spectral analysis of the solid mixtures was carried out with a view to obtaining an experimental IR spectroscopic assignment of the characteristic IR bands of both determined compounds. The orientation technique of a nematic liquid crystal suspension was used, combined with the so-called reducing-difference procedure for polarized spectra interpretation. The possibility of obtaining supramolecular stereostructural information for Phen and Salo by comparing spectroscopic and crystallographic data has also been shown. An independent high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) analysis was performed for comparison and validation of the vibrational spectroscopy data. Applications to 10 tablets of the commercial products APC and Sedalgin are given.
Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.
Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo
2015-08-01
Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of the support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas, and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, orthogonal matching pursuit, is two orders of magnitude faster than a well-known RBF network design algorithm, the orthogonal least squares algorithm.
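Orthogonal matching pursuit, the SC algorithm singled out above, is short enough to sketch. This is the generic greedy algorithm; the random dictionary and sparsity level are assumptions chosen for the example:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy sparse coding: pick the atom most correlated with the residual,
    then re-fit all selected atoms jointly by least squares."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    sol = np.zeros(0)
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef[support] = sol
    return coef

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
true_coef = np.zeros(256)
true_coef[[10, 99, 200]] = [1.5, -2.0, 0.8]
y = D @ true_coef                         # a 3-sparse signal over D
coef = omp(D, y, n_nonzero=3)
```

The per-iteration cost is one correlation and one small least-squares solve, which is the source of the speed advantage reported over the orthogonal least squares approach to RBF network design.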
Why a New Code for Novae Evolution and Mass Transfer in Binaries?
Directory of Open Access Journals (Sweden)
G. Shaviv
2015-02-01
Full Text Available One of the most interesting problems in cataclysmic variables is their long-time-scale evolution, which is also very important in the search for the progenitors of SNe Ia. The classical approach to overcome this problem in simulations of nova evolution is to assume (1) a rate of mass transfer that is constant in time, and (2) a mass transfer rate that does not vary over the lifetime of the nova, even when many eruptions are considered. Here we show that these assumptions are valid only for a single thermonuclear flash, and such a calculation cannot be the basis for extrapolating the behavior over many flashes. In particular, such a calculation cannot be used to predict under what conditions an accreting WD may reach the Chandrasekhar mass and collapse. We report on a new code to attack this problem. The basic idea is to create two parallel processes, one calculating the mass-losing star and the other the accreting white dwarf. The two processes communicate continuously with each other and follow the time-dependent mass loss.
Borges, J.
2014-01-01
A binary linear code C is a Z2-double cyclic code if the set of coordinates can be partitioned into two subsets such that any simultaneous cyclic shift of the coordinates of both subsets leaves the code invariant. These codes can be identified with submodules of the Z2[x]-module Z2[x]/(x^r − 1) × Z2[x]/(x^s − 1). We determine the structure of Z2-double cyclic codes, giving the generator polynomials of these codes. The related polynomial representation of Z2-double cyclic codes and their duals, and the relation...
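The defining shift invariance can be illustrated concretely (a toy sketch with hypothetical parameters r = 2, s = 3 and an arbitrary generator, not an example from the paper):

```python
import itertools

def double_shift(c, r):
    """Simultaneous cyclic shift of the first r and the remaining s coordinates."""
    a, b = c[:r], c[r:]
    return (a[-1:] + a[:-1]) + (b[-1:] + b[:-1])

def gf2_span(vectors):
    """All GF(2) linear combinations of the given tuples."""
    out = set()
    for coeffs in itertools.product([0, 1], repeat=len(vectors)):
        out.add(tuple(sum(ci * vi for ci, vi in zip(coeffs, col)) % 2
                      for col in zip(*vectors)))
    return out

# Toy code with r = 2, s = 3: span the full shift orbit of one generator,
# so the code is double-cyclic by construction (orbit length lcm(2, 3) = 6).
r, s = 2, 3
v = (1, 0, 1, 1, 0)
gens = []
for _ in range(6):
    gens.append(v)
    v = double_shift(v, r)
C = gf2_span(gens)
# Any simultaneous cyclic shift of both blocks leaves the code invariant.
assert all(double_shift(c, r) in C for c in C)
```

Because the double shift is a coordinate permutation (hence GF(2)-linear), the span of a shift-closed generating set is automatically shift-invariant, which is what the assertion checks.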
Random Linear Network Coding is Key to Data Survival in Highly Dynamic Distributed Storage
DEFF Research Database (Denmark)
Sipos, Marton A.; Fitzek, Frank; Roetter, Daniel Enrique Lucani
2015-01-01
Distributed storage solutions have become widespread due to their ability to store large amounts of data reliably across a network of unreliable nodes, by employing repair mechanisms to prevent data loss. Conventional systems rely on static designs with a central control entity to oversee and control the repair process. Given the large costs for maintaining and cooling large data centers, our work proposes and studies the feasibility of a fully decentralized system that can store data even on unreliable and, sometimes, unavailable mobile devices. This imposes new challenges on the design as the number of available nodes varies greatly over time and keeping track of the system's state becomes unfeasible. As a consequence, conventional erasure correction approaches are ill-suited for maintaining data integrity. In this highly dynamic context, random linear network coding (RLNC) provides...
International Nuclear Information System (INIS)
Lundsager, P.; Krenk, S.
1975-08-01
The static and dynamic response of a cylindrical/spherical containment to a Boeing 720 impact is computed using three different linear elastic computer codes: FINEL, SAP and STARDYNE. Stress and displacement fields are shown together with time histories for a point in the impact zone. The main conclusions from this study are: - In this case the maximum dynamic load factors for stresses and displacements were close to 1, but a static analysis alone is not fully sufficient. - More realistic load time histories should be considered. - The main effects seem to be local; the present study does not indicate general collapse from elastic stresses alone. - Further study of material properties at high strain rates is needed. (author)
QR code-based non-linear image encryption using Shearlet transform and spiral phase transform
Kumar, Ravi; Bhaduri, Basanta; Hennelly, Bryan
2018-02-01
In this paper, we propose a new quick response (QR) code-based non-linear technique for image encryption using the Shearlet transform (ST) and the spiral phase transform. The input image is first converted into a QR code and then scrambled using the Arnold transform. The scrambled image is then decomposed into five coefficients using the ST, and the first Shearlet coefficient, C1, is interchanged with a security key before performing the inverse ST. The output after the inverse ST is then modulated with a random phase mask and further spiral phase transformed to get the final encrypted image. The first coefficient, C1, is used as a private key for decryption. The sensitivity of the security keys is analysed in terms of correlation coefficient and peak signal-to-noise ratio. The robustness of the scheme is also checked against various attacks such as noise, occlusion and special attacks. Numerical simulation results are shown in support of the proposed technique, and an optoelectronic set-up for encryption is also proposed.
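The Arnold transform scrambling step can be sketched in a few lines (a toy 4×4 example for illustration; the image size and iteration count here are hypothetical, not the paper's settings):

```python
import numpy as np

def arnold(img, iters):
    """Arnold cat-map scrambling of a square n x n image:
    pixel (x, y) moves to ((x + y) mod n, (x + 2y) mod n)."""
    n = img.shape[0]
    out = img
    for _ in range(iters):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

img = np.arange(16).reshape(4, 4)
scrambled = arnold(img, 1)
# The map is a permutation of pixels and is periodic: for n = 4 the
# period is 3, so three iterations restore the original image.
assert not np.array_equal(scrambled, img)
assert np.array_equal(arnold(img, 3), img)
```

The periodicity is what makes the scramble invertible: descrambling is just iterating the map until the period is completed.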
International Nuclear Information System (INIS)
Mota, F.; Ortiz, C. J.; Vila, R.
2012-01-01
The Irradiation Experimental Area of TechnoFusion will emulate the extreme irradiation conditions of fusion in materials by means of three ion accelerators: one used for self-implantation of heavy ions (Fe, Si, C, ...) to emulate the displacement damage induced by fusion neutrons, and the other two for light ions (H and He) to emulate the transmutation induced by fusion neutrons. This laboratory will play an essential role in the selection of functional materials for the DEMO reactor, since it will allow the effects of neutron radiation on fusion materials to be reproduced. Ion irradiation produces little or no residual radioactivity, allowing handling of samples without the need for special precautions. Currently, two different methods are used to calculate the primary displacement damage induced by neutron irradiation and by ion irradiation. On one hand, the displacement damage dose induced by neutrons is calculated with the NRT model, based on the electronic screening theory of Lindhard; this methodology has been in common use since 1975. On the other hand, the experimental research community commonly uses the SRIM code to calculate the primary displacement damage dose induced by ion irradiation. The two methodologies therefore have nothing in common. However, if we want to design ion irradiation experiments capable of emulating the effect of fusion neutrons in materials, it is necessary to develop comparable damage calculation methodologies for both kinds of radiation. This would allow us to better define the ion irradiation parameters (ion species, current, ion energy, dose, etc.) required to emulate a specific neutron irradiation environment. Therefore, our main objective was to find a way to calculate the primary displacement damage induced by neutron irradiation and by ion irradiation starting from the same point, that is, the PKA spectrum.
In order to emulate the neutron irradiation that would prevail under fusion conditions, two approaches are contemplated: a) on
Modified linear predictive coding approach for moving target tracking by Doppler radar
Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao
2016-07-01
Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of Doppler radar. Based on time-frequency analysis of the received echo, the proposed approach first estimates the noise statistical parameters in real time and constructs an adaptive filter to intelligently suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which helps improve the resolution of the target localization result. Compared with the traditional LPC method, which decides the extension data length empirically, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjusts the optimum extension data length intelligently. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments is conducted to illustrate the validity and performance of the proposed techniques.
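The data-extension idea, fitting an autoregressive predictor and extrapolating beyond the observed samples, can be sketched as follows (a generic least-squares LPC sketch, not the authors' modified algorithm with its error array):

```python
import numpy as np

def lpc_extend(x, order, n_extra):
    """Fit x[t] ~ a1*x[t-1] + ... + a_order*x[t-order] by least squares,
    then extrapolate n_extra samples beyond the observed data."""
    # Regression matrix: row t holds [x[t-1], ..., x[t-order]]
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    out = list(x)
    for _ in range(n_extra):
        # Predict the next sample from the most recent `order` samples
        out.append(float(np.dot(a, out[-1:-order - 1:-1])))
    return np.array(out)

# A sinusoid satisfies an exact order-2 recurrence, so the extension
# reproduces the continuation of the signal.
x = np.sin(0.3 * np.arange(50))
ext = lpc_extend(x, 2, 5)
assert np.allclose(ext, np.sin(0.3 * np.arange(55)), atol=1e-6)
```

In the radar setting, extending the echo this way lengthens the effective observation window, which is what sharpens the frequency (and hence localization) resolution.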
Hu, Jun; Hadid, Hamda Ben; Henry, Daniel; Mojtabi, Abdelkader
Temporal and spatio-temporal instabilities of binary liquid films flowing down an inclined uniformly heated plate with Soret effect are investigated by using the Chebyshev collocation method to solve the full system of linear stability equations. Seven dimensionless parameters, i.e. the Kapitza, Galileo, Prandtl, Lewis, Soret, Marangoni, and Biot numbers (Ka, G, Pr, L, S, Ma, Bi), are used to control the flow system. In the case of pure spanwise perturbations, thermocapillary S- and P-modes are obtained. It is found that the most dangerous modes are stationary for positive Soret numbers (S > 0) and oscillatory for negative ones; the mode that is most dangerous at S = 0 remains so for S > 0 and even merges with the long-wave S-mode. In the case of streamwise perturbations, a long-wave surface mode (H-mode) is also obtained. From the neutral curves, it is found that larger Soret numbers make the film flow more unstable, as do larger Marangoni numbers. The increase of these parameters leads to the merging of the long-wave H- and S-modes, making the situation long-wave unstable for any Galileo number. It also strongly influences the short-wave P-mode, which becomes the most critical for large enough Galileo numbers. Furthermore, from the boundary curves between absolute and convective instabilities (AI/CI) calculated for both the long-wave instability (S- and H-modes) and the short-wave instability (P-mode), it is shown that for small Galileo numbers the AI/CI boundary curves are determined by the long-wave instability, while for large Galileo numbers they are determined by the short-wave instability.
International Nuclear Information System (INIS)
Lu, Peng; Zhou, Jianzhong; Wang, Chao; Qiao, Qi; Mo, Li
2015-01-01
Highlights: • STHGS problem is decomposed into two parallel sub-problems of UC and ELD. • Binary-coded BCO is used to solve the UC sub-problem with 0–1 discrete variables. • Real-coded BCO is used to solve the ELD sub-problem with continuous variables. • Some heuristic repairing strategies are designed to handle various constraints. • The STHGS of the Xiluodu and Xiangjiaba cascade stations is solved by IB-RBCO. - Abstract: Short-term hydro generation scheduling (STHGS) of cascade hydropower stations is a typical nonlinear mixed-integer optimization problem that minimizes the total water consumption while simultaneously meeting the grid requirements and other hydraulic and electrical constraints. In this paper, the STHGS problem is decomposed into two parallel sub-problems of unit commitment (UC) and economic load dispatch (ELD), and an improved binary-real coded bee colony optimization (IB-RBCO) algorithm is proposed to solve them. Firstly, the improved binary-coded BCO is used to solve the UC sub-problem with 0–1 discrete variables, and a heuristic repairing strategy for unit state constraints is applied to generate a feasible unit commitment schedule. Then, the improved real-coded BCO is used to solve the ELD sub-problem with continuous variables, and an effective method is introduced to handle various unit operation constraints. In particular, the new updating strategy of the DE/best/2/bin method with a dynamic parameter control mechanism is applied to the real-coded BCO to improve the search ability of IB-RBCO. Finally, to verify the feasibility and effectiveness of the proposed IB-RBCO method, it is applied to solve the STHGS problem of the Xiluodu and Xiangjiaba cascade hydropower stations, and the simulation results are compared with those of other intelligent algorithms. The simulation results demonstrate that the proposed IB-RBCO method can get higher-quality solutions with less water consumption and shorter calculation time when facing the complex STHGS problem.
Xiong, Chenrong; Yan, Zhiyuan
2014-10-01
Non-binary low-density parity-check (LDPC) codes have some advantages over their binary counterparts, but unfortunately their decoding complexity is a significant challenge. The iterative hard- and soft-reliability based majority-logic decoding algorithms are attractive for non-binary LDPC codes, since they involve only finite field additions and multiplications as well as integer operations and hence have significantly lower complexity than other algorithms. In this paper, we propose two improvements to the majority-logic decoding algorithms. Instead of the accumulation of reliability information in the existing majority-logic decoding algorithms, our first improvement is a new reliability information update. The new update not only results in better error performance and fewer iterations on average, but also further reduces computational complexity. Since existing majority-logic decoding algorithms tend to have a high error floor for codes whose parity check matrices have low column weights, our second improvement is a re-selection scheme, which leads to much lower error floors, at the expense of more finite field operations and integer operations, by identifying periodic points, re-selecting intermediate hard decisions, and changing reliability information.
Massey, J. L.
1976-01-01
Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
Directory of Open Access Journals (Sweden)
Hazlehurst Benny
2014-03-01
This paper offers a critique of the ‘binary’ nature of much biblical interpretation and ethical belief in the Church, rejecting simplistic ‘either-or’ approaches to both. Instead there is offered an interpretation of key biblical texts through the lenses of circumstances, needs and motivation. It is argued that, when these factors are taken into account, even for Evangelicals, there is no longer a substantive biblical case against the acceptance of faithful, loving same-sex partnerships and the development of a positive Christian ethic for lesbian, gay, bisexual and transgender people. At the very least, the complexity of the interpretive task must lead to greater openness to and acceptance of those from whom we differ.
Mendez, Rene A.; Claveria, Ruben M.; Orchard, Marcos E.; Silva, Jorge F.
2017-11-01
We present orbital elements and mass sums for 18 visual binary stars of spectral types B to K (five of which are new orbits) with periods ranging from 20 to more than 500 yr. For two double-line spectroscopic binaries with no previous orbits, the individual component masses, using combined astrometric and radial velocity data, have a formal uncertainty of ~0.1 M⊙. Adopting published photometry and trigonometric parallaxes, plus our own measurements, we place these objects on an H-R diagram and discuss their evolutionary status. These objects are part of a survey to characterize the binary population of stars in the Southern Hemisphere using the SOAR 4 m telescope + HRCAM at CTIO. Orbital elements are computed using a newly developed Markov chain Monte Carlo (MCMC) algorithm that delivers maximum-likelihood estimates of the parameters, as well as posterior probability density functions that allow us to evaluate the uncertainty of our derived parameters in a robust way. For spectroscopic binaries, using our approach, it is possible to derive a self-consistent parallax for the system from the combined astrometric and radial velocity data ("orbital parallax"), which compares well with the trigonometric parallaxes. We also present a mathematical formalism that allows a dimensionality reduction of the feature space from seven to three search parameters (or from 10 to seven dimensions, including parallax, in the case of spectroscopic binaries with astrometric data), which makes it possible to explore a smaller number of parameters in each case, improving the computational efficiency of our MCMC code. Based on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação (MCTI) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).
Linear-time non-malleable codes in the bit-wise independent tampering model
R.J.F. Cramer (Ronald); I.B. Damgård (Ivan); N.M. Döttling (Nico); I. Giacomelli (Irene); C. Xing (Chaoping)
2017-01-01
Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m′
Polynomial weights and code constructions
DEFF Research Database (Denmark)
Massey, J; Costello, D; Justesen, Jørn
1973-01-01
For any nonzero element c of a general finite field GF(q), it is shown that the polynomials (x - c)^i, i = 0, 1, 2, ..., have the "weight-retaining" property that any linear combination of these polynomials with coefficients in GF(q) has Hamming weight at least as great as that of the minimum-degree polynomial included. This fundamental property is then used as the key to a variety of code constructions including 1) a simplified derivation of the binary Reed-Muller codes and, for any prime p greater than 2, a new extensive class of p-ary "Reed-Muller codes," 2) a new class of "repeated-root" cyclic codes... of long-constraint-length binary convolutional codes derived from 2^r-ary Reed-Solomon codes, and 6) a new class of q-ary "repeated-root" constacyclic codes with an algebraic decoding algorithm.
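The weight-retaining property can be checked exhaustively for the binary case (c = 1, so x − c = x + 1 over GF(2)) and small degrees (an illustrative brute-force check, not part of the paper's proof):

```python
import itertools

def poly_mul(p, q):
    """Multiply polynomials over GF(2), coefficients ascending by degree."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= pi & qj
    return r

def xc_power(i):
    """(x + 1)^i over GF(2) (note -1 = 1 in GF(2))."""
    p = [1]
    for _ in range(i):
        p = poly_mul(p, [1, 1])
    return p

def weight(p):
    return sum(p)

# For every nonzero GF(2) combination of (x+1)^0 .. (x+1)^5, the Hamming
# weight is at least that of the minimum-degree polynomial included.
polys = [xc_power(i) for i in range(6)]
for coeffs in itertools.product([0, 1], repeat=6):
    if not any(coeffs):
        continue
    n = max(len(p) for p, c in zip(polys, coeffs) if c)
    combo = [0] * n
    for p, c in zip(polys, coeffs):
        if c:
            for k, pk in enumerate(p):
                combo[k] ^= pk
    i_min = min(i for i, c in enumerate(coeffs) if c)
    assert weight(combo) >= weight(polys[i_min])
```

For example, (x+1)^3 + (x+1)^4 = x·(x+1)^3 still has weight 4, matching the weight of the minimum-degree term (x+1)^3.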
Biancalani, A.; Bottino, A.; Ehrlacher, C.; Grandgirard, V.; Merlo, G.; Novikau, I.; Qiu, Z.; Sonnendrücker, E.; Garbet, X.; Görler, T.; Leerink, S.; Palermo, F.; Zarzoso, D.
2017-06-01
The linear properties of the geodesic acoustic modes (GAMs) in tokamaks are investigated by means of the comparison of analytical theory and gyrokinetic numerical simulations. The dependence on the value of the safety factor, finite-orbit-width of the ions in relation to the radial mode width, magnetic-flux-surface shaping, and electron/ion mass ratio are considered. Nonuniformities in the plasma profiles (such as density, temperature, and safety factor), electro-magnetic effects, collisions, and the presence of minority species are neglected. Also, only linear simulations are considered, focusing on the local dynamics. We use three different gyrokinetic codes: the Lagrangian (particle-in-cell) code ORB5, the Eulerian code GENE, and the semi-Lagrangian code GYSELA. One of the main aims of this paper is to provide a detailed comparison of the numerical results and analytical theory in the regimes where this is possible. This helps to better understand the behavior of the linear GAM dynamics in these different regimes and the behavior of the codes, which is crucial in view of future work where more physics is present, and to establish the regimes of validity of each specific analytical dispersion relation.
Ishitani, Terry T.
2010-01-01
This study applied hierarchical linear modeling to investigate the effect of congruence on intrinsic and extrinsic aspects of job satisfaction. Particular focus was given to differences in job satisfaction by gender and by Holland's first-letter codes. The study sample included nationally represented 1462 female and 1280 male college graduates who…
Decoding linear error-correcting codes up to half the minimum distance with Gröbner bases
Bulygin, S.; Pellikaan, G.R.; Sala, M.; Mora, T.; Perret, L.; Sakata, S.; Traverso, C.
2009-01-01
In this short note we show how one can decode linear error-correcting codes up to half the minimum distance via solving a system of polynomial equations over a finite field. We also explicitly present the reduced Gröbner basis for the system considered.
Dattoli, Giuseppe
2005-01-01
The coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. A code devoted to the analysis of this type of problem should be fast and reliable: conditions that are usually hard to achieve at the same time. In the past, codes based on Lie algebraic techniques have been very efficient for treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treat CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of the non-linear contribution due to wake field effects. The proposed solution method exploits an algebraic technique, using exponential operators implemented numerically in C++. We show that the integration procedure is capable of reproducing the onset of an instability and effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, parametric studies a...
Development of a relativistic Particle In Cell code PARTDYN for linear accelerator beam transport
Energy Technology Data Exchange (ETDEWEB)
Phadte, D., E-mail: deepraj@rrcat.gov.in [LPD, Raja Ramanna Centre for Advanced Technology, Indore 452013 (India); Patidar, C.B.; Pal, M.K. [MAASD, Raja Ramanna Centre for Advanced Technology, Indore (India)
2017-04-11
A relativistic Particle In Cell (PIC) code, PARTDYN, is developed for the beam dynamics simulation of z-continuous and bunched beams. The code is implemented in MATLAB using its MEX functionality, which allows both ease of development and higher performance similar to a compiled language like C. The beam dynamics calculations carried out by the code are compared with analytical results and with other well-developed codes like PARMELA and BEAMPATH. The effect of a finite number of simulation particles on the emittance growth of intense beams has been studied. Corrections to the RF cavity field expressions were incorporated in the code so that the fields could be calculated correctly. The deviations of the beam dynamics results between PARTDYN and BEAMPATH for a cavity driven in zero-mode are discussed. Beam dynamics studies of the Low Energy Beam Transport (LEBT) line using PARTDYN are presented.
Efficient Dual Domain Decoding of Linear Block Codes Using Genetic Algorithms
Directory of Open Access Journals (Sweden)
Ahmed Azouaoui
2012-01-01
A computationally efficient algorithm for decoding block codes is developed using a genetic algorithm (GA). The proposed algorithm uses the dual code, in contrast to the existing genetic decoders in the literature that use the code itself; this new approach reduces the complexity of decoding high-rate codes. We simulated our algorithm over various transmission channels. The performance of this algorithm is investigated and compared with competing decoding algorithms, including those of Maini and Shakeel. The results show that the proposed algorithm gives large gains over the Chase-2 decoding algorithm and reaches the performance of OSD-3 for some quadratic residue (QR) codes. Further, we define a new crossover operator that exploits domain-specific information and compare it with uniform and two-point crossover. The complexity of this algorithm is also discussed and compared to other algorithms.
LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources
Directory of Open Access Journals (Sweden)
Javier Garcia-Frias
2005-05-01
We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes) for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
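A systematic LDGM code has generator matrix G = [I | P] with a sparse parity part P, so encoding is just a sparse matrix product over GF(2) (a minimal sketch; the toy dimensions and density 0.3 are assumptions, not parameters from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 8, 4                                  # message bits, parity bits
P = (rng.random((k, m)) < 0.3).astype(int)   # sparse parity part of G = [I | P]

def ldgm_encode(msg):
    """Systematic LDGM encoding over GF(2): codeword = [msg | msg @ P mod 2]."""
    return np.concatenate([msg, msg @ P % 2])

msg = rng.integers(0, 2, k)
cw = ldgm_encode(msg)
# Systematic: the first k bits of the codeword are the message itself.
assert cw.shape == (k + m,) and np.array_equal(cw[:k], msg)
```

Because the generator matrix is sparse, encoding cost grows with the number of ones in P rather than with k·m, which is what makes LDGM encoders attractive at large block lengths.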
Holmes, Tyson H; Li, Shou-Hua; McCann, David J
2016-11-23
The design of pharmacological trials for management of substance use disorders is shifting toward outcomes of successful individual-level behavior (abstinence or no heavy use). While binary success/failure analyses are common, McCann and Li (CNS Neurosci Ther 2012; 18: 414-418) introduced "number of beyond-threshold weeks of success" (NOBWOS) scores to avoid dichotomized outcomes. NOBWOS scoring employs an efficacy "hurdle," with values reflecting duration of success. Here, we evaluate NOBWOS scores rigorously. Formal analysis of the mathematical structure of NOBWOS scores is followed by simulation studies spanning diverse conditions to assess the operating characteristics of five linear-rank tests on NOBWOS scores. The simulations include an assessment of Fisher's exact test applied to the hurdle component. On average, statistical power was approximately equal across the five linear-rank tests. Under none of the conditions examined did Fisher's exact test exhibit greater statistical power than any of the linear-rank tests. These linear-rank tests provide good Type I and Type II error control for comparing distributions of NOBWOS scores between groups (e.g. active vs. placebo). All methods were applied in re-analyses of data from four clinical trials of differing lengths and substances of abuse. The linear-rank tests agreed across all trials in rejecting (or not) their null hypothesis (equality of distributions) at p ≤ 0.05. © The Author(s) 2016.
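A linear-rank test of the kind evaluated here can be sketched without statistical libraries, e.g. the Wilcoxon rank-sum statistic with a normal approximation (a generic sketch; the paper's five specific rank-score choices are not reproduced, and the variance below omits a tie correction):

```python
import math

def midranks(values):
    """Ranks 1..N with ties sharing their average (mid-)rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def rank_sum_test(x, y):
    """Wilcoxon rank-sum test (one member of the linear-rank family),
    two-sided p-value via the normal approximation, no tie correction."""
    n1, n2 = len(x), len(y)
    r = midranks(list(x) + list(y))
    w = sum(r[:n1])                    # rank sum of the first group
    mean = n1 * (n1 + n2 + 1) / 2
    var = n1 * n2 * (n1 + n2 + 1) / 12
    z = (w - mean) / math.sqrt(var)
    return math.erfc(abs(z) / math.sqrt(2))

# Clearly separated groups reject; identical groups do not.
assert rank_sum_test([5, 6, 7, 8, 9], [0, 1, 2, 3, 4]) < 0.05
assert abs(rank_sum_test([1, 2, 3], [1, 2, 3]) - 1.0) < 1e-12
```

Because NOBWOS scores pile up at the hurdle value, the midrank handling of ties is the part of such a test that matters most in practice.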
Rasouli, Zolaikha; Ghavami, Raouf
2016-08-01
Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives used as flavor enhancers. The current study is, for the first time, devoted to the application of partial least squares (PLS-1), partial robust M-regression (PRM) and feed-forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous detection of binary and ternary mixtures of VA, VAI and SIA, using data extracted directly from UV spectra with overlapped peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration ranges of 0.61-20.99 [LOD = 0.12], 0.67-23.19 [LOD = 0.13] and 0.73-25.12 [LOD = 0.15] μg mL−1 for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs with seven and three levels for each factor for the binary and ternary mixtures, respectively. The results of this study reveal that the PLS-1 and PRM methods are similar in their ability to predict each binary mixture. The resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied to describe the spectra from the acid-base titration systems of each individual compound, i.e. to resolve the complex overlapping spectra as well as to interpret the extracted spectral and concentration profiles of any pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species, and their corresponding dissociation constants were subsequently derived. Finally, FFNNs were used to detect the active compounds in real and spiked water samples.
Directory of Open Access Journals (Sweden)
Paccaud Fred
2004-04-01
Abstract Background We sought to improve upon previously published statistical modeling strategies for binary classification of dyslipidemia for general population screening purposes, based on the waist-to-hip circumference ratio and body mass index anthropometric measurements. Methods Study subjects were participants in WHO-MONICA population-based surveys conducted in two Swiss regions. Outcome variables were based on the total serum cholesterol to high-density lipoprotein cholesterol ratio. The other potential predictor variables were gender, age, current cigarette smoking, and hypertension. The models investigated were: (i) linear regression; (ii) logistic classification; (iii) regression trees; (iv) classification trees ((iii) and (iv) are collectively known as "CART"). Binary classification performance of the region-specific models was externally validated by classifying the subjects from the other region. Results Waist-to-hip circumference ratio and body mass index remained modest predictors of dyslipidemia. Correct classification rates for all models were 60–80%, with marked gender differences. Gender-specific models provided only small gains in classification. The external validations provided assurance about the stability of the models. Conclusions There were no striking differences between either the algebraic (i, ii) vs. non-algebraic (iii, iv), or the regression (i, iii) vs. classification (ii, iv) modeling approaches. Anticipated advantages of CART vs. simple additive linear and logistic models were less than expected in this particular application with a relatively small set of predictor variables. CART models may be more useful when considering main effects and interactions between larger sets of predictor variables.
National Research Council Canada - National Science Library
Yablonovitch, Eli
2000-01-01
The equipment purchased under this grant has permitted UCLA to purchase a number of broad-band optical components, including especially some unique code division multiplexing filters that permitted...
International Nuclear Information System (INIS)
Ramani, D.T.
1977-01-01
The 'INTRANS' system is a general-purpose computer code designed to perform linear and non-linear structural stress and deflection analysis of impacting or non-impacting nuclear reactor internals components coupled with the reactor vessel, shield building, and external as well as internal gapped spring support systems. This paper describes a general computational procedure for evaluating the dynamic response of reactor internals, discretised as a beam and lumped-mass structural system and subjected to external transient loads such as seismic and LOCA time-history forces. The procedure is outlined in the INTRANS code, which computes component flexibilities of a discrete lumped-mass planar model of reactor internals by idealising an assemblage of finite elements consisting of linear elastic beams with bending, torsional, and shear stiffnesses interacting with an external or internal, linear as well as non-linear, multi-gapped spring support system. The method of analysis is based on the displacement method, and the code uses the fourth-order Runge-Kutta numerical integration technique as the basis for the solution of the dynamic equilibrium equations of motion for the system. During the computation, the dynamic response of each lumped mass is calculated at a specific instant of time using the well-known step-by-step procedure: at any instant of time, the transient dynamic motions of the system are held stationary, based on the predicted motions and internal forces of the previous instant, from which the complete response at any time step of interest may then be computed. Using this iterative process, the relationship between motions and internal forces is satisfied step by step throughout the time interval.
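The fourth-order Runge-Kutta scheme named in the abstract is the classical one; a minimal scalar sketch (illustrative only, not the INTRANS implementation, which integrates a coupled multi-degree-of-freedom system):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate y' = y from t = 0 to 1; the exact answer is e.
t, y, h = 0.0, 1.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
assert abs(y - math.e) < 1e-7
```

In a structural code the same stepping is applied to the second-order equations of motion rewritten as a first-order system in displacements and velocities, with the spring-gap forces re-evaluated at each stage.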
Kondo, Yoshihisa; Yomo, Hiroyuki; Yamaguchi, Shinji; Davis, Peter; Miura, Ryu; Obana, Sadao; Sampei, Seiichi
This paper proposes multipoint-to-multipoint (MPtoMP) real-time broadcast transmission using network coding for ad-hoc networks like video game networks. We aim to achieve highly reliable MPtoMP broadcasting using IEEE 802.11 media access control (MAC), which does not include a retransmission mechanism. When each node detects packets from the other nodes in a sequence, the correctly detected packets are network-encoded, and the encoded packet is broadcast in the next sequence as a piggy-back for its native packet. To prevent an increase of overhead in each packet due to piggy-back packet transmission, the network coding vector for each node is exchanged between all nodes in the negotiation phase. Each user keeps using the same coding vector generated in the negotiation phase, and only coding information that represents which user signals are included in the network coding process is transmitted along with the piggy-back packet. Our simulation results show that the proposed method can provide higher reliability than other schemes using multipoint relay (MPR) or redundant transmissions such as forward error correction (FEC). We also implement the proposed method in a wireless testbed and show that it achieves high reliability in a real-world environment with a practical degree of complexity when installed on current wireless devices.
DEFF Research Database (Denmark)
Rahman, Imadur Mohamed; Marchetti, Nicola; Fitzek, Frank
2005-01-01
In this work, we have analyzed a joint spatial diversity and multiplexing transmission structure for a MIMO-OFDM system, where Orthogonal Space-Frequency Block Coding (OSFBC) is used across all spatial multiplexing branches. We have derived a BLAST-like non-linear Successive Interference Cancellation (SIC) receiver where the detection is done on a subcarrier-by-subcarrier basis, based on both Zero Forcing (ZF) and Minimum Mean Square Error (MMSE) nulling criteria for the system. In terms of Frame Error Rate (FER), the MMSE-based SIC receiver performs better than all other receivers compared in this paper. We have found that a linear two-stage receiver for the proposed system [1] performs very close to the non-linear receiver studied in this work. Finally, we compared the system performance in a spatially correlated scenario. It is found that a higher amount of spatial correlation at the transmitter...
Fokker-Planck code for the quasi-linear absorption of electron cyclotron waves in a tokamak plasma
International Nuclear Information System (INIS)
Meyer, R.L.; Giruzzi, G.; Krivenski, V.
1986-01-01
We present the solution of the kinetic equation describing the quasi-linear evolution of the electron momentum distribution function under the influence of electron cyclotron wave absorption, Coulomb collisions, and the dc electric field in a tokamak plasma. The solution of the quasi-linear equation is obtained numerically using a two-dimensional initial value code following an ADI scheme. Most emphasis is given to the fully non-linear and self-consistent problem, namely, the wave amplitude is evaluated at any instant and any point in space according to the actual damping. This is necessary since wave damping is a very sensitive function of the slope of the local momentum distribution function, because the resonance condition relates the electron momentum to the location of wave energy deposition. (orig.)
International Nuclear Information System (INIS)
Toprak, A. Emre; Guelay, F. Guelten; Ruge, Peter
2008-01-01
Determination of the seismic performance of existing buildings has become one of the key concerns in structural analysis after recent earthquakes (e.g. the Izmit and Duzce earthquakes in 1999, the Kobe earthquake in 1995 and the Northridge earthquake in 1994). Considering the need for precise assessment tools to determine seismic performance level, most earthquake-prone countries try to include performance-based assessment in their seismic codes. Recently, the Turkish Earthquake Code 2007 (TEC'07), which was put into effect in March 2007, also introduced linear and non-linear assessment procedures to be applied prior to building retrofitting. In this paper, a comparative study is performed on the code-based seismic assessment of RC buildings with linear static methods of analysis, selecting an existing RC building. The basic principles of the seismic performance evaluation procedures for existing RC buildings according to Eurocode 8 and TEC'07 are outlined and compared. The procedure is then applied to a real case-study building which was exposed to the 1998 Adana-Ceyhan earthquake in Turkey, a seismic action of Ms = 6.3 with a maximum ground acceleration of 0.28 g. It is a six-storey RC residential building with a total height of 14.65 m, composed of orthogonal frames, symmetrical in the y direction, and it does not have any significant structural irregularities. The rectangular plan dimensions are 16.40 m × 7.80 m = 127.90 m², with five spans in the x and two spans in the y direction. It was reported that the building had been moderately damaged during the 1998 earthquake, and a retrofitting process adding shear walls to the system was suggested by the authorities. The computations show that the linear methods of analysis using either Eurocode 8 or TEC'07 independently produce similar performance levels of collapse for the critical storey of the structure. The computed base shear value according to Eurocode is much higher
DEFF Research Database (Denmark)
Hundebøll, Martin; Pedersen, Morten Videbæk; Roetter, Daniel Enrique Lucani
2014-01-01
This work studies the potential and impact of the FRANC network coding protocol for delivering high quality Dynamic Adaptive Streaming over HTTP (DASH) in wireless networks. Although DASH aims to tailor the video quality rate based on the available throughput to the destination, it relies...
Development of flow network analysis code for block type VHTR core by linear theory method
International Nuclear Information System (INIS)
Lee, J. H.; Yoon, S. J.; Park, J. W.; Park, G. C.
2012-01-01
VHTR (Very High Temperature Reactor) is a high-efficiency nuclear reactor capable of generating hydrogen thanks to its high coolant temperature. A PMR (Prismatic Modular Reactor) type reactor consists of hexagonal prismatic fuel blocks and reflector blocks. The flow paths in the prismatic VHTR core consist of coolant holes, bypass gaps and cross gaps. Complicated flow paths are formed in the core since the coolant holes and bypass gaps are connected by the cross gaps. Distributed coolant is mixed in the core through the cross gaps, so the flow characteristics cannot be modeled as a simple parallel pipe system. Analyzing the core flow with CFD requires a lot of effort and takes a very long time. Hence, it is important to develop a code for VHTR core flow which can predict the core flow distribution quickly and accurately. In this study, a steady-state flow network analysis code is developed using a flow network algorithm. The developed flow network analysis code, named FLASH, was validated against experimental data and CFD simulation results. (authors)
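The linear-theory idea behind flow network solvers of this kind can be sketched for a toy two-path network. The loss coefficients are hypothetical, and the averaging of successive iterates is the standard damping used with the linear theory method:

```python
# Two parallel flow paths sharing inlet/outlet plena, with dP = K * Q**2.
# Linear theory replaces K*Q**2 by the linearized resistance (K*|Q|)*Q and
# iterates the resulting linear system, averaging successive iterates to
# damp the oscillation of the raw fixed-point map.
K = [2.0, 8.0]            # hypothetical loss coefficients for the two paths
Q_total = 1.0             # total coolant flow to distribute
Q = [0.5, 0.5]            # initial guess for the flow split
for _ in range(50):
    R = [K[i] * abs(Q[i]) for i in range(2)]          # linearized resistances
    # Equal pressure drop (R0*Q0 = R1*Q1) plus mass balance (Q0 + Q1 = Q_total)
    Q_new = [Q_total * R[1] / (R[0] + R[1]),
             Q_total * R[0] / (R[0] + R[1])]
    Q = [0.5 * (q_old + q_new) for q_old, q_new in zip(Q, Q_new)]
dp = [K[i] * Q[i] ** 2 for i in range(2)]             # converged pressure drops
```

The damped iteration converges to Q = [2/3, 1/3], the split at which both paths see the same pressure drop; a core-scale network adds many such loops plus cross-gap links, but the linearize-solve-average cycle is the same.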
International Nuclear Information System (INIS)
Cummins, J.D.
1965-02-01
With several white noise sources, the various transmission paths of a linear multivariable system may be determined simultaneously. This memorandum considers the restrictions on pseudo-random two-state sequences needed to effect simultaneous identification of several transmission paths and the consequent rejection of cross-coupled signals in linear multivariable systems. The conditions for simultaneous identification are established by an example, which shows that the integration time required is large (i.e. tends to infinity), as it does when white noise sources are used. (author)
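The simultaneous-identification idea can be sketched with two independent binary test sequences driving two FIR transmission paths; cross-correlating the summed output with each input isolates that path. Random ±1 sequences stand in here for the memorandum's pseudo-random two-state sequences, and the impulse responses are made up:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
u1 = rng.choice([-1.0, 1.0], n)     # two independent two-state test signals
u2 = rng.choice([-1.0, 1.0], n)
h1 = np.array([1.0, 0.5, 0.25])     # hypothetical transmission path 1
h2 = np.array([-0.8, 0.3, 0.1])     # hypothetical transmission path 2
y = np.convolve(u1, h1)[:n] + np.convolve(u2, h2)[:n]

# Cross-correlate the single output with each input: the other, uncorrelated
# path averages out, so both paths are identified simultaneously.
est_h1 = np.array([np.dot(y[k:], u1[:n - k]) / (n - k) for k in range(3)])
est_h2 = np.array([np.dot(y[k:], u2[:n - k]) / (n - k) for k in range(3)])
```

The residual cross-coupling decays only as 1/sqrt(n), which mirrors the memorandum's point that full rejection of cross-coupled signals demands an integration time tending to infinity.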
Czech Academy of Sciences Publication Activity Database
Dragoescu, D.; Teodorescu, M.; Barhala, A.; Wichterle, Ivan
2003-01-01
Vol. 68, No. 7 (2003), pp. 1175-1192 ISSN 0010-0765 R&D Projects: GA ČR GA104/03/1555 Institutional research plan: CEZ:AV0Z4072921 Keywords: group contribution model * thermodynamics * chloroalkanes-linear ketones Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 1.041, year: 2003
Validation of favor code linear elastic fracture solutions for finite-length flaw geometries
International Nuclear Information System (INIS)
Dickson, T.L.; Keeney, J.A.; Bryson, J.W.
1995-01-01
One of the current tasks within the US Nuclear Regulatory Commission (NRC)-funded Heavy Section Steel Technology (HSST) Program at Oak Ridge National Laboratory (ORNL) is the continuing development of the FAVOR (Fracture Analysis of Vessels: Oak Ridge) computer code. FAVOR performs structural integrity analyses of embrittled nuclear reactor pressure vessels (RPVs) with stainless steel cladding, to evaluate compliance with the applicable regulatory criteria. Since the initial release of FAVOR, the HSST program has continued to enhance the capabilities of the code. ABAQUS, a nuclear quality assurance certified (NQA-1) general multidimensional finite element code with fracture mechanics capabilities, was used to generate a database of stress-intensity-factor influence coefficients (SIFICs) for a range of axially and circumferentially oriented semielliptical inner-surface flaw geometries applicable to RPVs with an internal radius (Ri) to wall thickness (w) ratio of 10. This database of SIFICs has been incorporated into a development version of FAVOR, providing it with the capability to perform deterministic and probabilistic fracture analyses of RPVs subjected to transients, such as pressurized thermal shock (PTS), for various flaw geometries. This paper discusses the SIFIC database, comparisons with other investigators, and some of the benchmark verification problem specifications and solutions.
Directory of Open Access Journals (Sweden)
Ririn Kusumawati
2016-05-01
In the classification stage, using a Hidden Markov Model, the voice signal is analyzed to find the maximum-likelihood match among the recognizable classes. The parameters obtained from the modeling are compared against the speech of Arabic speakers. The test results show that classification with Hidden Markov Models and Linear Predictive Coding feature extraction achieves an average accuracy of 78.6% for test data at a sampling frequency of 8,000 Hz, 80.2% at 22,050 Hz, and 79% at 44,100 Hz.
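The Linear Predictive Coding feature extraction used in this kind of pipeline can be sketched with the autocorrelation method and the Levinson-Durbin recursion. The AR(2) test signal below is an illustrative stand-in for a speech frame:

```python
import numpy as np

def lpc(signal, order):
    # Autocorrelation method + Levinson-Durbin recursion for LPC coefficients.
    n = len(signal)
    r = [float(np.dot(signal[:n - k], signal[k:])) for k in range(order + 1)]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]                       # prediction error energy
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err               # reflection coefficient
        new_a = a.copy()
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)
    return a, err

# Synthesize an AR(2) "speech-like" frame with known coefficients.
rng = np.random.default_rng(0)
e = rng.standard_normal(4096)
x = np.zeros(4096)
for t in range(2, 4096):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + e[t]
a, err = lpc(x, 2)   # prediction polynomial: expect a ≈ [1, -0.5, 0.3]
```

In a recognizer, the per-frame coefficient vectors (or cepstra derived from them) would form the observation sequence scored by the HMM.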
International Nuclear Information System (INIS)
Malatara, G.; Kappas, K.; Sphiris, N.
1994-01-01
In this work, the Monte Carlo EGS4 code was used to simulate radiation transport through linear accelerators to produce and score the energy spectra and angular distributions of 6, 12, 15 and 25 MeV bremsstrahlung photons exiting from different accelerator treatment heads. The energy spectra were used as input for a convolution-method program to calculate the tissue-maximum ratio in water. 100,000 histories were recorded in the scoring plane for each simulation. The validity of the Monte Carlo simulation and the precision of the calculated spectra were verified experimentally and found to be in good agreement. We believe that accurate simulation of the different components of the linear accelerator head is very important for the precision of the results. The results of the Monte Carlo and convolution methods can be compared with experimental data for verification, and they are powerful and practical tools for generating accurate spectra and dosimetric data. (authors)
Non-linear heat transfer computer code by finite element method
International Nuclear Information System (INIS)
Nagato, Kotaro; Takikawa, Noboru
1977-01-01
The computer code THETA-2D, for the calculation of temperature distributions by the two-dimensional finite element method, was developed for the analysis of heat transfer in high-temperature structures. Numerical experiments were performed on the numerical integration of the differential equation of heat conduction. The Runge-Kutta method produced an unstable solution; a stable solution was obtained by the β method with a β value of 0.35. In high-temperature structures, radiative heat transfer cannot be neglected. To introduce a radiative heat transfer term, a functional neglecting radiative transfer was first derived; the radiative term was then added after discretization by the variational method. Five model calculations were carried out with the code. A calculation of steady heat conduction was performed: with an estimated initial temperature of 1,000 °C, a reasonable heat balance was obtained. In the steady-unsteady temperature calculation, the time integral by THETA-2D turned out to underestimate the enthalpy change. With a one-dimensional model, the temperature distribution in a structure whose heat conductivity depends on temperature was calculated. A calculation with a model containing an internal void was performed. Finally, a model calculation for a complex system was carried out. (Kato, T.)
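The stability contrast reported in this abstract can be reproduced on a toy 1-D conduction problem. The abstract does not give THETA-2D's exact definition of β, so here β is assumed to weight the implicit side of the time step, and λ = Δt·α/Δx² is deliberately set at twice the explicit stability limit of 0.5:

```python
import numpy as np

def beta_step(T, beta, lam, A, I):
    # One beta-method step: (I - beta*lam*A) T_new = (I + (1 - beta)*lam*A) T
    return np.linalg.solve(I - beta * lam * A, (I + (1 - beta) * lam * A) @ T)

n = 20
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # 1-D conduction stencil
I = np.eye(n)
lam = 1.0           # dt*alpha/dx**2, twice the explicit (beta = 0) limit of 0.5

T_beta = np.zeros(n)
T_beta[n // 2] = 1000.0             # initial hot spot
T_expl = T_beta.copy()
for _ in range(200):
    T_beta = beta_step(T_beta, 0.35, lam, A, I)   # beta = 0.35: stays bounded
    T_expl = beta_step(T_expl, 0.0, lam, A, I)    # fully explicit: blows up
```

At this λ the explicit scheme amplifies the highest spatial mode every step, while β = 0.35 damps it, consistent with the stable solution the authors report for that weighting.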
Collett, David
2002-01-01
INTRODUCTION Some Examples The Scope of this Book Use of Statistical Software STATISTICAL INFERENCE FOR BINARY DATA The Binomial Distribution Inference about the Success Probability Comparison of Two Proportions Comparison of Two or More Proportions MODELS FOR BINARY AND BINOMIAL DATA Statistical Modelling Linear Models Methods of Estimation Fitting Linear Models to Binomial Data Models for Binomial Response Data The Linear Logistic Model Fitting the Linear Logistic Model to Binomial Data Goodness of Fit of a Linear Logistic Model Comparing Linear Logistic Models Linear Trend in Proportions Comparing Stimulus-Response Relationships Non-Convergence and Overfitting Some other Goodness of Fit Statistics Strategy for Model Selection Predicting a Binary Response Probability BIOASSAY AND SOME OTHER APPLICATIONS The Tolerance Distribution Estimating an Effective Dose Relative Potency Natural Response Non-Linear Logistic Regression Models Applications of the Complementary Log-Log Model MODEL CHECKING Definition of Re...
Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video
Li, Honggui
2017-09-01
This paper proposes a unified one-dimensional (1-D) coding framework for image and video, built on a deep neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, a deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of video into the inputs of the patch clustering algorithm. Finally, simulation experiments show that the proposed methods simultaneously attain a higher compression ratio and peak signal-to-noise ratio than the state-of-the-art methods at low transmission bitrates.
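The lossless property of a linear autoencoder can be checked directly: when the code dimension matches the rank of the patch matrix, the optimal linear encoder/decoder pair is exact, whereas a narrower code is lossy. Here the pair is obtained in closed form via SVD rather than by training a deep network, an illustrative shortcut on synthetic low-rank "patches":

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 synthetic 64-dimensional patches with rank <= 16 by construction.
patches = rng.standard_normal((100, 16)) @ rng.standard_normal((16, 64))

U, s, Vt = np.linalg.svd(patches, full_matrices=False)
code_dim = 16
encode = Vt[:code_dim].T          # 64 x 16 linear encoder weights
decode = Vt[:code_dim]            # 16 x 64 linear decoder weights
codes = patches @ encode          # 1-D (vector) code per patch
recon = codes @ decode
err = np.linalg.norm(recon - patches)          # ~0: lossless linear code

# Shrinking the code below the patch rank reintroduces reconstruction error.
recon_lossy = (patches @ Vt[:8].T) @ Vt[:8]
err_lossy = np.linalg.norm(recon_lossy - patches)
```

A trained DLA can reach the same optimum because the composition of linear layers is itself linear; a nonlinear bottleneck generally cannot invert exactly, which is the contrast the abstract draws.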
Optimal codes as Tanner codes with cyclic component codes
DEFF Research Database (Denmark)
Høholdt, Tom; Pinero, Fernando; Zeng, Peng
2014-01-01
In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...
Lee, Dongyul; Lee, Chaewoo
2014-01-01
The advancement in wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.
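The MCS-to-layer assignment problem can be illustrated with a brute-force toy instance: three SVC layers, three MCSs, and an airtime budget. All rates, bit budgets and user fractions below are hypothetical, and exhaustive search stands in for the ILP solver:

```python
from itertools import product

layer_bits = [300, 400, 500]            # bits/frame for base, enh1, enh2 layers
mcs_rate = {0: 1.0, 1: 2.0, 2: 4.0}     # bits per symbol for each MCS
# Fraction of multicast users whose channel can decode each MCS.
users_decodable = {0: 1.0, 1: 0.7, 2: 0.4}
frame_symbols = 600                     # time-resource (airtime) budget

best = None
for assign in product(mcs_rate, repeat=3):      # one MCS per layer
    airtime = sum(layer_bits[l] / mcs_rate[m] for l, m in enumerate(assign))
    if airtime > frame_symbols:
        continue                                # violates the time budget
    # A user receives layer l only if it can decode every layer up to l.
    utility, reach = 0.0, 1.0
    for l, m in enumerate(assign):
        reach = min(reach, users_decodable[m])
        utility += reach * layer_bits[l]        # throughput-weighted coverage
    if best is None or utility > best[0]:
        best = (utility, assign)
```

The search returns the assignment maximizing throughput-weighted coverage within the airtime budget; an ILP expresses the same feasibility and objective with binary assignment variables so it scales past toy sizes.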
Horiuchi, Toshiyuki; Watanabe, Jun; Suzuki, Yuta; Iwasaki, Jun-ya
2017-05-01
Two-dimensional code marks are often used for production management. In particular, in production lines for liquid-crystal-display panels and other devices, data on fabrication processes such as production number and process conditions are written on each substrate or device in detail and used for quality management. For this reason, lithography systems specialized for code mark printing have been developed. However, conventional systems using lamp projection exposure or laser scan exposure are very expensive. Therefore, development of a low-cost exposure system using light emitting diodes (LEDs) and optical fibers with squared ends arrayed in a matrix is strongly desired. In past research, the feasibility of such a new exposure system was demonstrated using a handmade system equipped with 100 LEDs with a central wavelength of 405 nm, a 10×10 matrix of optical fibers with 1 mm square ends, and a 10X projection lens. Building on this progress, a new method for fabricating large-scale arrays of finer fibers with squared ends was developed in this paper. Up to 40 plastic optical fibers were arranged in a linear gap of an arraying instrument and simultaneously squared by heating them on a hotplate at 120 °C for 7 min. Fiber sizes were homogeneous within 496 ± 4 μm. In addition, the average light leak was improved from 34.4 to 21.3% by adopting the new method in place of the conventional one-by-one squaring method. Square matrix arrays required for printing code marks will be obtained by stacking the newly fabricated linear arrays.
Kodo: An Open and Research Oriented Network Coding Library
DEFF Research Database (Denmark)
Pedersen, Morten Videbæk; Heide, Janus; Fitzek, Frank
2011-01-01
We consider the problem of efficient decoding of a random linear code over a finite field. In particular we are interested in the case where the code is random, relatively sparse, and use the binary finite field as an example. The goal is to decode the data using fewer operations to potentially a...
Cosmological N-body simulations with a tree code - Fluctuations in the linear and nonlinear regimes
International Nuclear Information System (INIS)
Suginohara, Tatsushi; Suto, Yasushi; Bouchet, F.R.; Hernquist, L.
1991-01-01
The evolution of gravitational systems is studied numerically in a cosmological context using a hierarchical tree algorithm with fully periodic boundary conditions. The simulations employ 262,144 particles, which are initially distributed according to scale-free power spectra. The subsequent evolution is followed in both flat and open universes. With this large number of particles, the discretized system can accurately model the linear phase. It is shown that the dynamics in the nonlinear regime depends on both the spectral index n and the density parameter Ω. In Ω = 1 universes, the evolution of the two-point correlation function ξ agrees well with similarity solutions for ξ greater than about 100, but its slope is steeper in open models with the same n. 28 refs
International Nuclear Information System (INIS)
Tayal, M.
1987-01-01
Structures often operate at elevated temperatures. Temperature calculations are needed so that the design can accommodate thermally induced stresses and material changes. A finite element computer code called FEAT has been developed to calculate temperatures in solids of arbitrary shape. FEAT solves the classical equation for steady-state conduction of heat. The solution is obtained for two-dimensional (plane or axisymmetric) or three-dimensional problems. Gap elements are used to simulate interfaces between neighbouring surfaces. The code can model: conduction; internal generation of heat; prescribed convection to a heat sink; prescribed temperatures at boundaries; prescribed heat fluxes on some surfaces; and temperature dependence of material properties like thermal conductivity. The user has the option of specifying the detailed variation of thermal conductivity with temperature. For convenience to the nuclear fuel industry, the user can also opt for pre-coded values of thermal conductivity obtained from the MATPRO data base (sponsored by the U.S. Nuclear Regulatory Commission). The finite element method makes FEAT versatile and enables it to accommodate complex geometries accurately. The optional link to MATPRO makes FEAT convenient for the nuclear fuel industry to use, without loss of generality. Special numerical techniques make the code inexpensive to run for the types of material non-linearities often encountered in the analysis of nuclear fuel. The code, however, is general, and can be used for other components of the reactor, or even for non-nuclear systems. The predictions of FEAT have been compared against several analytical solutions; the agreement is usually better than 5%. Thermocouple measurements show that the FEAT predictions are consistent with measured changes in temperatures in simulated pressure tubes. FEAT was also found to predict well the axial variations in temperature in the end pellets (UO₂) of two fuel elements irradiated
A Chip-Level BSOR-Based Linear GSIC Multiuser Detector for Long-Code CDMA Systems
Directory of Open Access Journals (Sweden)
M. Benyoucef
2008-01-01
We introduce a chip-level linear group-wise successive interference cancellation (GSIC) multiuser structure that is asymptotically equivalent to block successive over-relaxation (BSOR) iteration, which is known to outperform the conventional block Gauss-Seidel iteration by an order of magnitude in terms of convergence speed. The main advantage of the proposed scheme is that it uses the spreading codes directly instead of the cross-correlation matrix, and thus does not require the calculation of the cross-correlation matrix (which requires 2NK² floating point operations (flops), where N is the processing gain and K is the number of users), significantly reducing the overall computational complexity. It is thus suitable for long-code CDMA systems such as IS-95 and UMTS, where the cross-correlation matrix changes every symbol. We study the convergence behavior of the proposed scheme using two approaches and prove that it converges to the decorrelator detector if the over-relaxation factor is in the interval ]0, 2[. Simulation results are in excellent agreement with theory.
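The chip-level successive over-relaxation idea can be sketched directly from the spreading codes, without ever forming the cross-correlation matrix. For brevity this sketch is component-wise rather than group-wise and noiseless, with made-up codes and symbols:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 32, 4                                   # processing gain, users
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # spreading codes
sent = np.array([1.0, -0.8, 1.2, -0.9])        # amplitude-scaled user symbols
y = S @ sent                                   # noiseless received chip vector

omega = 1.2                                    # over-relaxation factor in ]0, 2[
x = np.zeros(K)
for _ in range(100):
    for k in range(K):                         # successive, user by user
        # Residual correlation computed from the codes directly: no R matrix,
        # just despreading the current chip-level residual with user k's code.
        resid = S[:, k] @ (y - S @ x)
        x[k] += omega * resid / (S[:, k] @ S[:, k])

decorrelator = np.linalg.solve(S.T @ S, S.T @ y)   # reference detector
```

With omega in ]0, 2[ the sweep converges to the decorrelator output (here, the exact transmitted symbols, since there is no noise), which is the convergence result the paper proves.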
International Nuclear Information System (INIS)
Vadlamani, Srinath; Kruger, Scott; Austin, Travis
2008-01-01
Extended magnetohydrodynamic (MHD) codes are used to model the large, slow-growing instabilities that are projected to limit the performance of the International Thermonuclear Experimental Reactor (ITER). The multiscale nature of the extended MHD equations requires an implicit approach. The current linear solvers needed for the implicit algorithm scale poorly because the resultant matrices are so ill-conditioned. A new solver is needed, especially one that scales to the petascale. The most successful scalable parallel solvers to date are multigrid solvers, so applying multigrid techniques to a set of equations whose fundamental modes are dispersive waves is a promising approach to CEMM problems. For Phase 1, we implemented multigrid preconditioners from the HYPRE project of the Center for Applied Scientific Computing at LLNL, via PETSc of the DOE SciDAC TOPS, for the real matrix systems of the extended MHD code NIMROD, which is one of the primary modeling codes of the OFES-funded Center for Extended Magnetohydrodynamic Modeling (CEMM) SciDAC. We successfully applied the multigrid solvers to a fusion test problem that yields real matrix systems, and in the process learned about the details of NIMROD data structures and the difficulties of inverting NIMROD operators. The further success of this project will allow efficient usage of future petascale computers at the National Leadership Facilities: Oak Ridge National Laboratory, Argonne National Laboratory, and the National Energy Research Scientific Computing Center. The project will be a collaborative effort between computational plasma physicists and applied mathematicians at Tech-X Corporation, applied mathematicians at Front Range Scientific Computations, Inc. (who are collaborators on the HYPRE project), and other computational plasma physicists involved with the CEMM project.
International Nuclear Information System (INIS)
Eggleton, P.P.; Pringle, J.E.
1985-01-01
This volume contains 15 review articles in the field of binary stars. The subjects reviewed span a considerable range, from the shortest-period interacting binaries to the longest, the symbiotic stars. Also included are articles on Algols, X-ray binaries and Wolf-Rayet stars (single and binary). Contents: Preface. List of Participants. Activity of Contact Binary Systems. Wolf-Rayet Stars and Binarity. Symbiotic Stars. Massive X-ray Binaries. Stars that go Hump in the Night: The SU UMa Stars. Interacting Binaries - Summing Up
The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2
Poole, Eugene L.; Overman, Andrea L.
1988-01-01
Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly the vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code denoted the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers described achieve the highest computational rates of the methods compared. The new iterative methods are not able to achieve as high computation rates as the vectorized direct solvers but are best for well conditioned problems which require fewer iterations to converge to the solution.
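The two solver families compared in the paper can be miniaturized: a direct solve via Cholesky factorization and an iterative conjugate-gradient solve of the same symmetric positive definite system. The matrix below is a well-conditioned stand-in, not the Testbed's sparse structural matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((50, 50))
A = G @ G.T + 50.0 * np.eye(50)    # SPD and well conditioned
b = rng.standard_normal(50)

# Direct method: one Cholesky factorization, then two triangular solves.
L = np.linalg.cholesky(A)          # A = L @ L.T
x_direct = np.linalg.solve(L.T, np.linalg.solve(L, b))

# Iterative method: conjugate gradients; converges in few iterations
# when the system is well conditioned, as the paper observes.
x = np.zeros(50)
r = b - A @ x
p = r.copy()
for it in range(200):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10:
        break
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
```

The trade-off sketched here is the one measured on the Cray-2: the direct factorization vectorizes into dense high-rate kernels, while the iterative solver's cost depends on conditioning through the iteration count.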
Binary Stochastic Representations for Large Multi-class Classification
Gerald, Thomas
2017-10-23
Classification with a large number of classes is a key problem in machine learning and corresponds to many real-world applications, like tagging of images or textual documents in social networks. While one-vs-all methods usually reach top performance in this context, these approaches suffer from a high inference complexity, linear w.r.t. the number of categories. Different models based on the notion of binary codes have been proposed to overcome this limitation, achieving sublinear inference complexity. But they need to decide a priori, using more or less complex heuristics, which binary code to associate with which category before learning. We propose a new end-to-end model which aims at simultaneously learning to associate binary codes with categories and learning to map inputs to binary codes. This approach, called Deep Stochastic Neural Codes (DSNC), keeps the sublinear inference complexity but does not need any a priori tuning. Experimental results on different datasets show the effectiveness of the approach w.r.t. baseline methods.
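The binary-code classification idea can be sketched as follows. Random class codes and a random linear input-to-code map stand in for the codes and mapping that DSNC learns end-to-end:

```python
import numpy as np

rng = np.random.default_rng(3)
n_classes, code_bits, dim = 100, 16, 32
class_codes = rng.integers(0, 2, size=(n_classes, code_bits))   # one code per class
W = rng.standard_normal((dim, code_bits))    # stand-in for the learned input map

def predict(x):
    bits = (x @ W > 0).astype(int)               # map the input to a binary code
    ham = (class_codes != bits).sum(axis=1)      # Hamming distance to each class
    return int(ham.argmin())                     # nearest class code wins

# Build an input whose induced code equals class 7's code exactly.
target = 2 * class_codes[7] - 1                  # +/-1 version of the bits
x = np.linalg.pinv(W.T) @ target.astype(float)
label = predict(x)
```

The linear Hamming scan above is only for clarity; the sublinear inference the abstract refers to comes from indexing the code space (e.g. hash buckets on code prefixes) so that only a few candidate codes are compared per query.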
Introduction to generalized linear models
Dobson, Annette J
2008-01-01
Contents: Introduction (Background; Scope; Notation; Distributions Related to the Normal Distribution; Quadratic Forms; Estimation); Model Fitting (Introduction; Examples; Some Principles of Statistical Modeling; Notation and Coding for Explanatory Variables); Exponential Family and Generalized Linear Models (Introduction; Exponential Family of Distributions; Properties of Distributions in the Exponential Family; Generalized Linear Models; Examples); Estimation (Introduction; Example: Failure Times for Pressure Vessels; Maximum Likelihood Estimation; Poisson Regression Example); Inference (Introduction; Sampling Distribution for Score Statistics; Taylor Series Approximations; Sampling Distribution for MLEs; Log-Likelihood Ratio Statistic; Sampling Distribution for the Deviance; Hypothesis Testing); Normal Linear Models (Introduction; Basic Results; Multiple Linear Regression; Analysis of Variance; Analysis of Covariance; General Linear Models); Binary Variables and Logistic Regression (Probability Distributions ...
Li, Chuan; Han, Lei; Ma, Chun-Wai; Lai, Suk-King; Lai, Chun-Hong; Shum, Daisy Kwok Yan; Chan, Ying-Shing
2013-07-01
Using sinusoidal oscillations of linear acceleration along both the horizontal and vertical planes to stimulate otolith organs in the inner ear, we charted the postnatal time at which responsive neurons in the rat inferior olive (IO) first showed Fos expression, an indicator of neuronal recruitment into the otolith circuit. Neurons in the subnucleus dorsomedial cell column (DMCC) were activated by vertical stimulation as early as P9 and by horizontal (interaural) stimulation as early as P11. By P13, neurons in the β subnucleus of the IO (IOβ) became responsive to horizontal stimulation along the interaural and antero-posterior directions. By P21, neurons in the rostral IOβ also became responsive to vertical stimulation, but those in the caudal IOβ remained responsive only to horizontal stimulation. Nearly all functionally activated neurons in the DMCC and IOβ were immunopositive for the NR1 subunit of the NMDA receptor and the GluR2/3 subunit of the AMPA receptor. In situ hybridization studies further indicated abundant mRNA signals of the glutamate receptor subunits by the end of the second postnatal week. This is reinforced by whole-cell patch-clamp data in which glutamate receptor-mediated miniature excitatory postsynaptic currents of rostral IOβ neurons showed a postnatal increase in amplitude, reaching the adult level by P14. Further, these neurons exhibited subthreshold oscillations in membrane potential from P14 onward. Taken together, our results support the notion that ionotropic glutamate receptors in the IO enable postnatal coding of gravity-related information and that the rostral IOβ is the only IO subnucleus that encodes spatial orientation in 3-D.
DEFF Research Database (Denmark)
Tömösközi, Máté; Fitzek, Frank; Roetter, Daniel Enrique Lucani
2014-01-01
Video surveillance and similar real-time applications on wireless networks require increased reliability and high performance of the underlying transmission layer. Classical solutions, such as Reed-Solomon codes, increase the reliability, but typically have the negative side-effect of additional ...
Energy Technology Data Exchange (ETDEWEB)
Vallee, R L [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1968-07-01
The study of binary groups from a mathematical standpoint constitutes the subject of binary analysis, whose purpose is to develop methods that are at once simple, rigorous, and practical, intended for technicians, engineers, and all those directly concerned with digital information processing. This fast-expanding discipline already tends to play an essential, if not decisive, role in nuclear electronics as well as in several other research areas. (authors)
International Nuclear Information System (INIS)
Yokoyama, Kenji; Ishikawa, Makoto
2000-04-01
The linear heat rates of the power-to-melt (PTM) tests, performed with the B5D-1 and B5D-2 subassemblies in the Experimental Fast Reactor 'JOYO', are evaluated with the continuous-energy Monte Carlo code MVP. A whole-core model can be used with MVP, but the calculation takes a very long time. Therefore, judging from the structure of the B5D subassembly, we used the MVP code to calculate the radial distribution of the linear heat rate and a deterministic method to calculate the axial distribution, and we derived the formulas for this method. Furthermore, we evaluated the error of the linear heat rate by assessing the experimental error of the reactor power, the statistical error of the Monte Carlo method, the model error of the deterministic method, and so on. We also evaluated the burnup of the B5D assemblies and compared it with the values measured in the post-irradiation tests. The main results are as follows: B5D-1 (B5101, F613632, core center): linear heat rate 600 W/cm ± 2.2%, burnup 0.977. B5D-2 (B5214, G80124, core center): linear heat rate 641 W/cm ± 2.2%, burnup 0.886. (author)
International Nuclear Information System (INIS)
Anaf, J.; Chalhoub, E.S.
1988-12-01
The NJOY and LINEAR/RECENT/GROUPIE calculational procedures for the resolved and unresolved resonance contributions and background cross sections are evaluated. Elastic scattering, fission and capture multigroup cross sections generated by these codes and by the previously validated ETOG-3Q, ETOG-3, FLANGE-II and XLACS are compared. A constant weighting function and zero Kelvin temperature are considered. Discrepancies are presented and analysed. (author)
International Nuclear Information System (INIS)
Anaf, J.; Chalhoub, E.S.
1991-01-01
The NJOY and LINEAR/RECENT/GROUPIE calculational procedures for the resolved and unresolved resonance contributions and background cross sections are evaluated. Elastic scattering, fission and capture multigroup cross sections generated by these codes and the previously validated ETOG-3Q, ETOG-3, FLANGE-II and XLACS are compared. Constant weighting function and zero Kelvin temperature are considered. Discrepancies are presented and analyzed. (author)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the received sequence. This decoding has two drawbacks. First, minimizing the word error probability is not equivalent to minimizing the bit error probability, so MLD is suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes and multistage or iterative decoding schemes. In this chapter, we first present a decoding algorithm that both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
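A minimal sketch of the word-ML versus bitwise-MAP distinction (a toy [3,2] even-weight code over a binary symmetric channel, computed by brute force rather than the chapter's trellis-based algorithm): the bitwise-MAP output can differ from the ML codeword and need not itself be a codeword.

```python
code = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # [3,2] even-weight code
p = 0.1  # BSC crossover probability

def likelihood(c, r):
    d = sum(ci != ri for ci, ri in zip(c, r))  # Hamming distance
    return p ** d * (1 - p) ** (len(r) - d)

def ml_word(r):
    # Word-level ML: most likely codeword (ties broken by list order).
    return max(code, key=lambda c: likelihood(c, r))

def map_bits(r):
    # Bitwise MAP: marginalize the codeword posterior at each position.
    out = []
    for i in range(len(r)):
        p1 = sum(likelihood(c, r) for c in code if c[i] == 1)
        p0 = sum(likelihood(c, r) for c in code if c[i] == 0)
        out.append(1 if p1 > p0 else 0)
    return tuple(out)

r = (1, 0, 0)           # equidistant from three codewords
print(ml_word(r))       # (0, 0, 0) — one of the tied ML codewords
print(map_bits(r))      # (1, 0, 0) — minimizes bit error, not a codeword
```

This is exactly why MAP decoding minimizes bit error probability while MLD minimizes word error probability: the two criteria can select different outputs for the same received sequence.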
Wilkinson, Karl A; Hine, Nicholas D M; Skylaris, Chris-Kriton
2014-11-11
We present a hybrid MPI-OpenMP implementation of linear-scaling density functional theory within the ONETEP code. We illustrate its performance on a range of high performance computing (HPC) platforms comprising shared-memory nodes with fast interconnect. Our work has focused on applying OpenMP parallelism to the routines which dominate the computational load, attempting where possible to parallelize different loops from those already parallelized within MPI. This includes 3D FFT box operations, sparse matrix algebra operations, calculation of integrals, and Ewald summation. While the underlying numerical methods are unchanged, these developments represent significant changes to the algorithms used within ONETEP to distribute the workload across CPU cores. The new hybrid code exhibits much-improved strong scaling relative to the MPI-only code and permits calculations with a much higher ratio of cores to atoms. These developments result in a significantly shorter time to solution than was possible using MPI alone and facilitate the application of the ONETEP code to systems larger than previously feasible. We illustrate this with benchmark calculations on an amyloid fibril trimer containing 41,907 atoms. We use the code to study the mechanism of delamination of cellulose nanofibrils during sonication, a process controlled by a large number of interactions that collectively determine the structural properties of the fibrils. Many energy evaluations were needed for these simulations, and as these systems comprise up to 21,276 atoms this would not have been feasible without the developments described here.
Energy Technology Data Exchange (ETDEWEB)
Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario 'Virgen de las Nieves', Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain)], E-mail: mvilches@ugr.es; Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario 'Carlos Haya', Avda. Carlos Haya, s/n, E-29010 Malaga (Spain); Guerrero, R. [Servicio de Radiofisica, Hospital Universitario 'San Cecilio', Avda. Dr. Oloriz, 16, E-18012 Granada (Spain); Anguiano, M.; Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)
2007-09-21
When a therapeutic electron linear accelerator is simulated using a Monte Carlo (MC) code, the tuning of the initial spectra and the renormalization of dose (e.g., to maximum axial dose) constitute a common practice. As a result, very similar depth dose curves are obtained for different MC codes. However, if renormalization is turned off, the results obtained with the various codes disagree noticeably. The aim of this work is to investigate in detail the reasons of this disagreement. We have found that the observed differences are due to non-negligible differences in the angular scattering of the electron beam in very thin slabs of dense material (primary foil) and thick slabs of very low density material (air). To gain insight, the effects of the angular scattering models considered in various MC codes on the dose distribution in a water phantom are discussed using very simple geometrical configurations for the LINAC. The MC codes PENELOPE 2003, PENELOPE 2005, GEANT4, GEANT3, EGSnrc and MCNPX have been used.
International Nuclear Information System (INIS)
Vilches, M.; Garcia-Pareja, S.; Guerrero, R.; Anguiano, M.; Lallena, A.M.
2007-01-01
When a therapeutic electron linear accelerator is simulated using a Monte Carlo (MC) code, the tuning of the initial spectra and the renormalization of dose (e.g., to maximum axial dose) constitute a common practice. As a result, very similar depth dose curves are obtained for different MC codes. However, if renormalization is turned off, the results obtained with the various codes disagree noticeably. The aim of this work is to investigate in detail the reasons of this disagreement. We have found that the observed differences are due to non-negligible differences in the angular scattering of the electron beam in very thin slabs of dense material (primary foil) and thick slabs of very low density material (air). To gain insight, the effects of the angular scattering models considered in various MC codes on the dose distribution in a water phantom are discussed using very simple geometrical configurations for the LINAC. The MC codes PENELOPE 2003, PENELOPE 2005, GEANT4, GEANT3, EGSnrc and MCNPX have been used
Chang, Chau-Lyan
2003-01-01
During the past two decades, our understanding of laminar-turbulent transition flow physics has advanced significantly, owing in large part to NASA program support such as the National Aerospace Plane (NASP), High-Speed Civil Transport (HSCT), and Advanced Subsonic Technology (AST) programs. Experimental, theoretical, and computational efforts on issues such as receptivity and the linear and nonlinear evolution of instability waves have broadened our knowledge base for this intricate flow phenomenon. Despite these advances, transition prediction remains a nontrivial task for engineers due to the lack of a widely available, robust, and efficient prediction tool. The design and development of the LASTRAC code is aimed at providing one such engineering tool that is easy to use yet capable of dealing with a broad range of transition-related issues. LASTRAC was written from scratch based on state-of-the-art numerical methods for stability analysis and modern software technologies. At low fidelity, it allows users to perform linear stability analysis and N-factor transition correlation for a broad range of flow regimes and configurations using either linear stability theory (LST) or the linear parabolized stability equations (LPSE) method. At high fidelity, users may use nonlinear PSE to track finite-amplitude disturbances until the skin friction rise. Coupled with the built-in receptivity model currently under development, the nonlinear PSE method offers a synergistic approach to predicting transition onset for a given disturbance environment from first principles. This paper describes the governing equations, numerical methods, code development, and case studies for the current release of LASTRAC. Practical applications of LASTRAC are demonstrated for linear stability calculations, N-factor transition correlation, nonlinear breakdown simulations, and control of stationary crossflow instability in supersonic swept wing boundary
Shore, S N; van den Heuvel, EPJ
1994-01-01
This volume contains lecture notes presented at the 22nd Advanced Course of the Swiss Society for Astrophysics and Astronomy. The contributors deal with symbiotic stars, cataclysmic variables, massive binaries and X-ray binaries, in an attempt to provide a better understanding of stellar evolution.
Sze, Vivienne; Marpe, Detlev
2014-01-01
Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...
Generalized concatenated quantum codes
International Nuclear Information System (INIS)
Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei
2009-01-01
We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematical way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.
International Nuclear Information System (INIS)
Dattoli, G.; Schiavi, A.; Migliorati, M.
2006-03-01
The coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of this type of problem should be fast and reliable, conditions that are usually hard to achieve at the same time. In the past, codes based on Lie algebraic techniques have been very efficient at treating transport problems in accelerators. The extension of these methods to the nonlinear case is ideally suited to treat CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of the nonlinear contribution due to wake field effects. The proposed solution method exploits an algebraic technique using exponential operators. We show that the integration procedure is capable of reproducing the onset of an instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed.
International Nuclear Information System (INIS)
Dattoli, G.; Migliorati, M.; Schiavi, A.
2007-01-01
The coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these types of problems should be fast and reliable, conditions that are usually hard to achieve at the same time. In the past, codes based on Lie algebraic techniques have been very efficient at treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treat CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of the non-linear contribution due to wake field effects. The proposed solution method exploits an algebraic technique that uses exponential operators. We show that the integration procedure is capable of reproducing the onset of instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed.
Directory of Open Access Journals (Sweden)
M.A. Cremasco
2003-06-01
The simulated moving bed (SMB) is potentially an economical method for the separation and purification of natural products because it is a continuous process and can achieve higher productivity, higher product recovery, and higher purity than batch chromatographic processes. Despite the advantages of SMB, one of the challenges is to specify its zone flow rates and switching time. Here it is possible to use standing wave analysis: in a binary system, when certain concentration waves are confined to specific zones, high product purity and yield can be assured. Appropriate zone flow rates, zone lengths and step time are chosen to achieve standing waves. In this study the effects of selectivity on yield, throughput, solvent consumption, port switching time, and product purity for a binary system are analyzed. The results show that for a given selectivity the maximum throughput decreases with increasing yield, while solvent consumption and port switching time increase with increasing yield. To achieve the same purity and yield, a system with higher selectivity has a higher throughput and lower solvent consumption.
International Nuclear Information System (INIS)
Aspinall, J.
1982-01-01
A computational method was developed which alleviates the need for lengthy parametric scans as part of a design process. The method makes use of a least-squares algorithm to find the optimal value of a parameter vector, where "optimal" is defined in terms of a utility function prescribed by the user. The placement of the vertical field coils of a torsatron is such a nonlinear problem.
International Nuclear Information System (INIS)
Mar'yanov, B.M.; Zarubin, A.G.; Shumar, S.V.
2003-01-01
A method is proposed for the computer processing of differential potentiometric titration curves of binary mixtures of heterovalent ions using precipitation reactions. The method is based on transforming the titration curve into segment-line characteristics whose parameters (within the accuracy of the least-squares method) determine the sequence of equivalence points and the solubility products of the resulting precipitates. The method is applied to the titration of Ag(I)-Cd(II), Hg(II)-Te(IV), and Cd(II)-Te(IV) mixtures by a sodium diethyldithiocarbamate solution with membrane sulfide and glassy carbon indicator electrodes. For 4 to 11 mg of the analyte in 50 ml of solution, the RSD varies from 1 to 9%.
Noll, K. S.
2017-12-01
The Jupiter Trojans, in the context of giant planet migration models, can be thought of as an extension of the small body populations found beyond Neptune in the Kuiper Belt. Binaries are a distinctive feature of small body populations in the Kuiper Belt, with an especially high fraction apparent among the brightest Cold Classicals. The binary fraction, relative sizes, and separations in the dynamically excited populations (Scattered, Resonant) reflect processes that may have eroded a more abundant initial population. This trend continues in the Centaurs and Trojans, where few binaries have been found. We review new evidence, including a third resolved Trojan binary and lightcurve studies, to understand how the Trojans are related to the small body populations that originated in the outer protoplanetary disk.
Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding
DEFF Research Database (Denmark)
Hansen, Johan Peder
We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric quantum codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane (CSS) construction of stabilizer codes from linear codes containing their dual codes.
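The dual-containment condition the CSS construction relies on can be checked mechanically. A minimal sketch with a classical stand-in (the [7,4] Hamming code rather than a toric code): each row of the parity-check matrix H generates the dual code, so dual containment holds exactly when every row of H itself satisfies all parity checks.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code (G = [I | P], H = [P^T | I]).
H = np.array([[0, 1, 1, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

# Dual containment C_perp ⊆ C: every row of H (a generator of the dual code)
# must be a codeword of C, i.e. satisfy H c^T = 0 over GF(2).
for row in H:
    assert not (H @ row % 2).any()
print("dual-containing: CSS construction applies")
```

Equivalently, H @ H.T ≡ 0 (mod 2); the same check, applied to the generator matrices of the toric codes in the paper, certifies that the CSS construction yields a valid stabilizer code.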
Cho, Sun-Joo; Goodwin, Amanda P
2016-04-01
When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected from complex data structure, such as multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal design, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data having such complexity. Results from this application provide deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.
Methodology for bus layout for topological quantum error correcting codes
Energy Technology Data Exchange (ETDEWEB)
Wosnitzka, Martin; Pedrocchi, Fabio L.; DiVincenzo, David P. [RWTH Aachen University, JARA Institute for Quantum Information, Aachen (Germany)
2016-12-15
Most quantum computing architectures can be realized as two-dimensional lattices of qubits that interact with each other. We take transmon qubits and transmission line resonators as promising candidates for qubits and couplers, and use them as basic building elements of a quantum code. We then propose a simple framework to determine the optimal experimental layout to realize quantum codes. We show that this engineering optimization problem can be reduced to the solution of standard binary linear programs. While solving such programs is an NP-hard problem, we propose a way to find scalable optimal architectures that requires solving the linear program for only a restricted number of qubits and couplers. We apply our methods to two celebrated quantum codes, namely the surface code and the Fibonacci code. (orig.)
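A hedged sketch of what "reducing to a binary linear program" means in practice (an arbitrary toy instance, not the paper's actual layout constraints): minimize a linear cost over 0/1 variables subject to linear inequalities. Exhaustive search, as below, is only viable for the small restricted instances the authors exploit; realistic instances go to an ILP solver.

```python
from itertools import product

# Toy 0/1 linear program: minimize cost·x subject to A x >= b, x binary.
cost = [3, 2, 4, 1]
A = [[1, 1, 0, 0],
     [0, 1, 1, 1]]
b = [1, 2]

def feasible(x):
    return all(sum(a * xi for a, xi in zip(row, x)) >= bi
               for row, bi in zip(A, b))

# Brute force over all 2^n binary assignments.
best = min((x for x in product((0, 1), repeat=len(cost)) if feasible(x)),
           key=lambda x: sum(c * xi for c, xi in zip(cost, x)))
print(best)  # (0, 1, 0, 1), total cost 3
```

The NP-hardness mentioned in the abstract is visible here: the search space doubles with each additional binary variable, which is why restricting the program to few qubits and couplers matters.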
Directory of Open Access Journals (Sweden)
Atamewoue Surdive
2017-12-01
In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and characterize these codes by their generator matrices and parity-check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for coding theory than codes over classical finite fields.
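For readers unfamiliar with the classical baseline the paper generalizes from, here is a sketch of generator and parity-check matrices over GF(2) (the hyperfield versions in the paper replace field addition with a multivalued hyperoperation; the [6,3] code below is an arbitrary illustrative choice):

```python
import numpy as np
from itertools import product

# A [6,3] binary linear code in standard form: G = [I | P], H = [P^T | I].
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(3, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

# Over a classical field, every codeword c = mG satisfies every parity check.
for m in product((0, 1), repeat=3):
    c = np.array(m) @ G % 2
    assert not (H @ c % 2).any()
print("all 8 codewords satisfy H c^T = 0")
```

The identity behind the loop is H G^T = P^T + P^T ≡ 0 (mod 2), which holds for any standard-form pair G = [I | P], H = [P^T | I].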
International Nuclear Information System (INIS)
Le Thanh Xuan; Nguyen Thi Cam Thu; Tran Van Nghia; Truong Thi Hong Loan; Vo Thanh Nhon
2015-01-01
Dose distribution calculation is one of the major steps in radiotherapy. In this paper, the Monte Carlo code MCNP5 is applied to simulate 15 MV photon beams emitted from a linear accelerator for a lung cancer case at the General Hospital of Kien Giang. The settings for beam directions, field sizes and isocenter position used in MCNP5 must be the same as those in the treatment plan at the hospital to ensure that the results from MCNP5 are accurate. We also built a program, CODIM, using the MATLAB® programming software. This program was used to construct a patient model from lung CT images obtained from cancer treatment cases at the General Hospital of Kien Giang; the MCNP5 code was then used to simulate the dose delivered to the patient. The results from MCNP5 show a difference of 5% in comparison with the Prowess Panther program, a semi-empirical simulation program used for treatment planning at the General Hospital of Kien Giang. This work will help planners verify the patient dose distribution calculated by the treatment planning program used at the hospital. (author)
DEFF Research Database (Denmark)
Keiding, Hans; Peleg, Bezalel
2006-01-01
is binary if it is rationalized by an acyclic binary relation. The foregoing result motivates our definition of a binary effectivity rule as the effectivity rule of some binary SCR. A binary SCR is regular if it satisfies unanimity, monotonicity, and independence of infeasible alternatives. A binary...
Energy Technology Data Exchange (ETDEWEB)
Delbecq, J.M
1999-07-01
The Aster code is a 2D/3D finite-element calculation code for structures developed by the R&D division of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organization of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (material behaviour, large deformations, specific loads, unloading and loss-of-load-proportionality indicators, global algorithm, contact and friction); fracture mechanics (G energy release rate, energy release rate in thermo-elasto-plasticity, 3D local energy release rate, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and random dynamics, non-linear dynamics, dynamic sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and the metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)
Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.
Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang
2017-11-01
Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features into binary codes, the original Euclidean distance is approximated via the Hamming distance. More recently, it has been argued that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, directly preserving the manifold structure by hashing remains an open problem. In particular, one first needs to build the locally linear embedding in the original feature space and then quantize that embedding into binary codes. Such two-step coding is suboptimal, and the off-line learning is extremely time and memory consuming, since it requires computing the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locally linear embedding hashing (DLLH), which addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationships of the data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Faces, show the superior performance of the proposed DLLH over state-of-the-art approaches.
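A minimal sketch of the baseline idea the abstract starts from, approximating Euclidean neighborhoods with Hamming distance over binary codes (random-hyperplane hashing, a classical unlearned baseline; DLLH instead learns codes that preserve manifold structure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random-hyperplane hashing: the sign pattern of projections is the code.
n, dim, bits = 200, 16, 32
X = rng.normal(size=(n, dim))
planes = rng.normal(size=(dim, bits))
codes = (X @ planes > 0).astype(np.uint8)

def hamming(a, b):
    return int((a != b).sum())

# Nearest neighbor of point 0: Hamming space vs. original Euclidean space.
q = 0
ham_nn = min(range(1, n), key=lambda i: hamming(codes[q], codes[i]))
euc_nn = min(range(1, n), key=lambda i: np.linalg.norm(X[q] - X[i]))
print(ham_nn, euc_nn)
```

With enough bits the two nearest neighbors tend to coincide; the paper's point is that when the data lie on a manifold, the distance worth preserving is geodesic rather than Euclidean, which this baseline ignores.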
Eliciting Subjective Probabilities with Binary Lotteries
DEFF Research Database (Denmark)
Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd
objective probabilities. Drawing a sample from the same subject population, we find evidence that the binary lottery procedure induces linear utility in a subjective probability elicitation task using the Quadratic Scoring Rule. We also show that the binary lottery procedure can induce direct revelation...
International Nuclear Information System (INIS)
Corral B, J. R.
2015-01-01
Humans should avoid exposure to radiation because its consequences are harmful to health. Although there are many radiation sources, those generated by medical devices are of particular interest, since people who attend hospitals are exposed in one way or another to ionizing radiation. It is therefore important to study the radiation levels generated in hospitals as a result of the use of medical equipment. The exposure rate of a radioactive facility can be determined by different methods, including radiation detectors and computational methods; this thesis uses the computational method. With the MCNP5 program, the radiation exposure rate was determined in the radiotherapy room of the Cancer Center of the ABC Hospital in Mexico City. First, the shielding thicknesses were calculated using the following variables: 1) distance from the shield to the source; 2) desired weekly equivalent dose; 3) total weekly equivalent dose emitted by the equipment; 4) occupation and use factors. Once the thicknesses were obtained, the bunker was modeled with the program. The program uses the Monte Carlo code to determine probabilistically the interactions of the radiation with the shielding during X-ray emission from the linear accelerator. The results of the computational analysis were compared with those obtained experimentally with the detection method, which required a Geiger-Muller counter; the linear accelerator was programmed to 19 MV with 500 monitor units, with the detector positioned at the corresponding boundary. (Author)
EVOLUTION OF THE BINARY FRACTION IN DENSE STELLAR SYSTEMS
International Nuclear Information System (INIS)
Fregeau, John M.; Ivanova, Natalia; Rasio, Frederic A.
2009-01-01
Using our recently improved Monte Carlo evolution code, we study the evolution of the binary fraction in globular clusters. In agreement with previous N-body simulations, we find generally that the hard binary fraction in the core tends to increase with time over a range of initial cluster central densities for initial binary fractions ∼<90%. The dominant processes driving the evolution of the core binary fraction are mass segregation of binaries into the cluster core and preferential destruction of binaries there. On a global scale, these effects and the preferential tidal stripping of single stars tend to roughly balance, leading to overall cluster binary fractions that are roughly constant with time. Our findings suggest that the current hard binary fraction near the half-mass radius is a good indicator of the hard primordial binary fraction. However, the relationship between the true binary fraction and the fraction of main-sequence stars in binaries (which is typically what observers measure) is nonlinear and rather complicated. We also consider the importance of soft binaries, which not only modify the evolution of the binary fraction, but can also drastically change the evolution of the cluster as a whole. Finally, we briefly describe the recent addition of single and binary stellar evolution to our cluster evolution code.
Energy Technology Data Exchange (ETDEWEB)
Kirtman, Bernard [Department of Chemistry and Biochemistry, University of California, Santa Barbara, California 93106 (United States); Springborg, Michael [Physical and Theoretical Chemistry, University of Saarland, 66123 Saarbrücken (Germany); Rérat, Michel [Equipe de Chimie Physique, IPREM UMR5254, Université de Pau et des Pays de l'Adour, 64000 Pau (France); Ferrero, Mauro; Lacivita, Valentina; Dovesi, Roberto [Dipartimento di Chimica, IFM, Università di Torino and NIS - Nanostructure Interfaces and Surfaces - Centre of Excellence, Via P. Giuria 7, 10125 Torino (Italy); Orlando, Roberto [Dipartimento di Scienze e Tecnologie Avanzati, Università del Piemonte Orientale, Viale T. Michel 11, 15121 Alessandria (Italy)
2015-01-22
An implementation of the vector potential approach (VPA) for treating the response of infinite periodic systems to static and dynamic electric fields has been initiated within the CRYSTAL code. The VPA method is based on the solution of a time-dependent Hartree-Fock or Kohn-Sham equation for the crystal orbitals wherein the usual scalar potential, that describes interaction with the field, is replaced by the vector potential. This equation may be solved either by perturbation theory or by finite field methods. With some modification all the computational procedures of molecular ab initio quantum chemistry can be adapted for periodic systems. Accessible properties include the linear and nonlinear responses of both the nuclei and the electrons. The programming of static field pure electronic (hyper)polarizabilities has been successfully tested. Dynamic electronic (hyper)polarizabilities, as well as infrared and Raman intensities, are in progress while the addition of finite fields for calculation of vibrational (hyper)polarizabilities, through nuclear relaxation procedures, will begin shortly.
Sommariva, C.; Nardon, E.; Beyer, P.; Hoelzl, M.; Huijsmans, G. T. A.; van Vugt, D.; Contributors, JET
2018-01-01
In order to contribute to the understanding of runaway electron generation mechanisms during tokamak disruptions, a test particle tracker is introduced in the JOREK 3D non-linear MHD code, able to compute both full and guiding center relativistic orbits. Tests of the module show good conservation of the invariants of motion and consistency between full orbit and guiding center solutions. A first application is presented where test electron confinement properties are investigated in a massive gas injection-triggered disruption simulation in JET-like geometry. It is found that electron populations initialised before the thermal quench (TQ) are typically not fully deconfined in spite of the global stochasticity of the magnetic field during the TQ. The fraction of ‘survivors’ decreases from a few tens down to a few tenths of percent as the electron energy varies from 1 keV to 10 MeV. The underlying mechanism for electron ‘survival’ is the prompt reformation of closed magnetic surfaces at the plasma core and, to a smaller extent, the subsequent reappearance of a magnetic surface at the edge. It is also found that electrons are less deconfined at 10 MeV than at 1 MeV, which appears consistent with a phase averaging effect due to orbit shifts at high energy.
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
International Nuclear Information System (INIS)
Smith, L.M.; Hochstedler, R.D.
1997-01-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
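The first technique listed above, swapping a linear table scan for a binary search, is easy to illustrate. The sketch below is a generic stand-in (the grid and function names are hypothetical, not taken from the ITS source):

```python
import bisect

def linear_lookup(grid, x):
    """O(n) scan: index of the first grid point >= x."""
    for i, g in enumerate(grid):
        if g >= x:
            return i
    return len(grid)

def binary_lookup(grid, x):
    """O(log n) equivalent using the stdlib bisect module."""
    return bisect.bisect_left(grid, x)

# A sorted lookup table, e.g. a cumulative cross-section grid.
grid = [0.1 * i for i in range(1000)]
assert linear_lookup(grid, 42.05) == binary_lookup(grid, 42.05)
```

Because both lookups use the same `>=` comparison on a sorted table, they agree for every query while the binary version does logarithmically less work, which is the source of the speed-up factors quoted above.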
Binary translation using peephole translation rules
Bansal, Sorav; Aiken, Alex
2010-05-04
An efficient binary translator uses peephole translation rules to directly translate executable code from one instruction set to another. In a preferred embodiment, the translation rules are generated using superoptimization techniques that enable the translator to automatically learn translation rules for translating code from the source to target instruction set architecture.
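A toy sketch of rule-table-driven peephole translation (the instruction patterns and register names here are invented for illustration and are not taken from the patent):

```python
# Each rule maps a short sequence of source-ISA instructions to an
# equivalent target-ISA sequence; longer patterns are tried first.
RULES = {
    ("mov eax, 0",):         ["xor r0, r0"],   # zeroing idiom
    ("push eax", "pop ebx"): ["mov r1, r0"],   # push/pop pair -> register move
}

def translate(code, rules, max_len=2):
    out, i = [], 0
    while i < len(code):
        for n in range(max_len, 0, -1):        # longest match first
            pat = tuple(code[i:i + n])
            if pat in rules:
                out.extend(rules[pat])
                i += n
                break
        else:
            raise ValueError("no translation rule for " + code[i])
    return out

result = translate(["mov eax, 0", "push eax", "pop ebx"], RULES)
assert result == ["xor r0, r0", "mov r1, r0"]
```

In the patented scheme such rules are not hand-written but learned automatically via superoptimization; the table-lookup translation loop is the part sketched here.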
Cellular Automata Rules and Linear Numbers
Nayak, Birendra Kumar; Sahoo, Sudhakar; Biswal, Sagarika
2012-01-01
In this paper, linear Cellular Automata (CA) rules are recursively generated using a binary tree rooted at "0". Some mathematical results on linear as well as non-linear CA rules are derived. Integers associated with linear CA rules are defined as linear numbers, and the properties of these linear numbers are studied.
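For elementary CA, the linear rules are exactly those whose local update is an XOR of a subset of the three-cell neighbourhood. A short enumeration (my own illustration, not the paper's binary-tree construction) recovers their rule numbers:

```python
def rule_number(use_l, use_c, use_r):
    """Wolfram rule number of the linear rule new = (l if use_l) XOR (c if use_c) XOR (r if use_r)."""
    n = 0
    for idx in range(8):                 # neighbourhood (l, c, r) encoded in the bits of idx
        l, c, r = (idx >> 2) & 1, (idx >> 1) & 1, idx & 1
        new = (use_l & l) ^ (use_c & c) ^ (use_r & r)
        n |= new << idx
    return n

linear_rules = sorted(rule_number(a, b, c)
                      for a in (0, 1) for b in (0, 1) for c in (0, 1))
assert linear_rules == [0, 60, 90, 102, 150, 170, 204, 240]
```

For instance, rule 90 is `l XOR r` and rule 150 is `l XOR c XOR r`; these eight integers are examples of the "linear numbers" the paper studies.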
Essential idempotents and simplex codes
Directory of Open Access Journals (Sweden)
Gladys Chalom
2017-01-01
Full Text Available We define essential idempotents in group algebras and use them to prove that every minimal abelian non-cyclic code is a repetition code. We also use them to prove that every minimal abelian code is equivalent to a minimal cyclic code of the same length. Finally, we show that a binary cyclic code is simplex if and only if it has length of the form $n=2^k-1$ and is generated by an essential idempotent.
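The simplex property referred to in the last sentence can be checked directly: the binary simplex code of dimension k has length 2^k - 1 and every nonzero codeword has the same weight 2^(k-1). A small sketch using the standard generator-matrix construction (independent of the paper's idempotent machinery):

```python
from itertools import product

def simplex_codewords(k):
    """All codewords of the binary simplex code of dimension k."""
    # Generator matrix columns: all 2**k - 1 nonzero binary k-vectors.
    cols = [c for c in product((0, 1), repeat=k) if any(c)]
    words = []
    for m in product((0, 1), repeat=k):          # every message vector
        words.append(tuple(sum(m[i] * c[i] for i in range(k)) % 2 for c in cols))
    return words

k = 3
words = simplex_codewords(k)
assert len(words[0]) == 2**k - 1                         # length n = 7
assert all(sum(w) == 2**(k - 1) for w in words if any(w))  # constant weight 4
```

The constant-weight property is what makes simplex codes the binary analogue of a regular simplex: all pairs of distinct codewords lie at the same Hamming distance.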
DEFF Research Database (Denmark)
Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip
2012-01-01
This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
Indian Academy of Sciences (India)
Shannon limit of the channel. Among the earliest discovered codes that approach the Shannon limit were the low-density parity-check (LDPC) codes. The term "low density" arises from the property of the parity check matrix defining the code. We will now define this matrix and the role that it plays in decoding. 2. Linear Codes.
A Simple Scheme for Belief Propagation Decoding of BCH and RS Codes in Multimedia Transmissions
Directory of Open Access Journals (Sweden)
Marco Baldi
2008-01-01
Full Text Available Classic linear block codes, like Bose-Chaudhuri-Hocquenghem (BCH) and Reed-Solomon (RS) codes, are widely used in multimedia transmissions, but their soft-decision decoding still represents an open issue. Among the several approaches proposed for this purpose, an important role is played by the iterative belief propagation principle, whose application to low-density parity-check (LDPC) codes makes it possible to approach the channel capacity. In this paper, we elaborate a new technique for decoding classic binary and non-binary codes through the belief propagation algorithm. We focus on RS codes included in the recent CDMA2000 standard, and compare the proposed technique with the adaptive belief propagation approach, which is able to ensure very good performance but at higher complexity. Moreover, we consider the case of the long BCH codes included in the DVB-S2 standard, for which we show that the usage of "pure" LDPC codes would provide better performance.
Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation
Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie
2009-01-01
In this work, we study the performance of structured Low-Density Parity-Check (LDPC) Codes together with bandwidth efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher- order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We will compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
Soft decoding a self-dual (48, 24; 12) code
Solomon, G.
1993-01-01
A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code that promises a 7.7-dB theoretical coding gain under maximum likelihood.
Logistic chaotic maps for binary numbers generations
International Nuclear Information System (INIS)
Kanso, Ali; Smaoui, Nejib
2009-01-01
Two pseudorandom binary sequence generators, based on logistic chaotic maps intended for stream cipher applications, are proposed. The first is based on a single one-dimensional logistic map which exhibits random, noise-like properties at given certain parameter values, and the second is based on a combination of two logistic maps. The encryption step proposed in both algorithms consists of a simple bitwise XOR operation of the plaintext binary sequence with the keystream binary sequence to produce the ciphertext binary sequence. A threshold function is applied to convert the floating-point iterates into binary form. Experimental results show that the produced sequences possess high linear complexity and very good statistical properties. The systems are put forward for security evaluation by the cryptographic committees.
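A minimal sketch of the single-map variant of such a scheme (the parameter values and the transient-discard length are my own illustrative choices, not those of the paper):

```python
def keystream(x0, r, nbits, skip=100):
    """Iterate the logistic map x -> r*x*(1-x) and threshold each iterate to a bit."""
    x = x0
    for _ in range(skip):                   # discard transient iterates
        x = r * x * (1 - x)
    bits = []
    for _ in range(nbits):
        x = r * x * (1 - x)
        bits.append(1 if x >= 0.5 else 0)   # threshold function: float -> bit
    return bits

def xor_bits(a, b):
    """Bitwise XOR of two bit lists (encryption and decryption alike)."""
    return [x ^ y for x, y in zip(a, b)]

plain = [1, 0, 1, 1, 0, 0, 1, 0]
ks = keystream(x0=0.41, r=3.99, nbits=len(plain))
cipher = xor_bits(plain, ks)
assert xor_bits(cipher, ks) == plain        # XOR with the same keystream recovers the plaintext
```

Note that, as with any XOR stream cipher, security rests entirely on the unpredictability of the keystream; the assertions above only demonstrate the mechanics, not the statistical properties the paper evaluates.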
Energy Technology Data Exchange (ETDEWEB)
Ravishankar, C., Hughes Network Systems, Germantown, MD
1998-05-08
Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. The original analog methods of telephony had the disadvantage that the speech signal was corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely on the basis of a binary decision. Hence the end-to-end performance of a digital link is essentially independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
Bondi-Hoyle-Lyttleton Accretion onto Binaries
Antoni, Andrea; MacLeod, Morgan; Ramírez-Ruiz, Enrico
2018-01-01
Binary stars are not rare. While only close binary stars will eventually interact with one another, even the widest binary systems interact with their gaseous surroundings. The rates of accretion and the gaseous drag forces arising in these interactions are the key to understanding how these systems evolve. This poster examines accretion flows around a binary system moving supersonically through a background gas. We perform three-dimensional hydrodynamic simulations of Bondi-Hoyle-Lyttleton accretion using the adaptive mesh refinement code FLASH. We simulate a range of values of semi-major axis of the orbit relative to the gravitational focusing impact parameter of the pair. On large scales, gas is gravitationally focused by the center-of-mass of the binary, leading to dynamical friction drag and to the accretion of mass and momentum. On smaller scales, the orbital motion imprints itself on the gas. Notably, the magnitude and direction of the forces acting on the binary inherit this orbital dependence. The long-term evolution of the binary is determined by the timescales for accretion, slow down of the center-of-mass, and decay of the orbit. We use our simulations to measure these timescales and to establish a hierarchy between them. In general, our simulations indicate that binaries moving through gaseous media will slow down before the orbit decays.
Optimizing the ATLAS code with different profilers
Kama, S; The ATLAS collaboration
2013-01-01
After the current maintenance period, the LHC will provide higher energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, ATLAS code is composed of approximately 4M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort different profiling tools and techniques are being used. These include well known tools, such as the Valgrind suite and Intel Amplifier; less common tools like PIN, PAPI, and GOODA; as well as techniques such as library interposing. In this talk we will mainly focus on PIN tools and GOODA. PIN is a dynamic binary instrumentation tool which can obtain statistics such as call counts, instruction counts and interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations which has provided the insight necessary to achieve significant performance...
On entanglement-assisted quantum codes achieving the entanglement-assisted Griesmer bound
Li, Ruihu; Li, Xueliang; Guo, Luobin
2015-12-01
The theory of entanglement-assisted quantum error-correcting codes (EAQECCs) is a generalization of the standard stabilizer formalism. Any quaternary (or binary) linear code can be used to construct EAQECCs under the entanglement-assisted (EA) formalism. We derive an EA-Griesmer bound for linear EAQECCs, which is a quantum analog of the Griesmer bound for classical codes. This EA-Griesmer bound is tighter than known bounds for EAQECCs in the literature. For a given quaternary linear code {C}, we show that the parameters of the EAQECC EA-stabilized by the dual of {C} can be determined by a zero-radical quaternary code induced from {C}, and a necessary condition under which a linear EAQECC may achieve the EA-Griesmer bound is also presented. We construct four families of optimal EAQECCs and then show that the necessary condition for the existence of EAQECCs is also sufficient for some low-dimensional linear EAQECCs. The four families of optimal EAQECCs are degenerate codes and go beyond earlier constructions. What is more, except four codes, our [[n,k,d_{ea};c
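For reference, the classical Griesmer bound that the EA-Griesmer bound generalises states that a binary linear [n, k, d] code must satisfy n ≥ Σ_{i=0}^{k-1} ⌈d/2^i⌉. A one-line check of that classical binary case (not the EA version derived in the paper):

```python
import math

def griesmer_length(k, d):
    """Minimum length permitted by the classical binary Griesmer bound."""
    return sum(math.ceil(d / 2**i) for i in range(k))

# The [7, 4, 3] Hamming code meets the bound with equality: 3 + 2 + 1 + 1 = 7.
assert griesmer_length(4, 3) == 7
```

Codes meeting this sum with equality are called Griesmer codes; the paper's contribution is the analogous (and tighter) bound in the entanglement-assisted setting.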
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
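The flavour of these basic operations can be sketched in a few lines; the snippets below are plain-Python stand-ins for kernels such as AXPY and DOT (the library itself supplies optimised FORTRAN/Assembler routines):

```python
def axpy(alpha, x, y):
    """The classic AXPY kernel: return alpha*x + y elementwise."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def dot(x, y):
    """The DOT kernel: inner product of two vectors."""
    return sum(xi * yi for xi, yi in zip(x, y))

assert axpy(2.0, [1.0, 2.0], [3.0, 4.0]) == [5.0, 8.0]
assert dot([1.0, 2.0], [3.0, 4.0]) == 11.0
```

Standardising such tiny kernels is what made BLAS a portability layer: a program written against these operations runs efficiently wherever a tuned BLAS exists.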
Indian Academy of Sciences (India)
having a probability Pi of being equal to a 1. Let us assume ... equal to a 0/1 has no bearing on the probability of the. It is often ... bits (call this set S) whose individual bits add up to zero ... In the context of binary error-correcting codes, specifi-.
Utomo, P.H.; Makarim, R.H.
2017-01-01
A Binary puzzle is a Sudoku-like puzzle with values in each cell taken from the set {0,1}. Let n≥4 be an even integer; a solved binary puzzle is an n×n binary array that satisfies the following conditions: (1) no three consecutive ones and no three consecutive zeros in each row and each
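The quoted condition (1) is easy to check mechanically. The sketch below covers only that condition, since the abstract is truncated before the remaining constraints (balanced rows/columns etc.) are stated:

```python
def no_triples(line):
    """True if the line contains no run of three equal consecutive values."""
    return all(not (line[i] == line[i + 1] == line[i + 2])
               for i in range(len(line) - 2))

def satisfies_condition_1(grid):
    """Check condition (1) on every row and every column of the array."""
    cols = list(zip(*grid))
    return all(no_triples(r) for r in grid) and all(no_triples(c) for c in cols)

ok = [(0, 1, 1, 0),
      (1, 0, 0, 1),
      (0, 1, 0, 1),
      (1, 0, 1, 0)]
assert satisfies_condition_1(ok)
assert not satisfies_condition_1([(1, 1, 1, 0)] + ok[1:])   # row starts with a triple
```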
Eclipsing binaries in open clusters
DEFF Research Database (Denmark)
Southworth, John; Clausen, J.V.
2006-01-01
Stars: fundamental parameters - Stars: binaries: eclipsing - Stars: binaries: spectroscopic - Open clusters and ass.: general. Udgivelsesdato: 5 August
Binary Biometric Representation through Pairwise Adaptive Phase Quantization
Chen, C.; Veldhuis, Raymond N.J.
Extracting binary strings from real-valued biometric templates is a fundamental step in template compression and protection systems, such as fuzzy commitment, fuzzy extractor, secure sketch, and helper data systems. Quantization and coding is the straightforward way to extract binary representations
DSIbin : Identifying dynamic data structures in C/C++ binaries
Rupprecht, Thomas; Chen, Xi; White, David H.; Boockmann, Jan H.; Luttgen, Gerald; Bos, Herbert
2017-01-01
Reverse engineering binary code is notoriously difficult, and understanding a binary's dynamic data structures especially so. Existing data structure analyzers are limited with respect to program comprehension: they do not detect complex structures such as skip lists, or lists running through nodes of different
Huffman coding in advanced audio coding standard
Brzuchalski, Grzegorz
2012-05-01
This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and a working implementation. Much attention has been paid to optimising the demand on hardware resources, especially memory size. The aim of the design was to obtain as short a binary stream as possible in this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
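Generic Huffman code construction (an illustrative sketch of the underlying algorithm, not the AAC-specific codebooks the article optimises) can be written compactly:

```python
import heapq

def huffman_codes(freqs):
    """Map each symbol to a prefix-free bit string, shorter for frequent symbols."""
    # Heap entries carry a unique tie-break index so dicts are never compared.
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)          # two least-frequent subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + p for s, p in c1.items()}
        merged.update({s: "1" + p for s, p in c2.items()})
        heapq.heappush(heap, (w1 + w2, i, merged))
        i += 1
    return heap[0][2]

codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 30})
assert len(codes["a"]) < len(codes["c"])    # frequent symbol gets the shorter code
```

Shorter codes for frequent symbols are exactly what yields "as short a binary stream as possible"; the hardware problem the article addresses is storing and traversing such tables efficiently.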
Elder, D
1984-06-07
The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code: the combinatorial to coding the identity of units, the non-combinatorial to coding the production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.
Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao
2018-02-01
A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on a fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coding and modulation technology with a binary linear block code at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information through the different layers, which enhances performance. Simulations are carried out for an intensity-modulation direct-detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate (BER) of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER = 1E-3.
Compact binary hashing for music retrieval
Seo, Jin S.
2014-03-01
With the huge volume of music clips available for protection, browsing, and indexing, increased attention is being paid to retrieving the information content of music archives. Music-similarity computation is an essential building block for browsing, retrieval, and indexing of digital music archives. In practice, as the number of songs available for searching and indexing increases, the storage cost in retrieval systems becomes a serious problem. This paper deals with the storage problem by extending the supervector concept with binary hashing. We utilize similarity-preserving binary embedding in generating a hash code from the supervector of each music clip. In particular, we compare the performance of various binary hashing methods for music retrieval tasks on the widely used genre dataset and an in-house singer dataset. Through the evaluation, we find an effective way of generating hash codes for music similarity estimation which improves retrieval performance.
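One widely used similarity-preserving binary embedding is random-hyperplane hashing, shown below as an illustrative stand-in for the methods the paper compares (the feature vectors here are random, not real music supervectors):

```python
import random

def hash_code(vec, planes):
    """Binary code: one sign bit per random hyperplane projection."""
    return [1 if sum(p * v for p, v in zip(plane, vec)) >= 0 else 0
            for plane in planes]

def hamming(a, b):
    """Cheap code-space distance used at retrieval time."""
    return sum(x != y for x, y in zip(a, b))

random.seed(0)
dim, nbits = 8, 32
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(nbits)]

song = [random.gauss(0, 1) for _ in range(dim)]
similar = [2.0 * v for v in song]        # same direction: identical code
dissimilar = [-v for v in song]          # opposite direction: distant code

assert hash_code(similar, planes) == hash_code(song, planes)
assert hamming(hash_code(song, planes), hash_code(dissimilar, planes)) > nbits // 2
```

The point of such codes is that the Hamming distance between short binary strings approximates the (cosine) similarity of the original high-dimensional supervectors at a fraction of the storage cost.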
On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks
DEFF Research Database (Denmark)
Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk
2013-01-01
quality is developed. A linear coding structure designed to gracefully encapsulate layered source coding provides both low complexity of the utilised linear coding while enabling robust erasure correction in the form of fountain coding capabilities. The proposed linear coding structure advocates efficient...
Further Generalisations of Twisted Gabidulin Codes
DEFF Research Database (Denmark)
Puchinger, Sven; Rosenkilde, Johan Sebastian Heesemann; Sheekey, John
2017-01-01
We present a new family of maximum rank distance (MRD) codes. The new class contains codes that are neither equivalent to a generalised Gabidulin nor to a twisted Gabidulin code, the only two known general constructions of linear MRD codes.
Binary recursive partitioning: background, methods, and application to psychology.
Merkle, Edgar C; Shaffer, Victoria A
2011-02-01
Binary recursive partitioning (BRP) is a computationally intensive statistical method that can be used in situations where linear models are often used. Instead of imposing many assumptions to arrive at a tractable statistical model, BRP simply seeks to accurately predict a response variable based on values of predictor variables. The method outputs a decision tree depicting the predictor variables that were related to the response variable, along with the nature of the variables' relationships. No significance tests are involved, and the tree's 'goodness' is judged based on its predictive accuracy. In this paper, we describe BRP methods in a detailed manner and illustrate their use in psychological research. We also provide R code for carrying out the methods.
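The core BRP step, picking the binary split of a predictor that best predicts the response, can be sketched minimally (my own toy illustration in Python; the paper itself provides R code):

```python
def best_split(x, y):
    """Threshold on predictor x minimising misclassification of binary response y."""
    best = (None, len(y) + 1)
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        # Predict the majority class on each side; count the resulting errors.
        errs = (min(sum(left), len(left) - sum(left)) +
                min(sum(right), len(right) - sum(right)))
        if errs < best[1]:
            best = (t, errs)
    return best

x = [1, 2, 3, 10, 11, 12]
y = [0, 0, 0, 1, 1, 1]
assert best_split(x, y) == (3, 0)    # splitting at x <= 3 separates the classes perfectly
```

A full BRP tree applies this search recursively to each resulting subset until a stopping rule fires, then judges the tree purely by predictive accuracy, as described above.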
Binary palmprint representation for feature template protection
Mu, Meiru; Ruan, Qiuqi; Shao, X.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.
2012-01-01
The major challenge of biometric template protection comes from the intraclass variations of biometric data. The helper data scheme aims to solve this problem by employing the Error Correction Codes (ECC). However, many reported biometric binary features from the same user reach bit error rate (BER)
ALPS - A LINEAR PROGRAM SOLVER
Viterna, L. A.
1994-01-01
Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
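The pure binary (0-1 integer) programs mentioned above can be illustrated with a toy instance, solved here by exhaustive search (ALPS itself uses a more efficient solution technique; the objective and constraint below are invented for illustration):

```python
from itertools import product

def solve_01(costs, weights, cap):
    """Maximise costs.x subject to weights.x <= cap over x in {0,1}^n."""
    best_val, best_x = -1, None
    for x in product((0, 1), repeat=len(costs)):
        if sum(w * xi for w, xi in zip(weights, x)) <= cap:
            val = sum(c * xi for c, xi in zip(costs, x))
            if val > best_val:
                best_val, best_x = val, x
    return best_val, best_x

# maximise 3x1 + 5x2 + 4x3  subject to  2x1 + 3x2 + x3 <= 4
assert solve_01([3, 5, 4], [2, 3, 1], 4) == (9, (0, 1, 1))
```

Brute force is fine at this size (2^n candidates) but grows exponentially, which is why dedicated 0-1 solution techniques such as the one in ALPS matter.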
Minidisks in Binary Black Hole Accretion
Energy Technology Data Exchange (ETDEWEB)
Ryan, Geoffrey; MacFadyen, Andrew, E-mail: gsr257@nyu.edu [Center for Cosmology and Particle Physics, Physics Department, New York University, New York, NY 10003 (United States)
2017-02-01
Newtonian simulations have demonstrated that accretion onto binary black holes produces accretion disks around each black hole (“minidisks”), fed by gas streams flowing through the circumbinary cavity from the surrounding circumbinary disk. We study the dynamics and radiation of an individual black hole minidisk using 2D hydrodynamical simulations performed with a new general relativistic version of the moving-mesh code Disco. We introduce a comoving energy variable that enables highly accurate integration of these high Mach number flows. Tidally induced spiral shock waves are excited in the disk and propagate through the innermost stable circular orbit, providing a Reynolds stress that causes efficient accretion by purely hydrodynamic means and producing a radiative signature brighter in hard X-rays than the Novikov–Thorne model. Disk cooling is provided by a local blackbody prescription that allows the disk to evolve self-consistently to a temperature profile where hydrodynamic heating is balanced by radiative cooling. We find that the spiral shock structure is in agreement with the relativistic dispersion relation for tightly wound linear waves. We measure the shock-induced dissipation and find outward angular momentum transport corresponding to an effective alpha parameter of order 0.01. We perform ray-tracing image calculations from the simulations to produce theoretical minidisk spectra and viewing-angle-dependent images for comparison with observations.
Directory of Open Access Journals (Sweden)
Christopher M. Bentz
2014-03-01
Full Text Available We compare optical time-domain reflectometry (OTDR) techniques based on conventional single impulses, coding, and linear frequency chirps, with respect to their signal-to-noise ratio (SNR) enhancements, by measurements in a passive optical network (PON) with a maximum one-way attenuation of 36.6 dB. A total of six subscribers, each represented by a unique mirror pair with narrow reflection bandwidths, are installed within a distance of 14 m. The spatial resolution of the OTDR set-up is 3.0 m.
International Nuclear Information System (INIS)
Espinosa P, G.; Estrada P, C.E.; Nunez C, A.; Amador G, R.
2001-01-01
The computer code ANESLI-1, developed by the CNSNS and UAM-I, has the main goal of performing stability analysis of BWR-type nuclear reactors, more specifically the U1 and U2 reactors of the CNLV, although it can be used for other kinds of applications. Its real-time simulation capability allows the prediction of operational transients and of dynamic steady-state conditions. ANESLI-1 was developed under a modular scheme, which allows its scope to be extended or improved. The linear stability analysis predicts the instabilities produced by the density-wave phenomenon. (Author)
Binary Arithmetic From Hariot (ca. 1600 A.D.) to the Computer Age.
Glaser, Anton
This history of binary arithmetic begins with details of Thomas Hariot's contribution and includes specific references to Hariot's manuscripts kept at the British Museum. A binary code developed by Sir Francis Bacon is discussed. Briefly mentioned are contributions to binary arithmetic made by Leibniz, Fontenelle, Gauss, Euler, Bezout, Barlow,…
Rate-adaptive BCH codes for distributed source coding
DEFF Research Database (Denmark)
Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren
2013-01-01
This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...
Vectorial Resilient PC(l) of Order k Boolean Functions from AG-Codes
Institute of Scientific and Technical Information of China (English)
Hao CHEN; Liang MA; Jianhua LI
2011-01-01
Propagation criteria and resiliency of vectorial Boolean functions are important for cryptographic purposes (see [1-4, 7, 8, 10, 11, 16]). Kurosawa and Satoh [8] and Carlet [1] gave a construction of Boolean functions satisfying PC(l) of order k from binary linear or nonlinear codes. In this paper, algebraic-geometric codes over GF(2^m) are used to modify the Carlet and Kurosawa-Satoh construction to give vectorial resilient Boolean functions satisfying the PC(l) of order k criterion. This new construction is compared with previously known results.
Protograph LDPC Codes Over Burst Erasure Channels
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes, with an iterative decoding threshold that approaches the capacity of the binary erasure channel. The other class is designed for short block sizes, based on maximizing the minimum stopping set size. For high code rates and short blocks the second class outperforms the first.
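The erasure decoding these codes rely on can be illustrated with a peeling decoder: repeatedly find a parity check containing exactly one erased bit and solve for it. The sketch below uses the tiny [7,4] Hamming parity-check matrix as an illustrative stand-in for a protograph LDPC matrix (an assumption for brevity, not the paper's codes); it also shows how decoding stalls on a stopping set, the structure whose minimum size the short-block design maximizes.

```python
# Peeling decoder for a binary linear code on the binary erasure channel.
# H is the [7,4] Hamming parity-check matrix, an illustrative stand-in
# for a protograph LDPC matrix.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def peel(word):
    """word: list of 0, 1 or None (erasure); returns the decoded word,
    or None if decoding stalls on a stopping set."""
    word = list(word)
    progress = True
    while progress and None in word:
        progress = False
        for row in H:
            erased = [i for i, h in enumerate(row) if h and word[i] is None]
            if len(erased) == 1:            # a check with a single erasure
                i = erased[0]               # is solvable: the erased bit is
                word[i] = sum(word[j] for j, h in enumerate(row)
                              if h and j != i) % 2   # the XOR of the rest
                progress = True
    return word if None not in word else None

print(peel([0, None, 0, 0, None, 0, 0]))     # → [0, 0, 0, 0, 0, 0, 0]
print(peel([0, 0, None, 0, None, 0, None]))  # → None (stopping set)
```

The second call erases positions {2, 4, 6}; every check then contains at least two erasures, so no progress is possible, which is exactly why maximizing the minimum stopping set size matters at short block lengths.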
Directory of Open Access Journals (Sweden)
A.K. Bhunia
2013-04-01
This paper deals with a deterministic inventory model developed for deteriorating items held in two separate storage facilities (an owned and a rented warehouse), required due to the limited capacity of the existing (owned) storage, with linearly time-dependent (increasing) demand over a fixed finite time horizon. The model is formulated with infinite replenishment, and the successive replenishment cycle lengths are in arithmetic progression. Partially backlogged shortages are allowed. The stock of the rented warehouse (RW) is transported to the owned warehouse (OW) in a continuous release pattern. The model is formulated as a constrained non-linear mixed integer programming problem. For solving the problem, an advanced genetic algorithm (GA) has been developed, based on ranking selection, elitism, whole arithmetic crossover and non-uniform mutation dependent on the age of the population. Our objective is to determine the optimal replenishment number and the lot sizes of the two warehouses (OW and RW) by maximizing the profit function. The model is illustrated with four numerical examples, and sensitivity analyses of the optimal solution are performed with respect to different parameters.
International Nuclear Information System (INIS)
Sumner, H.M.
1969-03-01
The KDF9/EGDON program ZIP MK 2 is the third of a series of programs for off-line digital computer analysis of dynamic systems: it has been designed specifically to cater for the needs of the design or control engineer, having an input scheme which is minimally computer-oriented. It uses numerical algorithms which are as near fool-proof as the author could discover or devise, and has comprehensive diagnostic sections to help the user in the event of faulty data or machine execution. ZIP MK 2 accepts mathematical models comprising first-order linear differential and linear algebraic equations, and from these computes and factorises the transfer functions between specified pairs of output and input variables; if desired, the frequency response may be computed from the computed transfer function. The model input scheme is fully compatible with the frequency response programs FRP MK 1 and MK 2, except that, for ZIP MK 2, transport (time) delays must be converted by the user to Pade or Bode approximations prior to input. ZIP provides the pole-zero plot (complex plane analysis), while FRP provides the frequency response and FIFI the time domain analyses. The pole-zero method of analysis has been little used in the past for complex models, especially where transport delays occur, and one of the program's primary purposes is as a research tool to investigate the usefulness of this method for process plant, whether nuclear, chemical or other continuous processes. (author)
Binary Masking & Speech Intelligibility
DEFF Research Database (Denmark)
Boldt, Jesper
The purpose of this thesis is to examine how binary masking can be used to increase intelligibility in situations where hearing impaired listeners have difficulties understanding what is being said. The major part of the experiments carried out in this thesis can be categorized as either experiments under ideal conditions or as experiments under more realistic conditions useful for real-life applications such as hearing aids. In the experiments under ideal conditions, the previously defined ideal binary mask is evaluated using hearing impaired listeners, and a novel binary mask -- the target binary mask -- is introduced. The target binary mask shows the same substantial increase in intelligibility as the ideal binary mask and is proposed as a new reference for binary masking. In the category of real-life applications, two new methods are proposed: a method for estimation of the ideal binary...
Sahade, Jorge; Ter Haar, D
1978-01-01
Interacting Binary Stars deals with the development, ideas, and problems in the study of interacting binary stars. The book consolidates the information that is scattered over many publications and papers and gives an account of important discoveries with relevant historical background. Chapters are devoted to the presentation and discussion of the different facets of the field, such as historical account of the development in the field of study of binary stars; the Roche equipotential surfaces; methods and techniques in space astronomy; and enumeration of binary star systems that are studied
Walker, Judy L
2000-01-01
When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...
Smoothed analysis of binary search trees
Manthey, Bodo; Reischuk, Rüdiger
2007-01-01
Binary search trees are one of the most fundamental data structures. While the height of such a tree may be linear in the worst case, the average height with respect to the uniform distribution is only logarithmic. The exact value is one of the best studied problems in average-case complexity. We...
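The linear-versus-logarithmic height contrast mentioned in the abstract is easy to reproduce; the sketch below builds an unbalanced BST by plain insertion (a minimal illustration of the phenomenon, not the paper's smoothed-analysis model).

```python
import random

def bst_height(keys):
    """Insert keys in the given order into an unbalanced BST and
    return its height (edges on the longest root-to-leaf path)."""
    root, nodes, height = None, {}, 0     # nodes: key -> [left, right]
    for k in keys:
        if root is None:
            root, nodes[k] = k, [None, None]
            continue
        cur, depth = root, 1
        while True:
            child = 0 if k < cur else 1
            if nodes[cur][child] is None:
                nodes[cur][child] = k
                nodes[k] = [None, None]
                height = max(height, depth)
                break
            cur, depth = nodes[cur][child], depth + 1
    return height

print(bst_height(list(range(100))))   # → 99 (sorted input degenerates to a path)
random.seed(1)
keys = list(range(100))
random.shuffle(keys)
print(bst_height(keys))               # random order: logarithmic in practice
```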
Solving and interpreting binary classification problems in marketing with SVMs
J.C. Bioch (Cor); P.J.F. Groenen (Patrick); G.I. Nalbantov (Georgi)
2005-01-01
Marketing problems often involve binary classification of customers into ``buyers'' versus ``non-buyers'' or ``prefers brand A'' versus ``prefers brand B''. These cases require binary classification models such as logistic regression, linear, and quadratic discriminant analysis. A...
International Nuclear Information System (INIS)
Cardena R, A. R.; Vega R, J. L.; Apaza V, D. G.
2015-10-01
Progress in cancer treatment systems for heterogeneities of the human body has been hampered by the lack of a suitable experimental test model. The only option is to develop simulated theoretical models that have the same properties at interfaces similar to human tissues, in order to know the behavior of radiation in its interaction with these materials. In this paper we used the Monte Carlo method through the PENELOPE code, based solely on studies for cancer treatment as well as for the calibration of beams and their various interactions in phantoms. This paper also aims at the construction, simulation and characterization of an object equivalent to the tissues of the human body with various heterogeneities, which we will later use to experimentally control and plan the doses delivered when treating tumors in radiotherapy. To fulfill this objective we study ionizing radiation and the various processes occurring in its interaction with matter, understanding that to calculate the dose deposited at tissue interfaces (percentage depth dose), aspects such as the deposited energy, irradiation fields, density, thickness, tissue sensitivity and other items must be taken into consideration. (Author)
Asteroseismic effects in close binary stars
Springer, Ofer M.; Shaviv, Nir J.
2013-09-01
Turbulent processes in the convective envelopes of the Sun and stars have been shown to be a source of internal acoustic excitations. In single stars, acoustic waves having frequencies below a certain cut-off frequency propagate nearly adiabatically and are effectively trapped below the photosphere where they are internally reflected. This reflection essentially occurs where the local wavelength becomes comparable to the pressure scale height. In close binary stars, the sound speed is a constant on equipotentials, while the pressure scale height, which depends on the local effective gravity, varies on equipotentials and may be much greater near the inner Lagrangian point (L1). As a result, waves reaching the vicinity of L1 may propagate unimpeded into low-density regions, where they tend to dissipate quickly due to non-linear and radiative effects. We study the three-dimensional propagation and enhanced damping of such waves inside a set of close binary stellar models using a WKB approximation of the acoustic field. We find that these waves can have much higher damping rates in close binaries, compared to their non-binary counterparts. We also find that the relative distribution of acoustic energy density at the visible surface of close binaries develops a ring-like feature at specific acoustic frequencies and binary separations.
Minimizing embedding impact in steganography using trellis-coded quantization
Filler, Tomáš; Judas, Jan; Fridrich, Jessica
2010-01-01
In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome coding and trellis-coded quantization and contrast its performance with bounds derived from appropriate rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact of making an embedding change at that element (single-letter distortion). The problem is to embed a given payload with minimal possible average embedding impact. This task, which can be viewed as a generalization of matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past. Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners, we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.
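The matrix-embedding idea that the paper generalizes can be sketched with the classic [7,4] Hamming construction: 3 message bits are embedded into 7 cover bits by changing at most one bit, with the syndrome playing the role of the message. This is a minimal illustration of syndrome coding, not the paper's syndrome-trellis code with Viterbi quantization.

```python
# Matrix embedding with the [7,4] Hamming code: embed 3 message bits in
# 7 cover bits by flipping at most one bit (minimal-distortion sketch).
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(bits):
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def embed(cover, message):
    """Flip at most one cover bit so that syndrome(stego) == message."""
    diff = [m ^ s for m, s in zip(message, syndrome(cover))]
    stego = list(cover)
    if any(diff):
        # every nonzero 3-bit pattern appears as a column of H,
        # so diff identifies the single bit to flip
        col = next(i for i in range(7) if [row[i] for row in H] == diff)
        stego[col] ^= 1
    return stego

cover, message = [1, 0, 1, 1, 0, 0, 1], [1, 1, 0]
stego = embed(cover, message)
assert syndrome(stego) == message                      # receiver reads message
assert sum(c != s for c, s in zip(cover, stego)) <= 1  # at most one change
```

The trellis-coded construction in the paper plays the same game with convolutional codes, pushing the average distortion toward the rate-distortion bound instead of the one-change-per-block guarantee here.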
Energy Technology Data Exchange (ETDEWEB)
Toselli, G. [ENEA, Centro Ricerche Ezio Clementel, Bologna, (Italy). Dipt. Innovazione; Mirco, A. M. [Bologna Univ., Bologna (Italy). Dipt. di Matematica
1999-07-01
In this technical report the doctoral thesis in Mathematics of A. M. Mirco is reported; it was developed at the ENEA research centre 'E. Clementel' in Bologna (Italy) in the frame of a collaboration between the MACO section (Applied Physics Division, Innovation Department) of ENEA Bologna and the Department of Mathematics of the faculty of mathematical, physical and natural sciences of Bologna University. Substantially, the studies and research work developed in recent years at the MACO section are presented here; they have led to the development of a constitutive model, based on Hill's potential theory, for the treatment, in the plastic field, of the anisotropy of metal materials induced by previous workings, and to the construction of the corresponding FEM algorithm for the non-linear structural analysis code NOSA, oriented in particular to the numerical simulation of metal forming. Subsequently, an extension of the algorithm (the proper object of the thesis) is presented, which has yielded, beyond a more rigorous formalization, significant improvements.
Mining frequent binary expressions
Calders, T.; Paredaens, J.; Kambayashi, Y.; Mohania, M.K.; Tjoa, A.M.
2000-01-01
In data mining, searching for frequent patterns is a common basic operation. It forms the basis of many interesting decision support processes. In this paper we present a new type of patterns, binary expressions. Based on the properties of a specified binary test, such as reflexivity, transitivity
DEFF Research Database (Denmark)
Pedersen, Morten Videbæk; Heide, Janus; Vingelmann, Peter
2013-01-01
Creating efficient finite field implementations has been an active research topic for several decades. Many applications in areas such as cryptography, signal processing, erasure coding and now also network coding depend on this research to deliver satisfactory performance. In this paper we... from a benchmark application written in C++. These results are finally compared to different binary and binary extension field implementations. The results show that the prime field implementation offers a large field size while maintaining a very good performance. We believe that using prime fields...
DEFF Research Database (Denmark)
Nielsen, Rasmus Refslund
2002-01-01
This paper describes an efficient decoding method for a recent construction of good linear codes, as well as an extension to the construction. Furthermore, asymptotic properties and list decoding of the codes are discussed.
Energy Technology Data Exchange (ETDEWEB)
Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)
1966-09-01
This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient behavior; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flowrate in parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure-drop or flowrate conditions, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement FLID, a one-channel, two-dimensional code. (authors)
COSMIC probes into compact binary formation and evolution
Breivik, Katelyn
2018-01-01
The population of compact binaries in the galaxy represents the final state of all binaries that have lived up to the present epoch. Compact binaries present a unique opportunity to probe binary evolution, since many of the interactions binaries experience can be imprinted on the compact binary population. By combining binary evolution simulations with catalogs of observable compact binary systems, we can distill the dominant physical processes that govern binary star evolution, as well as predict the abundance and variety of their end products. The next decades herald a previously unseen opportunity to study compact binaries. Multi-messenger observations from telescopes across all wavelengths and gravitational-wave observatories spanning several decades of frequency will give an unprecedented view into the structure of these systems and the composition of their components. Observations will not always be coincident and in some cases may be separated by several years, providing an avenue for simulations to better constrain binary evolution models in preparation for future observations. I will present the results of three population synthesis studies of compact binary populations carried out with the Compact Object Synthesis and Monte Carlo Investigation Code (COSMIC). I will first show how binary-black-hole formation channels can be understood with LISA observations. I will then show how the population of double white dwarfs observed with LISA and Gaia could provide a detailed view of mass transfer and accretion. Finally, I will show that Gaia could discover thousands of black holes in the Milky Way through astrometric observations, yielding a view into black-hole astrophysics that is complementary to and independent from both X-ray and gravitational-wave astronomy.
EXIT Chart Analysis of Binary Message-Passing Decoders
DEFF Research Database (Denmark)
Lechner, Gottfried; Pedersen, Troels; Kramer, Gerhard
2007-01-01
Binary message-passing decoders for LDPC codes are analyzed using EXIT charts. For the analysis, the variable node decoder performs all computations in the L-value domain. For the special case of a hard decision channel, this leads to the well-known Gallager B algorithm, while the analysis can be extended to channels with larger output alphabets. By increasing the output alphabet from hard decisions to four symbols, a gain of more than 1.0 dB is achieved using optimized codes. For this code optimization, the mixing property of EXIT functions has to be modified to the case of binary message-passing decoders.
Discriminative Elastic-Net Regularized Linear Regression.
Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen
2017-03-01
In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations to make the final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods are available at http://www.yongxu.org/lunwen.html.
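A minimal numpy sketch of elastic-net regularized regression, solved by proximal gradient descent (soft-thresholding), conveys the penalty the ENLR framework builds on. This is not the authors' MATLAB code; the parameterization and step-size rule below are simple assumptions for illustration.

```python
import numpy as np

def elastic_net(X, y, alpha=0.1, l1_ratio=0.5, steps=1000):
    """Minimize (1/2n)||Xw - y||^2 + alpha * (l1_ratio * ||w||_1
    + 0.5 * (1 - l1_ratio) * ||w||_2^2) by proximal gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    # step size from the Lipschitz constant of the smooth part (assumed rule)
    lr = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + alpha * (1 - l1_ratio))
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n + alpha * (1 - l1_ratio) * w
        w = w - lr * grad
        t = lr * alpha * l1_ratio       # soft-threshold: prox of the l1 term
        w = np.sign(w) * np.maximum(np.abs(w) - t, 0.0)
    return w

# recover a sparse weight vector from synthetic data (illustrative)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
w = elastic_net(X, X @ np.array([2.0, 0.0, 0.0, -3.0, 0.0]), alpha=0.01)
assert abs(w[0] - 2) < 0.3 and abs(w[3] + 3) < 0.3
```

The l1 part of the penalty drives small coefficients exactly to zero while the l2 part keeps the solution stable, which is the compactness-plus-robustness trade-off the abstract describes.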
Algebraic and stochastic coding theory
Kythe, Dave K
2012-01-01
Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.
Hiding a Covert Digital Image by Assembling the RSA Encryption Method and the Binary Encoding Method
Directory of Open Access Journals (Sweden)
Kuang Tsan Lin
2014-01-01
The Rivest-Shamir-Adleman (RSA) encryption method and the binary encoding method are assembled to form a hybrid hiding method that hides a covert digital image in a dot-matrix holographic image. First, the RSA encryption method is used to transform the covert image into an RSA encryption data string. Then, all the elements of the RSA encryption data string are transferred into binary data. Finally, the binary data are encoded into the dot-matrix holographic image. The pixels of the dot-matrix holographic image contain seven groups of codes used for reconstructing the covert image: identification codes, covert-image dimension codes, covert-image graylevel codes, pre-RSA bit number codes, RSA key codes, post-RSA bit number codes, and information codes. The reconstructed covert image derived from the dot-matrix holographic image and the original covert image are exactly the same.
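The first two stages described above, RSA encryption followed by binary encoding, can be sketched as follows. The tiny textbook key (p = 61, q = 53) is purely illustrative and insecure, the 12-bit chunk width is an assumption tied to that key, and the dot-matrix holographic encoding stage of the paper is not reproduced.

```python
# Stage 1: RSA-encrypt pixel values; stage 2: serialize as a bit string.
p, q = 61, 53
n, e = p * q, 17                       # toy public key (n = 3233)
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (Python 3.8+)

def encrypt_to_bits(pixels, width=12):  # 12 bits suffice since n < 2**12
    return "".join(format(pow(px, e, n), f"0{width}b") for px in pixels)

def decrypt_from_bits(bits, width=12):
    chunks = [bits[i:i + width] for i in range(0, len(bits), width)]
    return [pow(int(c, 2), d, n) for c in chunks]

pixels = [65, 0, 255]                   # e.g. 8-bit gray levels
bits = encrypt_to_bits(pixels)
assert set(bits) <= {"0", "1"} and len(bits) == 36
assert decrypt_from_bits(bits) == pixels
```

In the paper, the resulting bit string is what gets distributed over the information-code pixels of the holographic image, alongside the header groups listed above.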
International Nuclear Information System (INIS)
Larsson-Leander, G.
1979-01-01
Studies of close binary stars are being pursued more vigorously than ever, with about 3000 research papers and notes pertaining to the field published during the triennium 1976-1978. Many major advances and spectacular discoveries were made, mostly due to increased observational efficiency and precision, especially in the X-ray, radio, and ultraviolet domains. Progress reports are presented in the following areas: observational techniques, methods of analyzing light curves, observational data, physical data, structure and models of close binaries, statistical investigations, and origin and evolution of close binaries. Reports from the Coordinates Programs Committee, the Committee for Extra-Terrestrial Observations and the Working Group on RS CVn binaries are included. (Auth./C.F.)
Rotation invariant deep binary hashing for fast image retrieval
Dai, Lai; Liu, Jianming; Jiang, Aiwen
2017-07-01
In this paper, we study how to compactly represent an image's characteristics for fast image retrieval. We propose supervised rotation-invariant compact discriminative binary descriptors obtained by combining a convolutional neural network with hashing. In the proposed network, binary codes are learned by employing a hidden layer to represent latent concepts that dominate the class labels. A loss function is proposed to minimize the difference between the binary descriptors of a reference image and of its rotated version. Compared with some other supervised methods, the proposed network does not require pair-wise inputs for binary code learning. Experimental results show that our method is effective and achieves state-of-the-art results on the CIFAR-10 and MNIST datasets.
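The retrieval step that motivates such binary descriptors reduces to Hamming-distance ranking, sketched below with XOR and popcount on integer codes. The 8-bit codes are illustrative values, not descriptors learned by the proposed network.

```python
# Fast retrieval with binary codes: rank a database by Hamming distance
# to the query, computed with XOR and popcount on plain integers.
def hamming(a, b):
    return bin(a ^ b).count("1")

def rank(query, database):
    """Return database indices sorted by Hamming distance to the query."""
    return sorted(range(len(database)),
                  key=lambda i: hamming(query, database[i]))

codes = [0b10110010, 0b10110011, 0b01001101, 0b11110000]
print(rank(0b10110010, codes))  # → [0, 1, 3, 2]
```

This is why compact binary codes pay off: distance evaluation is a couple of machine instructions per candidate, so exhaustive ranking stays cheap even for large databases.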
International Nuclear Information System (INIS)
Petrov, D.A.
1986-01-01
Conditions for thermodynamic equilibrium in binary and ternary systems are considered. The main types of binary and ternary phase diagrams are sequentially constructed on the basis of general regularities in the character of the transition from one equilibrium to another. New statements on the direction of equilibrium lines at the triple points of the diagrams and in their isothermal cross sections are developed. New representations of the equilibria in the case of a minimum or maximum on a monovariant curve and of three-phase equilibrium formation in ternary systems are introduced.
Thebault, Ph.; Haghighipour, N.
2014-01-01
Spurred by the discovery of numerous exoplanets in multiple systems, binaries have become in recent years one of the main topics in planet-formation research. Numerous studies have investigated to what extent the presence of a stellar companion can affect the planet formation process. Such studies have implications that reach beyond the sole context of binaries, as they make it possible to test certain aspects of the planet formation scenario by submitting them to extreme environments. We review here...
Sparse Representation Based Binary Hypothesis Model for Hyperspectral Image Classification
Directory of Open Access Journals (Sweden)
Yidong Tang
2016-01-01
The sparse representation based classifier (SRC) and its kernel version (KSRC) have been employed for hyperspectral image (HSI) classification. However, the state-of-the-art SRC often aims at extended surface objects with linear mixture in smooth scenes and assumes that the number of classes is given. Considering small targets with complex backgrounds, a sparse representation based binary hypothesis (SRBBH) model is established in this paper. In this model, a query pixel is represented in two ways: by the background dictionary and by the union dictionary. The background dictionary is composed of samples selected from the local dual concentric window centered at the query pixel. Thus, for each pixel the classification issue becomes an adaptive multiclass classification problem, where only the number of desired classes is required. Furthermore, the kernel method is employed to improve the interclass separability. In kernel space, the coding vector is obtained by using the kernel-based orthogonal matching pursuit (KOMP) algorithm. Then the query pixel can be labeled by the characteristics of the coding vectors. Instead of directly using the reconstruction residuals, the different impacts the background dictionary and the union dictionary have on reconstruction are used for validation and classification. This enhances the discrimination and hence improves the performance.
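The greedy atom-selection loop underlying KOMP can be illustrated with plain orthogonal matching pursuit; the kernel variant used in the paper replaces the inner products with kernel evaluations but keeps the same structure. The identity dictionary and the signal below are illustrative stand-ins, not hyperspectral data.

```python
import numpy as np

def omp(D, x, k):
    """Greedily select k columns (atoms) of dictionary D to represent x."""
    residual, support = x.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        support.append(j)
        # least-squares fit on the chosen atoms, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    return support, coef

# illustrative: a 2-sparse signal over the identity dictionary
support, coef = omp(np.eye(4), np.array([3.0, 0.0, 2.0, 0.0]), 2)
assert support == [0, 2]
assert np.allclose(coef, [3.0, 2.0])
```

In the SRBBH setting, the same loop is run twice per pixel, once over the background dictionary and once over the union dictionary, and the gap between the two fits drives the decision.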
International Nuclear Information System (INIS)
Delbecq, J.M.
1999-01-01
The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)
Contamination of RR Lyrae stars from Binary Evolution Pulsators
Karczmarek, Paulina; Pietrzyński, Grzegorz; Belczyński, Krzysztof; Stępień, Kazimierz; Wiktorowicz, Grzegorz; Iłkiewicz, Krystian
2016-06-01
Binary Evolution Pulsator (BEP) is an extremely low-mass member of a binary system, which pulsates as a result of a former mass transfer to its companion. BEP mimics RR Lyrae-type pulsations but has different internal structure and evolution history. We present possible evolution channels to produce BEPs, and evaluate the contamination value, i.e. how many objects classified as RR Lyrae stars can be undetected BEPs. In this analysis we use population synthesis code StarTrack.
Pablos Martin, X; Deltenre, P; Hoonhorst, I; Markessis, E; Rossion, B; Colin, C
2007-12-01
Rhythm perception appears to be non-linear, as human subjects are better at discriminating, categorizing and reproducing rhythms containing binary vs non-binary (e.g. 1:2 vs 1:3) as well as metrical vs non-metrical (e.g. 1:2 vs 1:2.5) interval ratios. This study examined the representation of binary and non-binary interval ratios within the sensory memory, thus yielding a truly sensory, pre-motor, attention-independent neural representation of rhythmical intervals. Five interval ratios, one binary, flanked by four non-binary ones, were compared on the basis of the MMN they evoked when contrasted against a common standard interval. For all five intervals, the larger the contrast was, the larger the MMN amplitude was. The binary interval evoked a significantly shorter (by at least 23 ms) MMN latency than the other intervals, whereas no latency difference was observed between the four non-binary intervals. These results show that the privileged perceptual status of binary rhythmical intervals is already present in the sensory representations found in echoic memory at an early, automatic, pre-perceptual and pre-motor level. MMN latency can be used to study rhythm perception at a truly sensory level, without any contribution from the motor system.
Orbital motion in pre-main sequence binaries
Energy Technology Data Exchange (ETDEWEB)
Schaefer, G. H. [The CHARA Array of Georgia State University, Mount Wilson Observatory, Mount Wilson, CA 91023 (United States); Prato, L. [Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001 (United States); Simon, M. [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794 (United States); Patience, J., E-mail: schaefer@chara-array.org [Astrophysics Group, School of Physics, University of Exeter, Exeter, EX4 4QL (United Kingdom)
2014-06-01
We present results from our ongoing program to map the visual orbits of pre-main sequence (PMS) binaries in the Taurus star forming region using adaptive optics imaging at the Keck Observatory. We combine our results with measurements reported in the literature to analyze the orbital motion for each binary. We present preliminary orbits for DF Tau, T Tau S, ZZ Tau, and the Pleiades binary HBC 351. Seven additional binaries show curvature in their relative motion. Currently, we can place lower limits on the orbital periods for these systems; full solutions will be possible with more orbital coverage. Five other binaries show motion that is indistinguishable from linear motion. We suspect that these systems are bound and might show curvature with additional measurements in the future. The observations reported herein lay critical groundwork toward the goal of measuring precise masses for low-mass PMS stars.
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Moruz, Gabriel
2006-01-01
It is well-known that to minimize the number of comparisons a binary search tree should be perfectly balanced. Previous work has shown that a dominating factor in the running time of a search is the number of cache faults performed, and that an appropriate memory layout of a binary search tree can reduce the number of cache faults by several hundred percent. Motivated by the fact that during a search branching to the left or right at a node does not necessarily have the same cost, e.g. because of branch prediction schemes, we study in this paper the class of skewed binary search trees. For all nodes in a skewed binary search tree the ratio between the size of the left subtree and the size of the tree is a fixed constant (a ratio of 1/2 gives perfectly balanced trees). In this paper we present an experimental study of various memory layouts of static skewed binary search trees, where each...
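The fixed left-subtree ratio described in the abstract can be sketched as follows; this is an illustrative assumption-based toy (the paper's memory layouts and cost model are not reproduced), with alpha = 1/2 recovering a perfectly balanced tree:

```python
# Static skewed binary search tree: a fraction `alpha` of each subtree's
# nodes goes to the left child (alpha = 1/2 gives a perfectly balanced tree).

def build_skewed(sorted_keys, alpha=0.3):
    """Recursively build (root, left, right) tuples over sorted keys."""
    if not sorted_keys:
        return None
    i = int(alpha * (len(sorted_keys) - 1))  # root index; left gets ~alpha share
    return (sorted_keys[i],
            build_skewed(sorted_keys[:i], alpha),
            build_skewed(sorted_keys[i + 1:], alpha))

def search(tree, key):
    """Standard iterative BST search."""
    while tree is not None:
        root, left, right = tree
        if key == root:
            return True
        tree = left if key < root else right
    return False

keys = list(range(100))
t = build_skewed(keys, alpha=0.3)
print(all(search(t, k) for k in keys))   # expect True
print(search(t, 1000))                   # expect False
```

With alpha < 1/2 a search that mostly branches right traverses a shorter path, which is the trade-off the paper studies against branch-prediction and cache costs.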
Error-Detecting Identification Codes for Algebra Students.
Sutherland, David C.
1990-01-01
Discusses common error-detecting identification codes using linear algebra terminology to provide an interesting application of algebra. Presents examples from the International Standard Book Number, the Universal Product Code, bank identification numbers, and the ZIP code bar code. (YP)
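In the spirit of the codes the article discusses, here is a worked ISBN-10 example (the sample ISBN 0-306-40615-2 is a standard textbook illustration, not taken from the article): the check digit makes the weighted digit sum divisible by 11, so any single-digit error is detected.

```python
# ISBN-10 check digit: d10 = (sum of i*d_i for i = 1..9) mod 11,
# written as 'X' when the value is 10. Then sum of i*d_i over all ten
# digits is divisible by 11, which detects any single-digit error.

def isbn10_check_digit(first9):
    """Check digit for the first nine ISBN-10 digits."""
    c = sum((i + 1) * d for i, d in enumerate(first9)) % 11
    return "X" if c == 10 else str(c)

def isbn10_detects(first9, pos, wrong):
    """A single-digit error changes the check digit, so it is detected."""
    corrupted = first9[:pos] + [wrong] + first9[pos + 1:]
    return isbn10_check_digit(corrupted) != isbn10_check_digit(first9)

body = [0, 3, 0, 6, 4, 0, 6, 1, 5]     # ISBN 0-306-40615-?
print(isbn10_check_digit(body))        # expect '2'
print(isbn10_detects(body, 4, 7))      # expect True
```

Detection works because the weights 1..9 are all invertible mod 11, so changing one digit changes the weighted sum by a nonzero amount mod 11.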
Context quantization by minimum adaptive code length
DEFF Research Database (Denmark)
Forchhammer, Søren; Wu, Xiaolin
2007-01-01
Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols....
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
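The syndrome-source-coding idea above can be illustrated with a toy (7,4) Hamming code example; the parity-check matrix H and the "at most one 1 per block" source model are illustrative assumptions, not taken from the paper. A sparse length-7 source block is stored as its 3-bit syndrome, and decoding recovers the block as the minimum-weight pattern (coset leader) with that syndrome:

```python
# Syndrome source coding with the (7,4) Hamming code: treat the source
# block as an "error pattern", keep only its syndrome (7 bits -> 3 bits),
# and decompress by finding the minimum-weight pattern with that syndrome.

H = [[1, 0, 1, 0, 1, 0, 1],   # standard Hamming parity-check matrix:
     [0, 1, 1, 0, 0, 1, 1],   # column j is the binary expansion of j+1
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(x):
    """3-bit syndrome H.x over GF(2)."""
    return tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in H)

def decompress(s):
    """Coset leader: the minimum-weight length-7 pattern with syndrome s."""
    zero = [0] * 7
    if s == syndrome(zero):
        return zero
    for i in range(7):          # distortionless only for blocks of weight <= 1
        e = [0] * 7
        e[i] = 1
        if syndrome(e) == s:
            return e

src = [0, 0, 0, 0, 1, 0, 0]                 # sparse source block
print(decompress(syndrome(src)) == src)     # expect True
```

Because the Hamming code corrects one error, compression is lossless exactly when each source block has weight at most one; the paper's point is that longer codes push the rate toward the source entropy.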
Shilov, Georgi E
1977-01-01
Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.
International Nuclear Information System (INIS)
Tutukov, A.V.; Fedorova, A.V.; Yungel'son, L.R.
1982-01-01
The conditions of mass exchange in close binary systems with component masses less than or equal to one solar mass have been analysed for the case when the system radiates gravitational waves. It has been shown that the mass exchange rate depends in a certain way on the mass ratio of the components and on the mass of the component that fills its inner critical lobe. The comparison of the observed periods, masses of contact components, and mass exchange rates of observed cataclysmic binaries has led to the conclusion that the evolution of the close binaries WZ Sge, OY Car, Z Cha, TT Ari, 2A 0311-227, and G 61-29 may be driven by the emission of gravitational waves.
Binary catalogue of exoplanets
Schwarz, Richard; Bazso, Akos; Zechner, Renate; Funk, Barbara
2016-02-01
Since 1995 there has been a database which lists most of the known exoplanets (The Extrasolar Planets Encyclopaedia at http://exoplanet.eu/). With the growing number of detected exoplanets in binary and multiple star systems it became more important to mark and separate them into a new database, which is not available in the Extrasolar Planets Encyclopaedia. We therefore established an online database (available at http://www.univie.ac.at/adg/schwarz/multiple.html) for all known exoplanets in binary star systems and, in addition, for multiple star systems; it will be updated regularly and linked to the Extrasolar Planets Encyclopaedia. The binary catalogue of exoplanets is available online as a data file and can be used for statistical purposes. Our database is divided into two parts: the data of the stars and of the planets, given in separate lists. We also describe the different parameters of the exoplanetary systems and present some applications.
Binary and Millisecond Pulsars.
Lorimer, Duncan R
2008-01-01
We review the main properties, demographics and applications of binary and millisecond radio pulsars. Our knowledge of these exciting objects has greatly increased in recent years, mainly due to successful surveys which have brought the known pulsar population to over 1800. There are now 83 binary and millisecond pulsars associated with the disk of our Galaxy, and a further 140 pulsars in 26 of the Galactic globular clusters. Recent highlights include the discovery of the young relativistic binary system PSR J1906+0746, a rejuvenation in globular cluster pulsar research including growing numbers of pulsars with masses in excess of 1.5 M⊙, a precise measurement of relativistic spin precession in the double pulsar system and a Galactic millisecond pulsar in an eccentric (e = 0.44) orbit around an unevolved companion. Supplementary material is available for this article at 10.12942/lrr-2008-8.
Binary and Millisecond Pulsars
Directory of Open Access Journals (Sweden)
Lorimer Duncan R.
2008-11-01
We review the main properties, demographics and applications of binary and millisecond radio pulsars. Our knowledge of these exciting objects has greatly increased in recent years, mainly due to successful surveys which have brought the known pulsar population to over 1800. There are now 83 binary and millisecond pulsars associated with the disk of our Galaxy, and a further 140 pulsars in 26 of the Galactic globular clusters. Recent highlights include the discovery of the young relativistic binary system PSR J1906+0746, a rejuvenation in globular cluster pulsar research including growing numbers of pulsars with masses in excess of 1.5 M⊙, a precise measurement of relativistic spin precession in the double pulsar system and a Galactic millisecond pulsar in an eccentric (e = 0.44) orbit around an unevolved companion.
International Nuclear Information System (INIS)
Aboufirassi, M; Angelique, J.C.; Bizard, G.; Bougault, R.; Brou, R.; Buta, A.; Colin, J.; Cussol, D.; Durand, D.; Genoux-Lubain, A.; Horn, D.; Kerambrun, A.; Laville, J.L.; Le Brun, C.; Lecolley, J.F.; Lefebvres, F.; Lopez, O.; Louvel, M.; Meslin, C.; Metivier, V.; Nakagawa, T.; Peter, J.; Popescu, R.; Regimbart, R.; Steckmeyer, J.C.; Tamain, B.; Vient, E.; Wieloch, A.; Yuasa-Nakagawa, K.
1998-01-01
The binary character of the exit channel in heavy-ion collisions at intermediate energies has been observed below 30 MeV/nucleon in medium and heavy systems. Measurements in light systems at energies approaching ∼ 100 MeV/nucleon, as well as in very heavy systems, have made it possible to extend the investigations of this binary process considerably. Thus, the study of the Pb + Au system showed that the complete-charge events indicated two distinct sources: the quasi-projectile and the quasi-target. The characteristics of these two sources are rather well reproduced by a trajectory computation which takes into account the Coulomb and nuclear forces and the friction arising from the projectile-target interaction. The Wilczynski diagram is used to probe the correlation between the kinetic-energy quenching and the deflection angle. In the case of the Pb + Au system at 29 MeV/nucleon the diagram indicates dissipative binary collisions typical of low energies. This binary aspect was also detected in the systems Xe + Ag at 44 MeV/nucleon, 36Ar + 27Al and 64Zn + natTi. Thus, it was possible to reconstruct the quasi-projectile and to study the evolution of its mass and excitation energy as a function of the impact parameter. The dissipative binary collisions represent, for the systems and energies under consideration, the main contribution to the cross section. This does not imply that there are no other processes; in particular, more or less complete fusion is also observed, but with a low cross section which decreases as the bombardment energy increases. More exclusive measurements with the INDRA detector on quasi-symmetric systems such as Ar + KCl and Xe + Sn seem to confirm the importance of binary collisions. The two-source reconstruction of the Xe + Sn data at 50 MeV/nucleon reproduces the same behaviour as that observed in the Pb + Au system at 29 MeV/nucleon.
Binary and Millisecond Pulsars
Directory of Open Access Journals (Sweden)
Lorimer Duncan R.
2005-11-01
We review the main properties, demographics and applications of binary and millisecond radio pulsars. Our knowledge of these exciting objects has greatly increased in recent years, mainly due to successful surveys which have brought the known pulsar population to over 1700. There are now 80 binary and millisecond pulsars associated with the disk of our Galaxy, and a further 103 pulsars in 24 of the Galactic globular clusters. Recent highlights have been the discovery of the first ever double pulsar system and a recent flurry of discoveries in globular clusters, in particular Terzan 5.
Energy transfer in contact binary systems
International Nuclear Information System (INIS)
Robertson, J.A.
1980-01-01
A simple model for the transfer of energy by steady circulation within the envelope of a contact binary system is presented. The model describes the fully compressible, two-dimensional flow of a perfect gas within a rectangular region in a uniform gravitational field. The region is heated non-uniformly from below. Coriolis forces are neglected but the interaction of the circulation with convection is discussed briefly. Numerical solutions of the linearized equations of the problem are discussed in detail, and the results of some non-linear calculations are also presented. The influence of alternative boundary conditions is examined. (author)
Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding
Directory of Open Access Journals (Sweden)
Ying Chen
2014-05-01
Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To test the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbor (KNN) and radial basis function neural networks (RBFNN), the experiment results indicate that the presented NNLS method performs better than the other methods on facial expression recognition tasks.
Equational binary decision diagrams
J.F. Groote (Jan Friso); J.C. van de Pol (Jaco)
2000-01-01
We incorporate equations in binary decision diagrams (BDD). The resulting objects are called EQ-BDDs. A straightforward notion of ordered EQ-BDDs (EQ-OBDD) is defined, and it is proved that each EQ-BDD is logically equivalent to an EQ-OBDD. Moreover, on EQ-OBDDs satisfiability and
Broekhuis, H.; Verkuyl, H.J
2014-01-01
The present paper adopts as its point of departure the claim by Te Winkel (1866) and Verkuyl (2008) that mental temporal representations are built on the basis of three binary oppositions: Present/Past, Synchronous/Posterior and Imperfect/Perfect. Te Winkel took the second opposition in terms of the
Tcheng, Ping
1989-01-01
Binary resistors in series tailored to precise value of resistance. Desired value of resistance obtained by cutting appropriate traces across resistors. Multibit, binary-based, adjustable resistor with high resolution used in many applications where precise resistance required.
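The binary-weighted scheme described above can be sketched numerically; the unit resistance, bit count, and helper names below are hypothetical illustrations, not taken from the tech brief:

```python
# Binary-weighted series resistors: resistor i has value (2**i) * r_unit
# and is shorted by a trace. Cutting trace i adds its resistance, so any
# multiple of r_unit up to (2**n_bits - 1) * r_unit is reachable.

def resistance(cut_traces, r_unit=100.0, n_bits=8):
    """Total resistance after cutting the traces whose indices are given."""
    return sum((2 ** i) * r_unit for i in range(n_bits) if i in cut_traces)

def traces_for(target, r_unit=100.0, n_bits=8):
    """Which traces to cut for a desired resistance (a multiple of r_unit)."""
    steps = round(target / r_unit)        # target in units of r_unit
    return {i for i in range(n_bits) if (steps >> i) & 1}

cuts = traces_for(2700.0)      # 27 * 100 ohms; 27 = 11011 in binary
print(sorted(cuts))            # expect [0, 1, 3, 4]
print(resistance(cuts))        # expect 2700.0
```

The resolution is one unit resistance, which is the "high resolution" property the brief refers to.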
Design of convolutional tornado code
Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu
2017-09-01
As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and to reduce computational complexity by eliminating the multi-level structure. The simulation results show that the cTN code can provide better packet-loss protection performance with lower computational complexity than the tTN code.
The True Ultracool Binary Fraction Using Spectral Binaries
Bardalez Gagliuffi, Daniella; Burgasser, Adam J.; Schmidt, Sarah J.; Gagné, Jonathan; Faherty, Jacqueline K.; Cruz, Kelle; Gelino, Chris
2018-01-01
Brown dwarfs bridge the gap between stars and giant planets. While the essential mechanisms governing their formation are not well constrained, binary statistics are a direct outcome of the formation process, and thus provide a means to test formation theories. Observational constraints on the brown dwarf binary fraction place it at 10-20%, dominated by imaging studies (85% of systems) with the most common separation at 4 AU. This coincides with the resolution limit of state-of-the-art imaging techniques, suggesting that the binary fraction is underestimated. We have developed a separation-independent method to identify and characterize tightly-separated dwarfs as spectral binaries by identifying traces of methane in the spectra of late-M and early-L dwarfs. Imaging follow-up of 17 spectral binaries yielded 3 (18%) resolved systems, corroborating the observed binary fraction, but 5 (29%) known binaries were missed, reinforcing the hypothesis that short-separation systems are undercounted. In order to find the true binary fraction of brown dwarfs, we have compiled a volume-limited, spectroscopic sample of M7-L5 dwarfs and searched for T dwarf companions. In the 25 pc volume, 4 candidates were found, three of which are already confirmed, leading to a spectral binary fraction of 0.95 ± 0.50%, albeit for a specific combination of spectral types. To extract the true binary fraction and determine the biases of the spectral binary method, we have produced a binary population simulation based on different assumptions on the mass function, age distribution, evolutionary models and mass ratio distribution. Applying the correction fraction resulting from this method to the observed spectral binary fraction yields a true binary fraction of 27 ± 4%, which is roughly within 1σ of the binary fraction obtained from high-resolution imaging studies, radial velocity and astrometric monitoring. This method can be extended to identify giant planet companions to young brown
Face Alignment via Regressing Local Binary Features.
Ren, Shaoqing; Cao, Xudong; Wei, Yichen; Sun, Jian
2016-03-01
This paper presents a highly efficient and accurate regression approach for face alignment. Our approach has two novel components: 1) a set of local binary features and 2) a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. This approach achieves state-of-the-art results when tested on the most challenging benchmarks to date. Furthermore, because extracting and regressing local binary features is computationally very cheap, our system is much faster than previous methods. It achieves over 3000 frames per second (FPS) on a desktop or 300 FPS on a mobile phone for locating a few dozen landmarks. We also study a key issue that is important but has received little attention in previous research: the face detector used to initialize alignment. We investigate several face detectors and perform quantitative evaluation on how they affect alignment accuracy. We find that an alignment-friendly detector can further greatly boost the accuracy of our alignment method, reducing the error by up to 16% relative. To facilitate practical usage of face detection/alignment methods, we also propose a convenient metric to measure how good a detector is for alignment initialization.
Memory Vulnerability Diagnosis for Binary Program
Directory of Open Access Journals (Sweden)
Tang Feng-Yi
2016-01-01
Vulnerability diagnosis is important for program security analysis. It is a further step to understand a vulnerability after it is detected, as well as a preparatory step for vulnerability repair or exploitation. This paper analyses the inner workings of major memory vulnerabilities and the threats they pose, and then suggests methods to diagnose several types of memory vulnerabilities in binary programs, a difficult task due to the lack of source code. The diagnosis methods target buffer overflow, use-after-free (UAF) and format string vulnerabilities. We carried out tests on the Linux platform to validate the effectiveness of the diagnosis methods, showing that they can judge the type of the vulnerability in a given binary program.
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC-compatible computer are included in the appendices along with a general introduction to linear programming. A programmers guide is also included for assistance in modifying and maintaining the program.
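Implicit enumeration, the method the manual names for binary programs, can be sketched as a depth-first search that prunes any partial assignment whose optimistic objective bound cannot beat the incumbent. This toy solver and the knapsack-style instance are illustrative assumptions, not ALPS code:

```python
# Implicit enumeration for a binary program: maximize c.x subject to
# A.x <= b with every x_i in {0, 1}. Subtrees whose optimistic bound
# cannot beat the best known value are never enumerated explicitly.

def solve_binary(c, A, b):
    n = len(c)
    best = (float("-inf"), None)   # (objective value, solution)

    def feasible(x):
        return all(sum(a * v for a, v in zip(row, x)) <= bi
                   for row, bi in zip(A, b))

    def branch(x, i, value):
        nonlocal best
        # Optimistic bound: take every remaining positive coefficient.
        bound = value + sum(cj for cj in c[i:] if cj > 0)
        if bound <= best[0]:
            return                 # prune: this subtree cannot improve
        if i == n:
            if feasible(x) and value > best[0]:
                best = (value, list(x))
            return
        for v in (1, 0):
            branch(x + [v], i + 1, value + c[i] * v)

    branch([], 0, 0)
    return best

# Maximize 5*x0 + 4*x1 + 3*x2 subject to 2*x0 + 3*x1 + x2 <= 4.
print(solve_binary([5, 4, 3], [[2, 3, 1]], [4]))   # expect (8, [1, 0, 1])
```

The bound is valid because it ignores constraints and only over-estimates the objective, so pruning never discards a better feasible solution.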
Wijers, R.A.M.J.
1996-01-01
Contents: introduction; distinguishing neutron stars and black holes; optical companions and dynamical masses; X-ray signatures of the nature of a compact object; structure and evolution of black-hole binaries; high-mass black-hole binaries; low-mass black-hole binaries; low-mass black holes; formation of black holes.
New quantum codes constructed from quaternary BCH codes
Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena
2016-10-01
In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each different code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.
Learning to assign binary weights to binary descriptor
Huang, Zhoudi; Wei, Zhenzhong; Zhang, Guangjun
2016-10-01
Constructing robust binary local feature descriptors is receiving increasing interest due to their binary nature, which enables fast processing while requiring significantly less memory than their floating-point competitors. To bridge the performance gap between binary and floating-point descriptors without increasing the cost of computing and matching, optimal binary weights are learned and assigned to the binary descriptor, since each bit might contribute differently to the distinctiveness and robustness. Technically, a large-scale regularized optimization method is applied to learn float weights for each bit of the binary descriptor. Furthermore, a binary approximation of the float weights is computed using an efficient alternating greedy strategy, which can significantly improve the discriminative power while preserving the fast matching advantage. Extensive experimental results on two challenging datasets (the Brown dataset and the Oxford dataset) demonstrate the effectiveness and efficiency of the proposed method.
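Matching under per-bit weights, the end product of the learning step above, can be sketched as a weighted Hamming distance; the weights and descriptors below are made-up toy values, and the paper's learning procedure is not reproduced:

```python
# Weighted Hamming matching: each bit of a binary descriptor carries a
# learned weight, so disagreements on discriminative bits cost more.

def weighted_hamming(d1, d2, w):
    """Sum of weights over the bits where the two descriptors differ."""
    return sum(wi for a, b, wi in zip(d1, d2, w) if a != b)

def match(query, database, w):
    """Index of the database descriptor closest to `query` under w."""
    return min(range(len(database)),
               key=lambda i: weighted_hamming(query, database[i], w))

w = [3, 1, 1, 2, 1, 1, 1, 2]          # discriminative bits weigh more
query = [1, 0, 1, 1, 0, 0, 1, 0]
db = [[1, 0, 1, 1, 0, 0, 1, 1],       # differs in one bit of weight 2
      [0, 0, 1, 1, 0, 0, 1, 0],       # differs in one bit of weight 3
      [1, 1, 0, 0, 1, 1, 0, 1]]       # differs in many bits
print(match(query, db, w))            # expect 0
```

With integer (or binarized) weights the distance stays a small sum of table lookups, which is why the paper insists on a binary approximation of the learned float weights.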
International Nuclear Information System (INIS)
Burkhard, N.R.
1979-01-01
The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
Specialized Binary Analysis for Vetting Android APPS Using GUI Logic
2016-04-01
...user expectation of what the app is doing. These techniques enable security analysts to quickly vet any given Android app even if the source code is unavailable. Keywords: malware detection for Android applications, binary analysis.
The maximum number of minimal codewords in long codes
DEFF Research Database (Denmark)
Alahmadi, A.; Aldred, R.E.L.; dela Cruz, R.
2013-01-01
Upper bounds on the maximum number of minimal codewords in a binary code follow from the theory of matroids. Random coding provides lower bounds. In this paper, we compare these bounds with analogous bounds for the cycle code of graphs. This problem (in the graphic case) was considered in 1981 by...
Binary gabor statistical features for palmprint template protection
Mu, Meiru; Ruan, Qiuqi; Shao, X.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.
2012-01-01
The biometric template protection system requires a highquality biometric channel and a well-designed error correction code (ECC). Due to the intra-class variations of biometric data, an efficient fixed-length binary feature extractor is required to provide a high-quality biometric channel so that
Introduction to coding and information theory
Roman, Steven
1997-01-01
This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.
Directory of Open Access Journals (Sweden)
Fabio Burderi
2007-05-01
Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies this hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
Hou, H. S.
1985-07-01
An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than descriptions of a particular system. Recent technology advances and research in this field are also mentioned.
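One of the steps listed in the overview, adaptive thresholding, can be sketched with an isodata-style iteration; the tiny pixel list and the convergence tolerance are made-up illustrations, not from the article:

```python
# Isodata-style thresholding for binarizing a document image: split the
# pixels at t, then move t to the midpoint of the two class means, and
# repeat until t stops moving.

def isodata_threshold(pixels, eps=0.5):
    """Iteratively refine a threshold between background and foreground."""
    t = sum(pixels) / len(pixels)          # start at the global mean
    while True:
        lo = [p for p in pixels if p <= t]
        hi = [p for p in pixels if p > t]
        new_t = 0.5 * (sum(lo) / len(lo) + sum(hi) / len(hi))
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

def binarize(pixels, t):
    """1 for foreground (bright), 0 for background."""
    return [1 if p > t else 0 for p in pixels]

img = [10, 12, 11, 200, 205, 198, 13, 202]   # dark background, bright strokes
t = isodata_threshold(img)
print(binarize(img, t))                      # expect [0, 0, 0, 1, 1, 1, 0, 1]
```

A production binarizer would compute local thresholds per window (hence "adaptive") and guard against empty classes; this global version shows only the core iteration.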
Dynamic Binary Modification Tools, Techniques and Applications
Hazelwood, Kim
2011-01-01
Dynamic binary modification tools form a software layer between a running application and the underlying operating system, providing the powerful opportunity to inspect and potentially modify every user-level guest application instruction that executes. Toolkits built upon this technology have enabled computer architects to build powerful simulators and emulators for design-space exploration, compiler writers to analyze and debug the code generated by their compilers, software developers to fully explore the features, bottlenecks, and performance of their software, and even end-users to extend
Ripple Design of LT Codes for BIAWGN Channels
DEFF Research Database (Denmark)
Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip
2014-01-01
This paper presents a novel framework, which enables a design of rateless codes for binary input additive white Gaussian noise (BIAWGN) channels, using the ripple-based approach known from the works for the binary erasure channel (BEC). We reveal that several aspects of the analytical results from...
International Nuclear Information System (INIS)
Suwono.
1978-01-01
A linear gate providing a variable gate duration from 0.40 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar. If the input signal is bipolar, the negative portion will be filtered out. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)
International Nuclear Information System (INIS)
Vretenar, M
2014-01-01
The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristic aspects of linac beam dynamics.
Error-Rate Bounds for Coded PPM on a Poisson Channel
Moision, Bruce; Hamkins, Jon
2009-01-01
Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
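As a rough illustration of the modulation and channel model described above (not of the article's bounds or of the APPM code itself), the following sketch maps bits to an M-ary PPM symbol and passes it through a memoryless Poisson channel with maximum-likelihood (max-count) slot detection; the mean photon-count parameters are assumed for illustration:

```python
import math
import random

def bits_to_ppm_symbol(bits):
    """Map log2(M) bits to a PPM slot index (the pulsed slot)."""
    slot = 0
    for b in bits:
        slot = (slot << 1) | b
    return slot

def poisson(lam):
    """Draw a Poisson sample (Knuth's method; fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def ppm_channel(slot, M, ns=10.0, nb=0.2):
    """Photon counts per slot: the signal slot has mean ns+nb, others nb."""
    return [poisson(ns + nb if i == slot else nb) for i in range(M)]

def ml_detect(counts):
    """ML detection on a Poisson channel reduces to picking the max-count slot."""
    return counts.index(max(counts))

random.seed(1)
M = 16                                  # 16-ary PPM carries 4 bits per symbol
tx = bits_to_ppm_symbol([1, 0, 1, 1])   # -> slot 11
rx = ml_detect(ppm_channel(tx, M))
print(tx, rx)
```

Because the Poisson likelihood ratio is monotone in the count, choosing the slot with the most detected photons is the ML decision for uncoded PPM; the accumulator and iterative decoder of the APPM scheme sit on top of this symbol channel.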
GridRun: A lightweight packaging and execution environment forcompact, multi-architecture binaries
Energy Technology Data Exchange (ETDEWEB)
Shalf, John; Goodale, Tom
2004-02-01
GridRun offers a very simple set of tools for creating and executing multi-platform binary executables. These ''fat-binaries'' archive native machine code into compact packages that are typically a fraction of the size of the original binary images they store, enabling efficient staging of executables for heterogeneous parallel jobs. GridRun interoperates with existing distributed job launchers/managers like Condor and the Globus GRAM to greatly simplify the logic required to launch native binary applications in distributed heterogeneous environments.
Optical three-step binary-logic-gate-based MSD arithmetic
Fyath, R. S.; Alsaffar, A. A. W.; Alam, M. S.
2003-11-01
A three-step modified signed-digit (MSD) adder is proposed which can be optically implemented using binary logic gates. The proposed scheme depends on encoding each MSD digit into a pair of binary digits using a two-state, multi-position-based encoding scheme. The design algorithm depends on constructing the addition truth table of binary-coded MSD numbers and then using Karnaugh maps to achieve output minimization. The functions associated with the optical binary logic gates are achieved by simply programming the decoding masks of an optical shadow-casting logic system.
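A minimal sketch of the MSD number system the adder operates on: the digit set is {-1, 0, 1}, which makes representations redundant and enables carry-free addition. The particular two-bit (sign, magnitude) encoding below is an assumed illustrative choice, not necessarily the paper's encoding scheme:

```python
# Modified signed-digit (MSD) numbers use digits {-1, 0, 1}, so a value
# can have several representations; this redundancy is what enables
# carry-free addition. The two-bit encoding below (sign bit, magnitude
# bit) is an illustrative choice, not necessarily the paper's scheme.

ENCODE = {0: (0, 0), 1: (0, 1), -1: (1, 1)}  # digit -> (sign, magnitude)

def msd_value(digits):
    """Value of an MSD digit string, most significant digit first."""
    value = 0
    for d in digits:
        value = 2 * value + d
    return value

def msd_encode(digits):
    """Encode each MSD digit as a pair of binary bits."""
    return [ENCODE[d] for d in digits]

# 5 = 8 - 4 + 0 + 1 has MSD representation (1, -1, 0, 1) among others.
print(msd_value([1, -1, 0, 1]))   # 5
print(msd_value([0, 1, 0, 1]))    # also 5
print(msd_encode([1, -1, 0, 1]))
```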
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the PRNG's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). By contrast, the Berlekamp-Massey algorithm needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity; the linear complexity is therefore generally given only as an estimate. The linearization method, on the other hand, calculates from the algorithm of the PRNG itself, so it can determine the lower bound of the linear complexity.
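The sequence-based side of this comparison can be sketched directly: the Berlekamp-Massey algorithm computes the linear complexity of a binary sequence over GF(2), i.e., the length of the shortest LFSR that generates it. A minimal sketch (the test sequence is an m-sequence of a 3-stage LFSR, an illustrative choice):

```python
def berlekamp_massey(s):
    """Linear complexity of a binary sequence s (list of 0/1) over GF(2)."""
    n = len(s)
    c = [0] * n; b = [0] * n      # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # Discrepancy: next bit predicted by the current LFSR vs. actual bit.
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d == 1:
            t = c[:]
            # c(x) += x^(i-m) * b(x) over GF(2)
            for j in range(n - (i - m)):
                c[(i - m) + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# Two periods of the m-sequence s_n = s_{n-1} ^ s_{n-3}, seed 1,0,0:
# a maximal-length sequence of a 3-stage LFSR has linear complexity 3.
print(berlekamp_massey([1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0]))  # 3
```

Note that this computation consumes the output sequence itself, which is exactly the dependence on the initial value that the linearization method avoids.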
Massive Black Hole Binary Evolution
Directory of Open Access Journals (Sweden)
Merritt David
2005-11-01
Full Text Available Coalescence of binary supermassive black holes (SBHs would constitute the strongest sources of gravitational waves to be observed by LISA. While the formation of binary SBHs during galaxy mergers is almost inevitable, coalescence requires that the separation between binary components first drop by a few orders of magnitude, due presumably to interaction of the binary with stars and gas in a galactic nucleus. This article reviews the observational evidence for binary SBHs and discusses how they would evolve. No completely convincing case of a bound, binary SBH has yet been found, although a handful of systems (e.g. interacting galaxies; remnants of galaxy mergers are now believed to contain two SBHs at projected separations of <~ 1kpc. N-body studies of binary evolution in gas-free galaxies have reached large enough particle numbers to reproduce the slow, “diffusive” refilling of the binary’s loss cone that is believed to characterize binary evolution in real galactic nuclei. While some of the results of these simulations - e.g. the binary hardening rate and eccentricity evolution - are strongly N-dependent, others - e.g. the “damage” inflicted by the binary on the nucleus - are not. Luminous early-type galaxies often exhibit depleted cores with masses of ~ 1-2 times the mass of their nuclear SBHs, consistent with the predictions of the binary model. Studies of the interaction of massive binaries with gas are still in their infancy, although much progress is expected in the near future. Binary coalescence has a large influence on the spins of SBHs, even for mass ratios as extreme as 10:1, and evidence of spin-flips may have been observed.
Superlattice configurations in linear chain hydrocarbon binary mixtures
Indian Academy of Sciences (India)
monoclinic, monoclinic-monoclinic) are realizable, because of discrete orientational changes in the alignment of molecules of the n-C28H58 hydrocarbon, through an angle nθ, where n = 1, 2, 3 … and the angle θ has an average value of 3.3°.
Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology
Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.
2009-01-01
Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…
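The distinction can be illustrated numerically: in a logistic random-intercepts model, averaging over the random intercept attenuates the slope, so the marginal (population-averaged) effect is smaller than the conditional (subject-specific) effect. A sketch with assumed parameter values (the slope, intercept SD, and quadrature settings are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def marginal_prob(x, beta, sigma, grid=2001, span=6.0):
    """P(y=1 | x) after integrating out a N(0, sigma^2) random intercept.

    Plain midpoint quadrature over the intercept b; grid and span are
    illustrative choices, not a recommended estimation method.
    """
    total = 0.0
    step = 2 * span * sigma / grid
    for i in range(grid):
        b = -span * sigma + (i + 0.5) * step
        weight = math.exp(-0.5 * (b / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        total += weight * sigmoid(beta * x + b) * step
    return total

beta, sigma = 1.0, 2.0   # conditional slope and random-intercept SD (assumed)

# Implied marginal slope at x = 0 via a numerical logit derivative: it is
# attenuated relative to the conditional slope beta, which is why marginal
# and random-intercepts models answer different questions.
h = 1e-4
p0, p1 = marginal_prob(-h, beta, sigma), marginal_prob(h, beta, sigma)
marginal_slope = (math.log(p1 / (1 - p1)) - math.log(p0 / (1 - p0))) / (2 * h)
print(round(beta, 3), round(marginal_slope, 3))
```

The marginal slope answers a group-level question ("how does the average response change with x"), while beta answers a subject-level one, matching the contrast drawn in the abstract.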
An, Shengli; Zhang, Yanhong; Chen, Zheng
2012-12-01
Binary-classification repeated-measurement data were analyzed with generalized estimating equations (GEE) and generalized linear mixed models (GLMMs) using SPSS 19.0. The GEE and GLMM models were tested on a sample of binary-classification repeated-measurement data in SPSS 19.0. Compared with SAS, SPSS 19.0 allowed convenient analysis of categorical repeated-measurement data using GEE and GLMMs.
Said-Houari, Belkacem
2017-01-01
This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs of all the main results, and linear transformations: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformations, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...
Binary rf pulse compression experiment at SLAC
International Nuclear Information System (INIS)
Lavine, T.L.; Spalek, G.; Farkas, Z.D.; Menegat, A.; Miller, R.H.; Nantista, C.; Wilson, P.B.
1990-06-01
Using rf pulse compression it will be possible to boost the 50- to 100-MW output expected from high-power microwave tubes operating in the 10- to 20-GHz frequency range, to the 300- to 1000-MW level required by the next generation of high-gradient linacs for linear colliders. A high-power X-band three-stage binary rf pulse compressor has been implemented and operated at the Stanford Linear Accelerator Center (SLAC). In each of three successive stages, the rf pulse-length is compressed by half, and the peak power is approximately doubled. The experimental results presented here have been obtained at low-power (1-kW) and high-power (15-MW) input levels in initial testing with a TWT and a klystron. Rf pulses initially 770 nsec long have been compressed to 60 nsec. Peak power gains of 1.8 per stage, and 5.5 for three stages, have been measured. This corresponds to a peak power compression efficiency of about 90% per stage, or about 70% for three stages, consistent with the individual component losses. The principle of operation of a binary pulse compressor (BPC) is described in detail elsewhere. We recently have implemented and operated at SLAC a high-power (high-vacuum) three-stage X-band BPC. First results from the high-power three-stage BPC experiment are reported here.
Adaptable Value-Set Analysis for Low-Level Code
Brauer, Jörg; Hansen, René Rydhof; Kowalewski, Stefan; Larsen, Kim G.; Olesen, Mads Chr.
2012-01-01
This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics in...
Binary optics: Trends and limitations
Farn, Michael W.; Veldkamp, Wilfrid B.
1993-01-01
We describe the current state of binary optics, addressing both the technology and the industry (i.e., marketplace). With respect to the technology, the two dominant aspects are optical design methods and fabrication capabilities, with the optical design problem being limited by human innovation in the search for new applications and the fabrication issue being limited by the availability of resources required to improve fabrication capabilities. With respect to the industry, the current marketplace does not favor binary optics as a separate product line and so we expect that companies whose primary purpose is the production of binary optics will not represent the bulk of binary optics production. Rather, binary optics' more natural role is as an enabling technology - a technology which will directly result in a competitive advantage in a company's other business areas - and so we expect that the majority of binary optics will be produced for internal use.
Particle acceleration in binaries
Directory of Open Access Journals (Sweden)
Sinitsyna V.G.
2017-01-01
Full Text Available Cygnus X-3 is a massive binary system and one of the most powerful sources of radio and X-ray emission, consisting of an accreting compact object, probably a black hole, with a Wolf-Rayet star companion. Based on the detections of ultra-high-energy gamma-rays by Kiel and Havera Park, Cygnus X-3 has been proposed to be one of the most powerful sources of charged cosmic-ray particles in the Galaxy. The results of long-term observations of the Cyg X-3 binary at energies of 800 GeV–85 TeV, detected by SHALON since 1995, are presented with images, integral spectra and spectral energy distributions. The identification of the source with Cygnus X-3 detected by SHALON was secured by the detection of its 4.8-hour orbital period in TeV gamma-rays. During the whole observation period of Cyg X-3 with SHALON, significant flux increases were detected at energies above 0.8 TeV. These TeV flux increases are correlated with flaring activity at lower energies in X-rays and/or in Fermi LAT observations, as well as with radio emission from the relativistic jets of Cygnus X-3. The variability of the very-high-energy gamma-radiation and the correlation of radiation activity over this wide energy range can provide essential information on particle production mechanisms up to very high energies, while the modulation of the very-high-energy emission by the orbital motion of the binary system provides an understanding of the emission processes and of the nature and location of particle acceleration.
Binaries traveling through a gaseous medium: dynamical drag forces and internal torques
Energy Technology Data Exchange (ETDEWEB)
Sánchez-Salcedo, F. J. [Instituto de Astronomía, Universidad Nacional Autónoma de México, Ciudad Universitaria, Apt. Postal 70 264, C.P. 04510, Mexico City (Mexico); Chametla, Raul O., E-mail: jsanchez@astro.unam.mx [Escuela Superior de Física y Matemáticas, Instituto Politécnico Nacional, UP Adolfo López Mateos, Mexico City (Mexico)
2014-10-20
Using time-dependent linear theory, we investigate the morphology of the gravitational wake induced by a binary, whose center of mass moves at velocity V{sub cm} against a uniform background of gas. For simplicity, we assume that the components of the binary are on circular orbits about their common center of mass. The consequences of dynamical friction are twofold. First, gas dynamical friction may drag the center of mass of the binary and cause the binary to migrate. Second, drag forces also induce a braking torque, which causes the orbits of the components of the binary to shrink. We compute the drag forces acting on one component of the binary due to the gravitational interaction with its own wake. We show that the dynamical friction force responsible for decelerating the center of mass of the binary is smaller than it is in the point-mass case because of the loss of gravitational focusing. We show that the braking internal torque depends on the Mach numbers of each binary component about their center of mass, and also on the Mach number of the center of mass of the binary. In general, the internal torque decreases as the velocity of the binary relative to the ambient gas cloud increases. However, this is not always the case. We also mention the relevance of our results to the period distribution of binaries.
Analysis and Design of Binary Message-Passing Decoders
DEFF Research Database (Denmark)
Lechner, Gottfried; Pedersen, Troels; Kramer, Gerhard
2012-01-01
Binary message-passing decoders for low-density parity-check (LDPC) codes are studied by using extrinsic information transfer (EXIT) charts. The channel delivers hard or soft decisions and the variable node decoder performs all computations in the L-value domain. A hard decision channel results in the well-known Gallager B algorithm, and increasing the output alphabet from hard decisions to two bits yields a gain of more than 1.0 dB in the required signal to noise ratio when using optimized codes. The code optimization requires adapting the mixing property of EXIT functions to the case of binary message-passing decoders. Finally, it is shown that errors on cycles consisting only of degree two and three variable nodes cannot be corrected, and a necessary and sufficient condition for the existence of a cycle-free subgraph is derived.
DEFF Research Database (Denmark)
2015-01-01
Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can trade off computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof.
Multi-Messenger Astronomy: White Dwarf Binaries, LISA and GAIA
Bueno, Michael; Breivik, Katelyn; Larson, Shane L.
2017-01-01
The discovery of gravitational waves has ushered in a new era in astronomy. The low-frequency band covered by the future LISA detector provides unprecedented opportunities for multi-messenger astronomy. With the Global Astrometric Interferometer for Astrophysics (GAIA) mission, we expect to discover about 1,000 eclipsing binary systems composed of a WD and a main sequence star - a sizeable increase from the approximately 34 currently known binaries of this type. In advance of the first GAIA data release and the launch of LISA within the next decade, we used the Binary Stellar Evolution (BSE) code to simulate the evolution of White Dwarf Binaries (WDB) in a fixed galaxy population of about 196,000 sources. Our goal is to assess the detectability of a WDB by LISA and GAIA. Using the parameters from our population synthesis, we calculate the GW strength h and the apparent GAIA magnitude G. We can then use a scale factor to predict how many multi-messenger sources we expect to be detectable by both LISA and GAIA in a galaxy the size of the Milky Way. We generate the binary population 10 times to ensure randomness in the distance assignment and average our results. We then determine the astronomical chirp, which is the difference between the total chirp and the GW chirp. With the astronomical chirp and simulations of mass transfer and tides, we can gather more information about the internal astrophysics of stars in ultra-compact binary systems.
Binary black holes on a budget: simulations using workstations
International Nuclear Information System (INIS)
Marronetti, Pedro; Tichy, Wolfgang; Bruegmann, Bernd; Gonzalez, Jose; Hannam, Mark; Husa, Sascha; Sperhake, Ulrich
2007-01-01
Binary black hole simulations have traditionally been computationally very expensive: current simulations are performed on supercomputers involving dozens if not hundreds of processors, so systematic studies of the parameter space of binary black hole encounters still seem prohibitive with current technology. Here we show how the multi-layered refinement level code BAM can be used on dual-processor workstations to simulate certain binary black hole systems. BAM, based on the moving-punctures method, provides grid structures composed of boxes of increasing resolution near the centre of the grid. In the case of binaries, the highest-resolution boxes are placed around each black hole and track them in their orbits until the final merger, when a single set of levels surrounds the black hole remnant. This is particularly useful when simulating spinning black holes, since the gravitational field gradients are larger. We present simulations of binaries with equal-mass black holes with spins parallel to the binary axis and intrinsic magnitude of S/m^2 = 0.75. Our results compare favourably to those of previous simulations of this particular system. We show that the moving-punctures method produces stable simulations at maximum spatial resolutions up to M/160 and for durations of up to the equivalent of 20 orbital periods.
Binary Cockroach Swarm Optimization for Combinatorial Optimization Problem
Directory of Open Access Journals (Sweden)
Ibidun Christiana Obagbuwa
2016-09-01
Full Text Available The Cockroach Swarm Optimization (CSO algorithm is inspired by cockroach social behavior. It is a simple and efficient meta-heuristic algorithm and has been applied to solve global optimization problems successfully. The original CSO algorithm and its variants operate mainly in continuous search space and cannot solve binary-coded optimization problems directly. Many optimization problems have their decision variables in binary. Binary Cockroach Swarm Optimization (BCSO is proposed in this paper to tackle such problems and was evaluated on the popular Traveling Salesman Problem (TSP, which is considered to be an NP-hard Combinatorial Optimization Problem (COP. A transfer function was employed to map the continuous search space of CSO to a binary search space. The performance of the proposed algorithm was tested first on benchmark functions through simulation studies and compared with the performance of existing binary particle swarm optimization and continuous space versions of CSO. The proposed BCSO was adapted to TSP and applied to a set of benchmark instances of symmetric TSP from the TSP library. The results of the proposed Binary Cockroach Swarm Optimization (BCSO algorithm on TSP were compared to other meta-heuristic algorithms.
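A common way to map a continuous search step to a binary decision, widely used in binary PSO variants, is a sigmoid transfer function; the paper's exact transfer function may differ, so the following is an assumed sketch:

```python
import math
import random

def sigmoid_transfer(v):
    """Map a continuous step/velocity to a probability of setting the bit."""
    return 1.0 / (1.0 + math.exp(-v))

def binarize(velocities, rng=random):
    """One common transfer scheme: bit_i = 1 with probability S(v_i)."""
    return [1 if rng.random() < sigmoid_transfer(v) else 0 for v in velocities]

random.seed(0)
v = [-4.0, 0.0, 4.0]
print(binarize(v))
# Large negative velocities almost always give 0, large positive give 1.
```

Per-bit stochastic rounding like this is what lets a continuous-space swarm update drive moves in a binary search space such as TSP edge-selection encodings.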
Towers of generalized divisible quantum codes
Haah, Jeongwan
2018-04-01
A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is 2^ν for a positive integer ν, then one can construct a Calderbank-Shor-Steane (CSS) code, where the X-stabilizer space is the divisible classical code, that admits a transversal gate in the ν-th level of the Clifford hierarchy. We consider a generalization of the divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor 2^(ν+1) and code distance d from any CSS code of code distance d and divisor 2^ν where the transversal X is a nontrivial logical operator. The encoding rate of the new code is approximately d times smaller than that of the old code. In particular, for large d and ν ≥ 2, our construction yields a CSS code of parameters [[O(d^(ν−1)), Ω(d), d]] admitting a transversal gate at the ν-th level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal T gates. Our tower contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.
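Classical divisibility is easy to check on a small example: every codeword weight of the [8,4] first-order Reed-Muller code RM(1,3) is divisible by 4, making it a doubly even (divisible) code. The code choice is illustrative and is not taken from the article:

```python
from itertools import product

# Generator matrix of the [8,4] first-order Reed-Muller code RM(1,3):
# the all-ones row plus the three coordinate functions. Its nonzero
# codeword weights are 4 and 8, so every weight is divisible by 4.
G = [
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0, 1],
]

def codewords(G):
    """Enumerate all GF(2) linear combinations of the rows of G."""
    n = len(G[0])
    for coeffs in product([0, 1], repeat=len(G)):
        word = [0] * n
        for c, row in zip(coeffs, G):
            if c:
                word = [a ^ b for a, b in zip(word, row)]
        yield word

weights = sorted({sum(w) for w in codewords(G)})
print(weights)                            # [0, 4, 8]
assert all(w % 4 == 0 for w in weights)   # divisible with divisor 2^2
```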
Stoll, R R
1968-01-01
Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understanding.
International Nuclear Information System (INIS)
Morales Mendoza, N.; Goyanes, S.; Chiliotte, C.; Bekeris, V.; Rubiolo, G.; Candal, R.
2012-01-01
Magnetic binary nanofillers containing multiwall carbon nanotubes (MWCNT) and hercynite were synthesized by Chemical Vapor Deposition (CVD) on Fe/AlOOH prepared by the sol-gel method. The catalyst precursor was fired at 450 °C, ground and sifted through different meshes. Two powders were obtained with different particle sizes: sample A (50-75 μm) and sample B (smaller than 50 μm). These powders are composed of iron oxide particles widely dispersed in the non-crystalline matrix of aluminum oxide, and they are not ferromagnetic. After the reduction process, the powders are composed of α-Fe nanoparticles inside a hercynite matrix. The resulting nanofillers are composed of hercynite containing α-Fe nanoparticles and MWCNT. The binary magnetic nanofillers were slightly ferromagnetic. The saturation magnetization of the nanofillers depended on the powder particle size. The nanofiller obtained from powder particles in the range 50-75 μm showed a saturation magnetization 36% higher than the one formed from powder particles smaller than 50 μm. The phenomenon is explained in terms of changes in the magnetic environment of the particles as a consequence of the presence of MWCNT.
Solow, Daniel
2014-01-01
This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.
Liesen, Jörg
2015-01-01
This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...
Berberian, Sterling K
2014-01-01
Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
LINEAR2007, Linear-Linear Interpolation of ENDF Format Cross-Sections
International Nuclear Information System (INIS)
2007-01-01
1 - Description of program or function: LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form. Codes used subsequently thus need to consider only linear-linear data. IAEA1311/15: This version includes the updates up to January 30, 2007. Changes in the ENDF/B-VII format and procedures, as well as in the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: - Linear VERS. 2007-1 (JAN. 2007): checked against all of ENDF/B-VII; increased page size from 60,000 to 600,000 points. 2 - Method of solution: Each section of data is considered separately. Each section of File 3, 23, and 27 data consists of a table of cross section versus energy with any of five interpolation laws. LINEAR will replace each section with a new table of energy versus cross section data in which the interpolation law is always linear in energy and cross section. The histogram (constant cross section between two energies) interpolation law is converted to linear-linear by substituting two points for each initial point. Linear-linear data are not altered. For the log-linear, linear-log and log-log laws, the cross section data are converted to linear by an interval-halving algorithm: each interval is divided in half until the value at the middle of the interval can be approximated by linear-linear interpolation to within a given accuracy. The LINEAR program uses a multipoint fractional-error thinning algorithm to minimize the size of each cross section table.
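The interval-halving idea can be sketched as follows; the tolerance, the choice of a log-space midpoint, and the sample 1/v-like cross section are illustrative assumptions, not LINEAR's exact implementation:

```python
import math

def loglog_interp(x, x1, y1, x2, y2):
    """Log-log interpolation between (x1, y1) and (x2, y2)."""
    t = math.log(x / x1) / math.log(x2 / x1)
    return y1 * (y2 / y1) ** t

def linearize(x1, y1, x2, y2, tol=0.001):
    """Replace a log-log interval with points safe for linear-linear
    interpolation, halving until the midpoint error is within tol.
    Returns the left endpoints; append the final right endpoint yourself."""
    xm = math.sqrt(x1 * x2)                          # midpoint in log space
    ym_true = loglog_interp(xm, x1, y1, x2, y2)      # law value at midpoint
    ym_lin = y1 + (y2 - y1) * (xm - x1) / (x2 - x1)  # linear-linear value
    if abs(ym_lin - ym_true) <= tol * abs(ym_true):
        return [(x1, y1)]
    return (linearize(x1, y1, xm, ym_true, tol)
            + linearize(xm, ym_true, x2, y2, tol))

# A 1/v-like cross section tabulated at two energies (illustrative values):
# sigma(E) = 10 * E**-0.5, so sigma(1) = 10 and sigma(100) = 1.
points = linearize(1.0, 10.0, 100.0, 1.0) + [(100.0, 1.0)]
print(len(points))
```

The refined table can then be interpolated linearly everywhere, which is the property downstream codes rely on; a thinning pass would afterwards remove points that linear interpolation already reproduces within tolerance.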
Linear programming using Matlab
Ploskas, Nikolaos
2017-01-01
This book offers a theoretical and computational presentation of a variety of linear programming algorithms and methods with an emphasis on the revised simplex method and its components. A theoretical background and mathematical formulation is included for each algorithm as well as comprehensive numerical examples and corresponding MATLAB® code. The MATLAB® implementations presented in this book are sophisticated and allow users to find solutions to large-scale benchmark linear programs. Each algorithm is followed by a computational study on benchmark problems that analyze the computational behavior of the presented algorithms. As a solid companion to existing algorithmic-specific literature, this book will be useful to researchers, scientists, mathematical programmers, and students with a basic knowledge of linear algebra and calculus. The clear presentation enables the reader to understand and utilize all components of simplex-type methods, such as presolve techniques, scaling techniques, pivoting ru...
Binary CFG Rebuilt of Self-Modifying Codes
2016-10-03
often observed in a heterogeneous system, e.g., Java web applications querying external SQL servers . In the symbolic execution, this observation enables...destination server of EMDIVI, which caused huge information leak from Japanese governmental pension fund in 2015. The first year of the project, we...relatively heavy execution. For instance, BE-PUM automatically detects the destination server of EMDIVI, which caused huge information leak from Japanese
The Overcontact Binary V535 Aurigae: Well On Its Way to Coalescing?
Bradstreet, David H.; Sanders, S. J.; Hiebert-Crape, B.
2011-01-01
V535 Aurigae is a faint (12.8 mag), short-period (9.23 hours) overcontact binary which has only one published light curve and no published analysis. 811 digital images in the V filter and 845 in R were obtained over seven nights in the winter of 2008 at the Bradstreet Observatory at Eastern University. The data were obtained using the Observatory's 41-cm telescope coupled with an SBIG ST-10XME CCD camera. Six new timings of minimum light were measured in order to analyze whether or not the system's period has changed since its discovery. Preliminary results tentatively indicated that the period had been decreasing linearly over the previous nine years since it was discovered. In order to confirm the large change in period, the binary was observed in 2009 and 2010, resulting in complete V and R light curves in both seasons and an additional nine timings of minimum light. These timings confirm that V535 Aur has a dP/dt = -0.208 sec/yr, some twenty times greater than the average period change for overcontact systems. V535 Aur thus displays the largest decrease in period for any known overcontact system. The light curves were compiled into phased normal points and analyzed using the Binary Maker 3.0 light curve analysis software. These preliminary results were then fine-tuned using the benchmark Wilson-Devinney code as implemented in Andrej Prsa's PHOEBE software suite. The analysis shows that the system is totally eclipsing with a fillout of 85% and equal component temperatures. The extraordinarily large fillout (the average for overcontact systems is 28%), equal stellar temperatures and very large period decrease may indicate that V535 Aur is well on its way to coalescing into a single star. The methods and results of the data acquisition, period study and light curve analysis will be presented in this poster.
Formation and Evolution of X-ray Binaries
Shao, Y.
2017-07-01
use of both binary population synthesis and detailed binary evolution calculations. We find that the birthrate is around 10^-4 yr^-1 for the incipient X-ray binaries in both cases. We demonstrate the distribution of the ULX population in the donor mass - orbital period plane. Our results suggest that, compared with black hole X-ray binaries, neutron star X-ray binaries may significantly contribute to the ULX population, and high/intermediate-mass X-ray binaries dominate the neutron star ULX population in M82/Milky Way-like galaxies, respectively. In Chapter 6, the population of intermediate- and low-mass X-ray binaries in the Galaxy is explored. We investigate the formation and evolutionary sequences of Galactic intermediate- and low-mass X-ray binaries by combining binary population synthesis (BPS) and detailed stellar evolutionary calculations. Using an updated BPS code we compute the evolution of massive binaries that leads to the formation of incipient I/LMXBs, and present their distribution in the initial donor mass vs. initial orbital period diagram. We then follow the evolution of I/LMXBs until the formation of binary millisecond pulsars (BMSPs). We show that during the evolution of I/LMXBs they are likely to be observed as relatively compact binaries. The resultant BMSPs have orbital periods ranging from about 1 day to a few hundred days. These features are consistent with observations of LMXBs and BMSPs. We also confirm the discrepancies between theoretical predictions and observations mentioned in the literature, that is, the theoretical average mass transfer rates of LMXBs are considerably lower than observed, and the number of BMSPs with orbital periods ∼0.1-1 d is severely underestimated. Both imply that something is missing in the modeling of LMXBs, which is likely to be related to the mechanisms of orbital angular momentum loss. Finally, in Chapter 7 we summarize our results and give prospects for future work.
DEFF Research Database (Denmark)
Cox, Geoff
Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...
Relativistic Binaries in Globular Clusters
Directory of Open Access Journals (Sweden)
Matthew J. Benacquista
2013-03-01
Full Text Available Galactic globular clusters are old, dense star systems typically containing 10^4 – 10^6 stars. As an old population of stars, globular clusters contain many collapsed and degenerate objects. As a dense population of stars, globular clusters are the scene of many interesting close dynamical interactions between stars. These dynamical interactions can alter the evolution of individual stars and can produce tight binary systems containing one or two compact objects. In this review, we discuss theoretical models of globular cluster evolution and binary evolution, techniques for simulating this evolution that leads to relativistic binaries, and current and possible future observational evidence for this population. Our discussion of globular cluster evolution will focus on the processes that boost the production of tight binary systems and the subsequent interaction of these binaries that can alter the properties of both bodies and can lead to exotic objects. Direct N-body integrations and Fokker–Planck simulations of the evolution of globular clusters that incorporate tidal interactions and lead to predictions of relativistic binary populations are also discussed. We discuss the current observational evidence for cataclysmic variables, millisecond pulsars, and low-mass X-ray binaries as well as possible future detection of relativistic binaries with gravitational radiation.
Elements of algebraic coding systems
Cardoso da Rocha, Jr, Valdemar
2014-01-01
Elements of Algebraic Coding Systems is an introductory text to algebraic coding theory. In the first chapter, you'll gain inside knowledge of coding fundamentals, which is essential for a deeper understanding of state-of-the-art coding systems. This book is a quick reference for those who are unfamiliar with this topic, as well as for use with specific applications such as cryptography and communication. Linear error-correcting block codes through elementary principles span eleven chapters of the text. Cyclic codes, some finite field algebra, Goppa codes, algebraic decoding algorithms, and applications in public-key cryptography and secret-key cryptography are discussed, including problems and solutions at the end of each chapter. Three appendices cover the Gilbert bound and some related derivations, a derivation of the MacWilliams' identities based on the probability of undetected error, and two important tools for algebraic decoding, namely the finite field Fourier transform and the Euclidean algorithm f...
NONLINEAR TIDES IN CLOSE BINARY SYSTEMS
International Nuclear Information System (INIS)
Weinberg, Nevin N.; Arras, Phil; Quataert, Eliot; Burkart, Josh
2012-01-01
We study the excitation and damping of tides in close binary systems, accounting for the leading-order nonlinear corrections to linear tidal theory. These nonlinear corrections include two distinct physical effects: three-mode nonlinear interactions, i.e., the redistribution of energy among stellar modes of oscillation, and nonlinear excitation of stellar normal modes by the time-varying gravitational potential of the companion. This paper, the first in a series, presents the formalism for studying nonlinear tides and studies the nonlinear stability of the linear tidal flow. Although the formalism we present is applicable to binaries containing stars, planets, and/or compact objects, we focus on non-rotating solar-type stars with stellar or planetary companions. Our primary results include the following: (1) The linear tidal solution almost universally used in studies of binary evolution is unstable over much of the parameter space in which it is employed. More specifically, resonantly excited internal gravity waves in solar-type stars are nonlinearly unstable to parametric resonance for companion masses M′ ≳ 10-100 M⊕ at orbital periods P ≈ 1-10 days. The nearly static 'equilibrium' tidal distortion is, however, stable to parametric resonance except for solar binaries with P ∼ 3 [P/10 days] for a solar-type star) and drives them as a single coherent unit with growth rates that are a factor of ≈N faster than the standard three-wave parametric instability. These are local instabilities viewed through the lens of global analysis; the coherent global growth rate follows local rates in the regions where the shear is strongest. In solar-type stars, the dynamical tide is unstable to this collective version of the parametric instability for even sub-Jupiter companion masses with P ≲ a month. (4) Independent of the parametric instability, the dynamical and equilibrium tides excite a wide range of stellar p-modes and g-modes by nonlinear inhomogeneous forcing
Spectral properties of binary asteroids
Pajuelo, Myriam; Birlan, Mirel; Carry, Benoît; DeMeo, Francesca E.; Binzel, Richard P.; Berthier, Jérôme
2018-04-01
We present the first attempt to characterize the distribution of taxonomic class among the population of binary asteroids (15% of all small asteroids). For that, an analysis of 0.8-2.5 μm near-infrared spectra obtained with the SpeX instrument on the NASA/IRTF is presented. Taxonomic class and meteorite analog are determined for each target, increasing the sample of binary asteroids with known taxonomy by 21%. Most binary systems are found in the S-, X-, and C-classes, followed by Q- and V-types. The rate of binary systems in each taxonomic class agrees within uncertainty with the background population of small near-Earth objects and inner main belt asteroids, except for the C-types, which are under-represented among binaries.
Planets in Binary Star Systems
Haghighipour, Nader
2010-01-01
The discovery of extrasolar planets over the past decade has had major impacts on our understanding of the formation and dynamical evolution of planetary systems. There are features and characteristics unseen in our solar system and unexplainable by the current theories of planet formation and dynamics. Among these new surprises is the discovery of planets in binary and multiple-star systems. The discovery of such "binary-planetary" systems has confronted astrodynamicists with many new challenges, and has led them to re-examine the theories of planet formation and dynamics. Among these challenges are: How are planets formed in binary star systems? What would be the notion of habitability in such systems? Under what conditions can binary star systems have habitable planets? How will volatiles necessary for life appear on such planets? This volume seeks to gather the current research in the area of planets in binary and multistar systems and to familiarize readers with its associated theoretical and observation...
International Nuclear Information System (INIS)
Linsky, J.L.
1984-01-01
The author attempts to place in context the vast amount of data obtained in the last few years as a result of X-ray, ultraviolet, optical, and microwave observations of RS CVn and similar spectroscopic binary systems. He concentrates on the RS CVn systems and their long-period analogs, and restricts the scope by attempting to answer, on the basis of recent data and theory, the following questions: (1) Are the original defining characteristics still valid and still adequate? (2) What is the evidence for discrete active regions? (3) Have we derived any meaningful physical properties for the atmospheres of RS CVn systems? (4) What are the flare observations telling us about magnetic fields in the RS CVn systems? (5) Is there evidence for systematic trends in RS CVn systems with spectral type?
Unobserved Heterogeneity in the Binary Logit Model with Cross-Sectional Data and Short Panels
DEFF Research Database (Denmark)
Holm, Anders; Jæger, Mads Meier; Pedersen, Morten
This paper proposes a new approach to dealing with unobserved heterogeneity in applied research using the binary logit model with cross-sectional data and short panels. Unobserved heterogeneity is particularly important in non-linear regression models such as the binary logit model because, unlike in linear regression models, estimates of the effects of observed independent variables are biased even when omitted independent variables are uncorrelated with the observed independent variables. We propose an extension of the binary logit model based on a finite mixture approach in which we conceptualize...
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
International Nuclear Information System (INIS)
Alcaraz, J.
2001-01-01
After several years of study, e+e- linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e+e- linear collider in the 500 GeV-1 TeV range will be able to complement it to an unprecedented level of precision in many possible areas: Higgs, signals beyond the SM and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs
Edwards, Harold M
1995-01-01
In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.
Relativistic (3+1) dimensional hydrodynamic simulations of compact interacting binary systems
International Nuclear Information System (INIS)
Mathews, G.J.; Evans, C.R.; Wilson, J.R.
1986-09-01
We discuss the development of a relativistic hydrodynamic code for describing the evolution of astrophysical systems in three spatial dimensions. The application of this code to several test problems is presented. Preliminary results from the simulation of the dynamics of accreting binary white dwarf and neutron star systems are discussed. 14 refs., 4 figs
Multilevel Cross-Dependent Binary Longitudinal Data
Serban, Nicoleta
2013-10-16
We provide insights into new methodology for the analysis of multilevel binary data observed longitudinally, when the repeated longitudinal measurements are correlated. The proposed model is logistic functional regression conditioned on three latent processes describing the within- and between-variability and the cross-dependence of the repeated longitudinal measurements. We estimate the model components without employing mixed-effects modeling but assuming an approximation to the logistic link function. The primary objectives of this article are to highlight the challenges in the estimation of the model components, to compare two approximations to the logistic regression function, linear and exponential, and to discuss their advantages and limitations. The linear approximation is computationally efficient whereas the exponential approximation applies for rare-events functional data. Our methods are inspired by and applied to a scientific experiment on spectral backscatter from long range infrared light detection and ranging (LIDAR) data. The models are general and relevant to many new binary functional data sets, with or without dependence between repeated functional measurements.
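The trade-off between the two link approximations the article weighs can be seen numerically. The snippet below is an independent illustration of the two generic approximations to the logistic function (a Taylor expansion near zero, and the exponential limit for rare events), not the authors' estimator:

```python
import math

def logistic(x):
    """Standard logistic link: logistic(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

# Two local approximations:
#   linear:      logistic(x) ~ 1/2 + x/4   (Taylor expansion at x = 0)
#   exponential: logistic(x) ~ exp(x)      (rare-event limit, x << 0)
for x in (-6.0, -0.1, 0.1):
    lin = 0.5 + x / 4.0
    expo = math.exp(x)
    print(x, logistic(x), lin, expo)
```

Near zero the linear form is essentially exact, while far in the left tail the exponential form tracks the true link to a fraction of a percent, which is why it suits rare-event data.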
List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes
DEFF Research Database (Denmark)
Hernando, Fernando; Høholdt, Tom; Ruano, Diego
2012-01-01
A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list decoding algorithm for matrix-product codes with polynomial units, which are quasi-cyclic codes. Furthermore, it allows us to consider unique decoding for matrix-product codes with polynomial units.
Non-linear M -sequences Generation Method
Directory of Open Access Journals (Sweden)
Z. R. Garifullina
2011-06-01
Full Text Available The article deals with a new method for modeling a pseudorandom number generator based on R-blocks. The gist of the method is the replacement of a multi-digit XOR element by a stochastic adder in a parallel binary linear feedback shift register scheme.
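As background for the scheme the article modifies, a conventional Fibonacci binary LFSR that emits an M-sequence can be sketched as follows. The polynomial and seed here are illustrative choices, and the stochastic adder that replaces the XOR in the proposed method is not modeled:

```python
def lfsr(seed, taps, nbits):
    """Fibonacci LFSR over GF(2): output the low bit, shift right,
    feed the XOR of the tap bits back into the high bit."""
    state = seed
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        yield state & 1
        state = (state >> 1) | (fb << (nbits - 1))

# Feedback polynomial x^4 + x + 1 is primitive, so the 4-bit register
# cycles through all 15 nonzero states: a period-15 M-sequence.
gen = lfsr(seed=0b1001, taps=(1, 0), nbits=4)
seq = [next(gen) for _ in range(30)]
```

An M-sequence is balanced: each period of 15 bits contains eight ones and seven zeros.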
Period variation studies of six contact binaries in M4
Rukmini, Jagirdar; Shanti Priya, Devarapalli
2018-04-01
We present the first period study of six contact binaries in the closest globular cluster, M4, using data collected from June 1995-June 2009 and Oct 2012-Sept 2013. New times of minima are determined for all six variables, and eclipse timing (O-C) diagrams along with quadratic fits are presented. For all the variables, the study of the (O-C) variations reveals changes in the periods. In addition, the fundamental parameters for four of the contact binaries obtained using the Wilson-Devinney code (v2003) are presented. Planned observations of these binaries using the 3.6-m Devasthal Optical Telescope (DOT) and the 4-m International Liquid Mirror Telescope (ILMT) operated by the Aryabhatta Research Institute of Observational Sciences (ARIES; Nainital) can shed light on their evolutionary status from long-term period variation studies.
Topology of black hole binary-single interactions
Samsing, Johan; Ilan, Teva
2018-05-01
We present a study on how the outcomes of binary-single interactions involving three black holes (BHs) distribute as a function of the initial conditions; a distribution we refer to as the topology. Using an N-body code that includes BH finite sizes and gravitational wave (GW) emission in the equation of motion (EOM), we perform more than a million binary-single interactions to explore the topology of both the Newtonian limit and the limit at which general relativistic (GR) effects start to become important. From these interactions, we are able to describe exactly under which conditions BH collisions and eccentric GW capture mergers form, as well as how GR in general modifies the Newtonian topology. This study is performed on both large and micro topological scales. We further describe how the inclusion of GW emission in the EOM naturally leads to scenarios where the binary-single system undergoes two successive GW mergers.
Efficient convolutional sparse coding
Wohlberg, Brendt
2017-06-20
Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
Explicit MDS Codes with Complementary Duals
DEFF Research Database (Denmark)
Beelen, Peter; Jin, Lingfei
2018-01-01
In 1964, Massey introduced a class of codes with complementary duals which are called Linear Complementary Dual (LCD for short) codes. He showed that LCD codes have applications in communication systems, side-channel attacks (SCA) and so on. LCD codes have been extensively studied in the literature. On the other hand, MDS codes form an optimal family of classical codes which have wide applications in both theory and practice. The main purpose of this paper is to give an explicit construction of several classes of LCD MDS codes, using tools from algebraic function fields. We exemplify this construction...
Kneser-Hecke-operators in coding theory
Nebe, Gabriele
2005-01-01
The Kneser-Hecke-operator is a linear operator defined on the complex vector space spanned by the equivalence classes of a family of self-dual codes of fixed length. It maps a linear self-dual code $C$ over a finite field to the formal sum of the equivalence classes of those self-dual codes that intersect $C$ in a codimension 1 subspace. The eigenspaces of this self-adjoint linear operator may be described in terms of a coding-theory analogue of the Siegel $\\Phi $-operator.
List Decoding of Algebraic Codes
DEFF Research Database (Denmark)
Nielsen, Johan Sebastian Rosenkilde
We investigate three paradigms for polynomial-time decoding of Reed-Solomon codes beyond half the minimum distance: the Guruswami-Sudan algorithm, Power decoding and the Wu algorithm. The main results concern shaping the computational core of all three methods to a problem solvable by module minimisation, or by our new Demand-Driven algorithm, which is also based on module minimisation. The decoding paradigms are all derived and analysed in a self-contained manner, often in new ways or examined in greater depth than previously. Among a number of new results, we decode Hermitian codes using Guruswami-Sudan or Power decoding faster than previously known, and we show how to Wu list decode binary Goppa codes.
Directory of Open Access Journals (Sweden)
Anthony McCosker
2014-03-01
Full Text Available As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.
Energy Technology Data Exchange (ETDEWEB)
Timchalk, Chuck; Poet, Torka S.
2008-05-01
Physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) models have been developed and validated for the organophosphorus (OP) insecticides chlorpyrifos (CPF) and diazinon (DZN). Based on similar pharmacokinetic and mode of action properties it is anticipated that these OPs could interact at a number of important metabolic steps including: CYP450 mediated activation/detoxification, and blood/tissue cholinesterase (ChE) binding/inhibition. We developed a binary PBPK/PD model for CPF, DZN and their metabolites based on previously published models for the individual insecticides. The metabolic interactions (CYP450) between CPF and DZN were evaluated in vitro, and the results suggest that CPF is more substantially metabolized to its oxon metabolite than is DZN. These data are consistent with their observed in vivo relative potency (CPF>DZN). Each insecticide inhibited the other's in vitro metabolism in a concentration-dependent manner. The PBPK model code used to describe the metabolism of CPF and DZN was modified to reflect the type of inhibition kinetics (i.e. competitive vs. non-competitive). The binary model was then evaluated against previously published rodent dosimetry and ChE inhibition data for the mixture. The PBPK/PD model simulations of acute oral exposure to single- (15 mg/kg) vs. binary-mixture (15+15 mg/kg) doses of CPF and DZN at this lower dose resulted in no differences in the predicted pharmacokinetics of either the parent OPs or their respective metabolites, whereas a binary oral dose of CPF+DZN at 60+60 mg/kg did result in observable changes in the DZN pharmacokinetics. Cmax was more reasonably fit by modifying the absorption parameters. It is anticipated that at low, environmentally relevant binary doses, most likely to be encountered in occupational or environmental exposures, the pharmacokinetics are expected to be linear, and ChE inhibition dose-additive.
Wang, Jim Jing-Yan; Gao, Xin
2014-01-01
Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.
Entanglement-assisted quantum MDS codes from negacyclic codes
Lu, Liangdong; Li, Ruihu; Guo, Luobin; Ma, Yuena; Liu, Yang
2018-03-01
The entanglement-assisted formalism generalizes the standard stabilizer formalism, which can transform arbitrary classical linear codes into entanglement-assisted quantum error-correcting codes (EAQECCs) by using pre-shared entanglement between the sender and the receiver. In this work, we construct six classes of q-ary entanglement-assisted quantum MDS (EAQMDS) codes based on classical negacyclic MDS codes by exploiting two or more pre-shared maximally entangled states. We show that two of these six classes of q-ary EAQMDS codes have minimum distance larger than q+1. Most of these q-ary EAQMDS codes are new in the sense that their parameters are not covered by the codes available in the literature.
Optimally cloned binary coherent states
Müller, C. R.; Leuchs, G.; Marquardt, Ch.; Andersen, U. L.
2017-10-01
Binary coherent state alphabets can be represented in a two-dimensional Hilbert space. We capitalize on this formal connection between the otherwise distinct domains of qubits and continuous variable states to map binary phase-shift keyed coherent states onto the Bloch sphere and to derive their quantum-optimal clones. We analyze the Wigner function and the cumulants of the clones, and we conclude that optimal cloning of binary coherent states requires a nonlinearity above second order. We propose several practical and near-optimal cloning schemes and compare their cloning fidelity to the optimal cloner.
Toric Codes, Multiplicative Structure and Decoding
DEFF Research Database (Denmark)
Hansen, Johan Peder
2017-01-01
Long linear codes constructed from toric varieties over finite fields, their multiplicative structure and decoding. The main theme is the inherent multiplicative structure on toric codes. The multiplicative structure allows for decoding, resembling the decoding of Reed-Solomon codes and al...
Monomial codes seen as invariant subspaces
Directory of Open Access Journals (Sweden)
García-Planas María Isabel
2017-08-01
Full Text Available It is well known that cyclic codes are very useful because of their applications, since they are not computationally expensive and encoding can be easily implemented. The relationship between cyclic codes and invariant subspaces is also well known. In this paper a generalization of this relationship is presented between monomial codes over a finite field and hyperinvariant subspaces of F^n under an appropriate linear transformation. Using techniques of Linear Algebra it is possible to deduce certain properties for this particular type of code, generalizing known results on cyclic codes.
DNA Barcoding through Quaternary LDPC Codes.
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10(-2) per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10(-9) at the expense of a rate of read losses just in the order of 10(-6).
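The multiplexing-accuracy versus read-loss trade-off described above rests on minimum-distance demultiplexing. A toy sketch (with made-up 4-nt barcodes, not the paper's 24-nt LDPC designs) assigns each read to the closest barcode and declares a read loss on ambiguity; it assumes at least two barcodes in the set:

```python
def demultiplex(read, barcodes, max_mismatches=1):
    """Assign a read to the unique barcode within max_mismatches;
    return None (a read loss) on ties or when no barcode is close enough."""
    dists = sorted((sum(a != b for a, b in zip(read, bc)), i)
                   for i, bc in enumerate(barcodes))
    (d0, i0), (d1, _) = dists[0], dists[1]
    if d0 <= max_mismatches and d1 > max_mismatches:
        return i0
    return None

barcodes = ["ACGT", "TTAG", "GGCA"]   # hypothetical quaternary barcodes
print(demultiplex("ACGA", barcodes))  # 0: one mismatch from ACGT, far from the rest
print(demultiplex("AAAA", barcodes))  # None: no barcode within one mismatch
```

A real barcoding system would use codewords of an error-correcting code (BCH, Hamming, or LDPC as here) so that the pairwise distances, and hence the correctable mismatch budget, are guaranteed by design rather than checked per read.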
Karloff, Howard
1991-01-01
To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...
International Nuclear Information System (INIS)
Cullen, D.E.
1979-01-01
Program LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form (i.e., removes points not needed for linear interpolability). The main advantage of the code is that it allows subsequent codes to consider only linear-linear data. A listing of the source deck is available on request
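The thinning step can be sketched as a greedy pass (a hypothetical illustration, not Cullen's actual algorithm): a point is dropped when linear interpolation between the last retained point and the next point reproduces it within a relative tolerance:

```python
def thin_linear(xs, ys, tol=1e-3):
    """Greedy thinning of a tabulated function: drop interior points that
    linear-linear interpolation reproduces to within a relative tolerance,
    so subsequent codes need only consider linearly interpolable data."""
    keep = [0]
    for i in range(1, len(xs) - 1):
        x0, y0 = xs[keep[-1]], ys[keep[-1]]
        x1, y1 = xs[i + 1], ys[i + 1]
        # interpolate at xs[i] between the last kept point and the next point
        yhat = y0 + (y1 - y0) * (xs[i] - x0) / (x1 - x0)
        if abs(yhat - ys[i]) > tol * max(abs(ys[i]), 1e-30):
            keep.append(i)
    keep.append(len(xs) - 1)
    return [xs[i] for i in keep], [ys[i] for i in keep]

# A straight-line table collapses to its endpoints; a kink is retained.
print(thin_linear([0, 1, 2, 3, 4], [0, 2, 4, 6, 8]))  # ([0, 4], [0, 8])
```

Production thinning codes check every dropped point against the retained segment, not just the immediate neighbour, before committing to a removal.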
Menu-Driven Solver Of Linear-Programming Problems
Viterna, L. A.; Ferencz, D.
1992-01-01
Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) computer program is a full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
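ALPS's efficient technique for purely binary programs is not described here; as a baseline sketch, a purely binary linear program can be stated and solved by exhaustive enumeration (exponential in the number of variables, but it makes the problem class concrete):

```python
from itertools import product

def binary_lp(c, A, b):
    """Maximize c.x over x in {0,1}^n subject to A x <= b, by exhaustive
    enumeration -- a brute-force baseline, not ALPS's efficient method."""
    best_x, best_v = None, float("-inf")
    for x in product([0, 1], repeat=len(c)):
        if all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b)):
            v = sum(ci * xi for ci, xi in zip(c, x))
            if v > best_v:
                best_x, best_v = x, v
    return best_x, best_v

# A tiny knapsack-style instance: pick items maximizing value under weight 5.
print(binary_lp([3, 4, 5], [[2, 3, 4]], [5]))  # ((1, 1, 0), 7)
```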
Optical analogue of relativistic Dirac solitons in binary waveguide arrays
Energy Technology Data Exchange (ETDEWEB)
Tran, Truong X., E-mail: truong.tran@mpl.mpg.de [Department of Physics, Le Quy Don University, 236 Hoang Quoc Viet str., 10000 Hanoi (Viet Nam); Max Planck Institute for the Science of Light, Günther-Scharowsky str. 1, 91058 Erlangen (Germany); Longhi, Stefano [Department of Physics, Politecnico di Milano and Istituto di Fotonica e Nanotecnologie del Consiglio Nazionale delle Ricerche, Piazza L. da Vinci 32, I-20133 Milano (Italy); Biancalana, Fabio [Max Planck Institute for the Science of Light, Günther-Scharowsky str. 1, 91058 Erlangen (Germany); School of Engineering and Physical Sciences, Heriot-Watt University, EH14 4AS Edinburgh (United Kingdom)
2014-01-15
We study analytically and numerically an optical analogue of Dirac solitons in binary waveguide arrays in the presence of Kerr nonlinearity. Pseudo-relativistic soliton solutions of the coupled-mode equations describing dynamics in the array are analytically derived. We demonstrate that with the found soliton solutions, the coupled mode equations can be converted into the nonlinear relativistic 1D Dirac equation. This paves the way for using binary waveguide arrays as a classical simulator of quantum nonlinear effects arising from the Dirac equation, something that is thought to be impossible to achieve in conventional (i.e. linear) quantum field theory. -- Highlights: •An optical analogue of Dirac solitons in nonlinear binary waveguide arrays is suggested. •Analytical solutions to pseudo-relativistic solitons are presented. •A correspondence of optical coupled-mode equations with the nonlinear relativistic Dirac equation is established.
Evidence of a stable binary CdCa quasicrystalline phase
DEFF Research Database (Denmark)
Jiang, Jianzhong; Jensen, C.H.; Rasmussen, A.R.
2001-01-01
Quasicrystals with a primitive icosahedral structure and a quasilattice constant of 5.1215 Angstrom have been synthesized in a binary Cd-Ca system. The thermal stability of the quasicrystal has been investigated by in situ high-temperature x-ray powder diffraction using synchrotron radiation. It is demonstrated that the binary CdCa quasicrystal is thermodynamically stable up to its melting temperature. The linear thermal expansion coefficient of the quasicrystal is 2.765x10(-5) K-1. (C) 2001 American Institute of Physics.
Feedback equivalence of convolutional codes over finite rings
Directory of Open Access Journals (Sweden)
DeCastro-García Noemí
2017-12-01
The approach to convolutional codes from the linear systems point of view provides effective tools for constructing convolutional codes with adequate properties for many applications. In this work, we generalize feedback equivalence between families of convolutional codes and linear systems over certain rings, and we show that every locally Brunovsky linear system may be considered as a representation of a code under feedback convolutional equivalence.
Evans, Nancy R.; Bond, H. E.; Schaefer, G.; Mason, B. D.; Karovska, M.; Tingle, E.
2013-01-01
Cepheids (5 Msun stars) provide an excellent sample for determining the binary properties of fairly massive stars. International Ultraviolet Explorer (IUE) observations of Cepheids brighter than 8th magnitude resulted in a list of ALL companions more massive than 2.0 Msun uniformly sensitive to all separations. Hubble Space Telescope Wide Field Camera 3 (WFC3) has resolved three of these binaries (Eta Aql, S Nor, and V659 Cen). Combining these separations with orbital data in the literature, we derive an unbiased distribution of binary separations for a sample of 18 Cepheids, and also a distribution of mass ratios. The distribution of orbital periods shows that the 5 Msun binaries prefer shorter periods than 1 Msun stars, reflecting differences in star formation processes.
Mesoscopic model for binary fluids
Echeverria, C.; Tucci, K.; Alvarez-Llamoza, O.; Orozco-Guillén, E. E.; Morales, M.; Cosenza, M. G.
2017-10-01
We propose a model for studying binary fluids based on the mesoscopic molecular simulation technique known as multiparticle collision, where the space and state variables are continuous, and time is discrete. We include a repulsion rule to simulate segregation processes that does not require calculation of the interaction forces between particles, so binary fluids can be described on a mesoscopic scale. The model is conceptually simple and computationally efficient; it maintains Galilean invariance and conserves the mass and energy in the system at the micro- and macro-scale, whereas momentum is conserved globally. For a wide range of temperatures and densities, the model yields results in good agreement with the known properties of binary fluids, such as the density profile, interface width, phase separation, and phase growth. We also apply the model to the study of binary fluids in crowded environments with consistent results.
Portmanteau constructions, phrase structure and linearization
Directory of Open Access Journals (Sweden)
Brian Hok-Shing Chan
2015-12-01
In bilingual code-switching which involves language-pairs with contrasting head-complement orders (i.e. head-initial vs. head-final), a head may be lexicalized from both languages with its complement sandwiched in the middle. These so-called portmanteau sentences (Nishimura, 1985, 1986; Sankoff, Poplack, and Vanniarajan, 1990, etc.) have been attested for decades, but they had never received a systematic, formal analysis in terms of current syntactic theory before a few recent attempts (Hicks, 2010, 2012). Notwithstanding this lack of attention, these structures are in fact highly relevant to theories of linearization and phrase structure. More specifically, they challenge binary-branching (Kayne, 1994, 2004, 2005) as well as the Antisymmetry hypothesis (ibid.). Not explained by current grammatical models of code-switching, including the Equivalence Constraint (Poplack, 1980), the Matrix Language Frame Model (Myers-Scotton, 1993, 2002, etc.), and the Bilingual Speech Model (Muysken, 2000, 2013), the portmanteau construction indeed looks uncommon or abnormal, defying any systematic account. However, the recurrence of these structures in various datasets and constraints on them do call for an explanation. This paper suggests an account which lies with syntax and also with the psycholinguistics of bilingualism. Assuming that linearization is a process at the Sensori-Motor (SM) interface (Chomsky, 2005, 2013), this paper sees that word order is not fixed in a syntactic tree but is set in the production process, and much information about word order rests in the processor, for instance, outputting a head before its complement (i.e. head-initial word order) or the reverse (i.e. head-final word order). As for the portmanteau construction, it is the output of bilingual speakers co-activating two sets of head-complement orders which summon the phonetic forms of the same word in both languages. Under this proposal, the underlying structure of a portmanteau
Some properties of spectral binary stars
International Nuclear Information System (INIS)
Krajcheva, Z.T.; Popova, E.I.; Tutukov, A.V.; Yungel'son, L.R.; AN SSSR, Moscow. Astronomicheskij Sovet)
1978-01-01
Statistical investigations of spectroscopic binary stars are carried out. Binary systems consisting of main-sequence stars are considered. For 826 binary stars the masses of components, ratios of component masses, semiaxes of orbits, and orbital angular momenta are calculated. The distributions of these parameters and their correlations are analyzed. The dependences of the statistical properties of spectroscopic binary stars on their origin and evolution are discussed
International Nuclear Information System (INIS)
Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.
2017-09-01
This paper presents the Aztheca code, which is formed by mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer, when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)
Prsa, A.; Zwitter, T.
2004-01-01
Eclipsing binaries are extremely attractive objects because absolute physical parameters (masses, luminosities, radii) of both components may be determined from observations. Since most efforts to extract these parameters were based on dedicated observing programs, existing modeling code is based on interactivity. Gaia will make a revolutionary advance in the sheer number of observed eclipsing binaries, and new methods for automatic handling must be introduced and thoroughly tested. This paper foc...
Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources
DEFF Research Database (Denmark)
Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.
2012-01-01
This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between the source symbols and the side information.
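The rate-adaptive construction is the paper's contribution; the underlying syndrome idea can be sketched with a fixed [7,4] Hamming code: the encoder sends only the 3-bit syndrome of X, and the decoder corrects the side information Y toward it, assuming X and Y differ in at most one position:

```python
# H: parity-check matrix of the [7,4] Hamming code; column j is the binary
# representation of j+1, so a single-bit difference's syndrome names its position.
H = [[(j + 1) >> i & 1 for j in range(7)] for i in range(3)]

def syndrome(v):
    return tuple(sum(h * x for h, x in zip(row, v)) % 2 for row in H)

def sw_decode(s_x, y):
    """Recover x from its syndrome s_x and side information y, assuming
    x and y differ in at most one bit (Slepian-Wolf via syndrome coding)."""
    diff = tuple(a ^ b for a, b in zip(s_x, syndrome(y)))
    if diff == (0, 0, 0):
        return list(y)
    pos = sum(b << i for i, b in enumerate(diff)) - 1
    x = list(y)
    x[pos] ^= 1
    return x

x = [1, 0, 1, 1, 0, 0, 1]
y = [1, 0, 1, 0, 0, 0, 1]            # side information: one bit flipped
assert sw_decode(syndrome(x), y) == x
```

Only 3 syndrome bits are transmitted instead of 7 source bits; a rate-adaptive scheme would grow or shrink the syndrome via feedback as the X-Y correlation varies.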
DEFF Research Database (Denmark)
Soon, Winnie; Cox, Geoff
2018-01-01
A computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other, a mix of computer programming syntax and human language. In this sense queer code can be understood as both an object and subject of study that intervenes in the world's 'becoming' and in how material bodies are produced via human and nonhuman practices. Through mixing natural and computer language, this article presents a script in six parts from a performative lecture for two persons...
International Nuclear Information System (INIS)
Rattan, D.S.
1993-11-01
NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault, their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's Manual, describing simulation procedures, input data preparation, output, and example test cases
Binary Systems and the Initial Mass Function
Malkov, O. Yu.
2017-07-01
In the present paper we discuss advantages and disadvantages of binary stars, which are important for determining star formation history. We show that to draw definite conclusions about the shape of the initial mass function, it is necessary to study the binary population well enough to correct the luminosity function for unresolved binaries; to construct the mass-luminosity relation based on wide-binary data; and to separate the observational mass functions of primaries, of secondaries, and of unresolved binaries.
Variable-energy drift-tube linear accelerator
Swenson, Donald A.; Boyd, Jr., Thomas J.; Potter, James M.; Stovall, James E.
1984-01-01
A linear accelerator system includes a plurality of post-coupled drift-tubes wherein each post coupler is bistably positionable to either of two positions which result in different field distributions. With binary control over a plurality of post couplers, a significant cumulative effect in the resulting field distribution is achieved, yielding a variable-energy drift-tube linear accelerator.
Be discs in coplanar circular binaries: Phase-locked variations of emission lines
Panoglou, Despina; Faes, Daniel M.; Carciofi, Alex C.; Okazaki, Atsuo T.; Baade, Dietrich; Rivinius, Thomas; Borges Fernandes, Marcelo
2018-01-01
In this paper, we present the first results of radiative transfer calculations on decretion discs of binary Be stars. A smoothed particle hydrodynamics code computes the structure of Be discs in coplanar circular binary systems for a range of orbital and disc parameters. The resulting disc configuration consists of two spiral arms, and this can be given as input into a Monte Carlo code, which calculates the radiative transfer along the line of sight for various observational coordinates. Making use of the property of steady disc structure in coplanar circular binaries, observables are computed as functions of the orbital phase. Some orbital-phase series of line profiles are given for selected parameter sets under various viewing angles, to allow comparison with observations. Flat-topped profiles with and without superimposed multiple structures are reproduced, showing, for example, that triple-peaked profiles do not have to be necessarily associated with warped discs and misaligned binaries. It is demonstrated that binary tidal effects give rise to phase-locked variability of the violet-to-red (V/R) ratio of hydrogen emission lines. The V/R ratio exhibits two maxima per cycle; in certain cases those maxima are equal, leading to a clear new V/R cycle every half orbital period. This study opens a way to identifying binaries and to constraining the parameters of binary systems that exhibit phase-locked variations induced by tidal interaction with a companion star.
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
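The well-known direction (Chebyshev approximation reduces to a linear program) can be made concrete in the simplest case, approximating data by a constant: minimize t subject to x - b_i <= t and b_i - x <= t for all i. The LP optimum is the midrange of the data, which the toy check below verifies by brute force (this illustrates the classical direction, not the paper's converse reduction):

```python
# Chebyshev approximation of data by a constant x: minimize max_i |x - b_i|.
# As an LP in variables (x, t): minimize t s.t. x - b_i <= t, b_i - x <= t.
b = [1.0, 4.0, 2.5, 0.5]
x_star = (max(b) + min(b)) / 2   # LP-optimal x: the midrange
t_star = (max(b) - min(b)) / 2   # LP-optimal objective: half the range

# Verify against a brute-force grid search over candidate x values.
grid = [i / 100 for i in range(-200, 600)]
t_grid = min(max(abs(x - bi) for bi in b) for x in grid)
assert abs(t_grid - t_star) < 1e-2
print(x_star, t_star)  # 2.25 1.75
```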
Constructing snake-in-the-box codes and families of such codes covering the hypercube
Haryanto, L.
2007-01-01
A snake-in-the-box code (or snake) is a list of binary words of length n such that each word differs from its successor in the list in precisely one bit position. Moreover, any two words in the list differ in at least two positions, unless they are neighbours in the list. The list is considered to
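For small n a longest snake can be found by brute force. The sketch below searches the 4-cube by depth-first search, enforcing the induced-path condition stated above: each new vertex may be adjacent (differ in one bit) only to the current tail of the snake. By the hypercube's vertex-transitivity the search may start at vertex 0 without loss of generality:

```python
def longest_snake(n):
    """Depth-first search for a longest snake (induced path) in the
    n-dimensional hypercube, starting WLOG from vertex 0."""
    best = []

    def neighbors(v):
        return [v ^ (1 << i) for i in range(n)]

    def extend(path, in_path):
        nonlocal best
        if len(path) > len(best):
            best = path[:]
        tail = path[-1]
        for w in neighbors(tail):
            if w in in_path:
                continue
            # snake condition: w may be adjacent only to the tail
            if any(u in in_path and u != tail for u in neighbors(w)):
                continue
            path.append(w); in_path.add(w)
            extend(path, in_path)
            path.pop(); in_path.remove(w)

    extend([0], {0})
    return best

snake = longest_snake(4)
print(len(snake) - 1)  # 7: the known maximum snake length (in edges) for n = 4
```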
Language Recognition via Sparse Coding
2016-09-08
explanation is that sparse coding can achieve a near-optimal approximation of a much more complicated nonlinear relationship through local and piecewise linear... training examples, where x(i) ∈ R^N is the ith example in the batch. Optionally, X can be normalized and whitened before sparse coding for better results... normalized input vectors are then ZCA-whitened [20]. Empirically, we choose ZCA-whitening over PCA-whitening, and there is no dimensionality reduction
DEFF Research Database (Denmark)
Ejsing-Duun, Stine; Hansbøl, Mikala
This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL), and the volunteer association Coding Pirates. The report was written by Mikala Hansbøl, docent in digital learning resources and research coordinator of the research and development environment Digitalisering i Skolen (DiS), Institut for Skole og Læring, Professionshøjskolen Metropol; and Stine Ejsing-Duun, associate professor in learning technology, interaction design, design thinking and design pedagogy, Forskningslab: It og Læringsdesign (ILD-LAB), Institut for kommunikation og psykologi, Aalborg Universitet, Copenhagen. We followed the Coding Class project and carried out its evaluation and documentation from November 2016 to May 2017...
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
2007-01-01
This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.
International Nuclear Information System (INIS)
Lindemuth, I.R.
1979-01-01
This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code. ANIMAL's physical model also appears. Formulated are temporal and spatial finite-difference equations in a manner that facilitates implementation of the algorithm. Outlined are the functions of the algorithm's FORTRAN subroutines and variables
Indian Academy of Sciences (India)
Network Coding. K V Rashmi, Nihar B Shah, P Vijay Kumar. Resonance – Journal of Science Education, Volume 15, Issue 7, July 2010, pp 604-621. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621
International Nuclear Information System (INIS)
Cramer, S.N.
1984-01-01
The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendent of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids
Indian Academy of Sciences (India)
Expander Codes – The Sipser–Spielman Construction. Priti Shankar. Resonance – Journal of Science Education, Volume 10, Issue 1. Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.
Directory of Open Access Journals (Sweden)
Amin Asadi
2017-10-01
Purpose: To study the benefits of the Directional Bremsstrahlung Splitting (DBS) variance reduction technique in the BEAMnrc Monte Carlo (MC) code for the Oncor® linac at 6MV and 18MV energies. Materials and Method: A MC model of the Oncor® linac was built using the BEAMnrc MC code and verified against measured data for 6MV and 18MV energies at various field sizes. The Oncor® machine was then modeled running the DBS technique, and the efficiency of the total fluence and spatial fluence for electrons and photons, as well as the efficiency of variance reduction of MC dose calculations for the PDD on the central beam axis and the lateral dose profile across the nominal field, was measured and compared. Result: With the DBS technique applied, the total fluence of electrons and photons increased in turn by 626.8 (6MV) and 983.4 (6MV), and 285.6 (18MV) and 737.8 (18MV); the spatial fluence of electrons and photons improved in turn by 308.6±1.35% (6MV) and 480.38±0.43% (6MV), and 153±0.9% (18MV) and 462.6±0.27% (18MV). Moreover, by running the DBS technique, the efficiency of variance reduction for PDD MC dose calculations before and after the dose maximum point was enhanced by 187.8±0.68% (6MV) and 184.6±0.65% (6MV), and 156±0.43% (18MV) and 153±0.37% (18MV), respectively, and the efficiency of MC calculations for the lateral dose profile on the central beam axis and across the treatment field rose in turn by 197±0.66% (6MV) and 214.6±0.73% (6MV), and 175±0.36% (18MV) and 181.4±0.45% (18MV). Conclusion: Applying the DBS variance reduction technique for modeling the Oncor® linac with the BEAMnrc MC code markedly improved the fluence of electrons and photons, and therefore enhanced the efficiency of variance reduction for MC calculations. As a result, running DBS in other kinds of MC simulation codes might be beneficial in reducing the uncertainty of MC calculations.
Shock waves in binary oxides memristors
Tesler, Federico; Tang, Shao; Dobrosavljević, Vladimir; Rozenberg, Marcelo
2017-09-01
Progress of silicon-based technology is nearing its physical limit, as the minimum feature size of components is reaching a mere 5 nm. The resistive switching behavior of transition metal oxides and the associated memristor device is emerging as a competitive technology for next generation electronics. Significant progress has already been made in the past decade and devices are beginning to hit the market; however, it has been mainly the result of empirical trial and error. Hence, gaining theoretical insight is of the essence. In the present work we report a new connection between resistive switching and shock wave formation, a classic topic of non-linear dynamics. We argue that the profile of oxygen ions that migrate during the commutation in insulating binary oxides may form a shock wave, which propagates through a poorly conductive region of the device. We validate the scenario by means of model simulations.
Detecting unresolved binary stars in Euclid VIS images
Kuntzer, T.; Courbin, F.
2017-10-01
Measuring a weak gravitational lensing signal to the level required by the next generation of space-based surveys demands exquisite reconstruction of the point-spread function (PSF). However, unresolved binary stars can significantly distort the PSF shape. In an effort to mitigate this bias, we aim at detecting unresolved binaries in realistic Euclid stellar populations. We tested methods in numerical experiments where (I) the PSF shape is known to Euclid requirements across the field of view; and (II) the PSF shape is unknown. We drew simulated catalogues of PSF shapes for this proof-of-concept paper. Following the Euclid survey plan, the objects were observed four times. We propose three methods to detect unresolved binary stars. The detection is based on the systematic and correlated biases between exposures of the same object. One method is a simple correlation analysis, while the two others use supervised machine-learning algorithms (random forest and artificial neural network). In both experiments, we demonstrate the ability of our methods to detect unresolved binary stars in simulated catalogues. The performance depends on the level of prior knowledge of the PSF shape and the shape measurement errors. Good detection performances are observed in both experiments. Full complexity, in terms of the images and the survey design, is not included, but key aspects of a more mature pipeline are discussed. Finding unresolved binaries in objects used for PSF reconstruction increases the quality of the PSF determination at arbitrary positions. We show, using different approaches, that we are able to detect at least binary stars that are most damaging for the PSF reconstruction process. The code corresponding to the algorithms used in this work and all scripts to reproduce the results are publicly available from a GitHub repository accessible via http://lastro.epfl.ch/software
Hidden slow pulsars in binaries
Tavani, Marco; Brookshaw, Leigh
1993-01-01
The recent discovery of the binary containing the slow pulsar PSR 1718-19 orbiting around a low-mass companion star sheds new light on the characteristics of binary pulsars. The properties of the radio eclipses of PSR 1718-19 are the most striking observational characteristics of this system. The surface of the companion star produces a mass outflow which leaves only a small 'window' in orbital phase for the detection of PSR 1718-19 around 400 MHz. At this observing frequency, PSR 1718-19 is clearly observable only for about 1 hr out of the total 6.2 hr orbital period. The aim of this Letter is twofold: (1) to model the hydrodynamical behavior of the eclipsing material from the companion star of PSR 1718-19 and (2) to argue that a population of binary slow pulsars might have escaped detection in pulsar surveys carried out at 400 MHz. The possible existence of a population of partially or totally hidden slow pulsars in binaries will have a strong impact on current theories of binary evolution of neutron stars.
Optimized Min-Sum Decoding Algorithm for Low Density Parity Check Codes
Mohammad Rakibul Islam; Dewan Siam Shafiullah; Muhammad Mostafa Amir Faisal; Imran Rahman
2011-01-01
Low Density Parity Check (LDPC) codes approach Shannon-limit performance for the binary field and long code lengths. However, the performance of binary LDPC codes degrades when the codeword length is small. An optimized min-sum algorithm for LDPC codes is proposed in this paper. In this algorithm, unlike other decoding methods, an optimization factor has been introduced in both the check nodes and bit nodes of the min-sum algorithm. The optimization factor is obtained before the decoding program, and the sam...
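The abstract does not give the exact form of the paper's optimization factor, but the general idea can be sketched with the widely used normalized min-sum variant, where a scaling factor is applied to the check-node output. The value alpha = 0.8 below is an assumed illustrative choice, not the paper's optimized factor:

```python
import numpy as np

def check_node_update(messages, alpha=0.8):
    """Normalized min-sum check-node update: each outgoing message is the
    product of the other edges' signs times the minimum of the other edges'
    magnitudes, scaled by alpha (nonzero input LLRs assumed)."""
    m = np.asarray(messages, dtype=float)
    signs = np.sign(m)
    sign_prod = np.prod(signs)
    mags = np.abs(m)
    idx_min = int(np.argmin(mags))
    min1 = mags[idx_min]                      # smallest magnitude
    min2 = np.min(np.delete(mags, idx_min))   # second-smallest magnitude
    out = np.empty_like(m)
    for i in range(len(m)):
        # exclude edge i itself: use min2 only where min1 came from edge i
        other_min = min2 if i == idx_min else min1
        out[i] = alpha * sign_prod * signs[i] * other_min
    return out

print(check_node_update([2.0, -1.5, 0.5]))
```

Choosing alpha < 1 compensates for the min-sum approximation's systematic overestimate of the exact sum-product check-node magnitude.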
Fitting Markovian binary trees using global and individual demographic data
Hautphenne, Sophie; Massaro, Melanie; Turner, Katharine
2017-01-01
We consider a class of branching processes called Markovian binary trees, in which an individual's lifetime and reproduction epochs are modeled using a transient Markovian arrival process (TMAP). We estimate the parameters of the TMAP based on population data containing information on age-specific fertility and mortality rates. Depending on the degree of detail of the available data, a weighted non-linear regression method or a maximum likelihood method is applied. We discuss the optimal choi...
Accuracy of inference on the physics of binary evolution from gravitational-wave observations
Barrett, Jim W.; Gaebel, Sebastian M.; Neijssel, Coenraad J.; Vigna-Gómez, Alejandro; Stevenson, Simon; Berry, Christopher P. L.; Farr, Will M.; Mandel, Ilya
2018-04-01
The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution, and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion, and mass-loss rates during the luminous blue variable and Wolf-Rayet stellar-evolutionary phases. We find that ~1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.
Comparison of many bodied and binary collision cascade models up to 1 keV
International Nuclear Information System (INIS)
Schwartz, D.M.; Schiffgens, J.D.; Doran, D.G.; Odette, G.R.; Ariyasu, R.G.
1976-01-01
A quasi-dynamical code ADDES has been developed to model displacement cascades in copper for primary knock-on atom energies up to several keV. ADDES is like a dynamical code in that it employs a many body treatment, yet similar to a binary collision code in that it incorporates the basic assumption that energy transfers below several eV can be ignored in describing cascade evolution. This paper is primarily concerned with (1) a continuing effort to validate the assumptions and specific parameters in the code by the comparison of ADDES results with experiment and with results from a dynamical code, and (2) comparisons of ADDES results with those from a binary collision code. The directional dependence of the displacement threshold is in reasonable agreement with the measurements of Jung et al. The behavior of focused replacement sequences is very similar to that obtained with the dynamical codes GRAPE and COMENT. Qualitative agreement was found between ADDES and COMENT for a higher energy (500 eV) defocused event while differences, still under study, are apparent in a 250 eV high index event. Comparisons of ADDES with the binary collision code MARLOWE show surprisingly good agreement in the 250 to 1000 eV range for both number and separation of Frenkel pairs. A preliminary observation, perhaps significant to displacement calculations utilizing the concept of a mean displacement energy, is the dissipation of 300 to 400 eV in a replacement sequence producing a single interstitial.
Advanced hardware design for error correcting codes
Coussy, Philippe
2015-01-01
This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs for advanced error-correcting techniques.
International Nuclear Information System (INIS)
Altomare, S.; Minton, G.
1975-02-01
PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)
International Nuclear Information System (INIS)
Gara, P.; Martin, E.
1983-01-01
The CANAL code presented here optimizes a realistic iron-free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts, and move in a restricted transversal area; terminal connectors may be added, and images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils. (fr)
Khina, Anatoly
2016-08-15
We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality, by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control system setting.
The Young Visual Binary Survey
Prato, Lisa; Avilez, Ian; Lindstrom, Kyle; Graham, Sean; Sullivan, Kendall; Biddle, Lauren; Skiff, Brian; Nofi, Larissa; Schaefer, Gail; Simon, Michal
2018-01-01
Differences in the stellar and circumstellar properties of the components of young binaries provide key information about star and disk formation and evolution processes. Because objects with separations of a few to a few hundred astronomical units share a common environment and composition, multiple systems allow us to control for some of the factors which play into star formation. We are completing analysis of a rich sample of about 100 pre-main sequence binaries and higher order multiples, primarily located in the Taurus and Ophiuchus star forming regions. This poster will highlight some of our recent, exciting results. All reduced spectra and the results of our analysis will be publicly available to the community at http://jumar.lowell.edu/BinaryStars/. Support for this research was provided in part by NSF award AST-1313399 and by NASA Keck KPDA funding.
Directory of Open Access Journals (Sweden)
Tanwiwat Jaikuna
2017-02-01
Purpose: To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in treatment planning was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the differentiation between the dose volume histogram from CERR and the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biological effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Different physical doses were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum when determined by D2cc, and less than 1% in Pinnacle. The difference in the EQD2 between the software calculation and the manual calculation was not significantly different from 0.00%, with p-values 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively. Conclusions: The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
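The kind of manual EQD2 verification described above can be illustrated with the standard linear-quadratic relation. Note this sketch omits the LQL model's modification above a transition dose, and the fractionation numbers below are illustrative, not taken from the paper:

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2 Gy fractions via the standard LQ relation:
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta),
    where D is the total dose (Gy) and d the dose per fraction (Gy)."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# Example: 28 Gy delivered in 4 fractions (7 Gy/fraction), alpha/beta = 10 Gy
print(round(eqd2(28.0, 7.0, 10.0), 2))  # 39.67
```

The same relation with an organ-at-risk alpha/beta (often taken as 3 Gy) gives a larger EQD2 for the same hypofractionated schedule, which is why the bladder and rectum are evaluated separately.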
Overloaded CDMA Systems with Displaced Binary Signatures
Directory of Open Access Journals (Sweden)
Vanhaverbeke Frederik
2004-01-01
We extend three types of overloaded CDMA systems by displacing in time the binary signature sequences of these systems: (1) random spreading (PN), (2) multiple-OCDMA (MO), and (3) PN/OCDMA (PN/O). For each of these systems, we determine the time shifts that minimize the overall multiuser interference power. The achievable channel load with coded and uncoded data is evaluated for the conventional (without displacement) and improved (with displacement) systems, as well as for systems based on quasi-Welch-bound-equality (QWBE) sequences, by means of several types of turbo detectors. For each system, the best performing turbo detector is selected in order to compare the performance of these systems. It is found that the improved systems substantially outperform their original counterparts. With uncoded data, (improved) PN/O yields the highest acceptable channel load. For coded data, MO allows for the highest acceptable channel load over all considered systems, both for the conventional and the improved systems. In the latter case, channel loads of about 280% are achievable with a low degradation as compared to a single-user system.
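The objective being minimized can be sketched in a toy form, under the simplifying assumptions of chip-synchronous cyclic shifts, ±1 signatures, and a greedy one-user-at-a-time search (the paper's actual displacement optimization is more general):

```python
import numpy as np

def interference_power(signatures, shifts):
    """Total multiuser interference power: sum over user pairs of the
    squared cross-correlation between their cyclically shifted signatures."""
    S = np.array([np.roll(s, t) for s, t in zip(signatures, shifts)], dtype=float)
    G = S @ S.T                    # Gram matrix of the shifted signatures
    off = G - np.diag(np.diag(G))  # zero the auto-correlation terms
    return np.sum(off ** 2)

rng = np.random.default_rng(0)
sigs = rng.choice([-1.0, 1.0], size=(6, 16))  # 6 users, length-16 signatures
base = interference_power(sigs, [0] * 6)      # undisplaced system

# Greedy search: optimize each user's shift in turn (an assumed toy strategy)
shifts = [0] * 6
for k in range(6):
    shifts[k] = min(range(16),
                    key=lambda t: interference_power(sigs, shifts[:k] + [t] + shifts[k + 1:]))
print(interference_power(sigs, shifts) <= base)  # True
```

Because the zero shift is always among the candidates for each user, the greedy pass can never do worse than the undisplaced configuration, mirroring the paper's finding that displacement only helps.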
Implementation of LT codes based on chaos
International Nuclear Information System (INIS)
Zhou Qian; Li Liang; Chen Zengqiang; Zhao Jiaxiang
2008-01-01
Fountain codes provide an efficient way to transfer information over erasure channels like the Internet. LT codes are the first codes fully realizing the digital fountain concept. They are asymptotically optimal rateless erasure codes with highly efficient encoding and decoding algorithms. In theory, for each encoding symbol of LT codes, its degree is randomly chosen according to a predetermined degree distribution, and the neighbours used to generate that encoding symbol are chosen uniformly at random. Practical implementations of LT codes usually realize the randomness through a pseudo-random number generator such as the linear congruential method. This paper applies the pseudo-randomness of chaotic sequences in the implementation of LT codes. Two Kent chaotic maps are used to determine the degree and neighbour(s) of each encoding symbol. It is shown that the implemented LT codes based on chaos perform better than the LT codes implemented with a traditional pseudo-random number generator. (general)
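A minimal sketch of the idea, assuming the Kent (skew tent) map as the chaotic source; the degree choice below is a toy stand-in for the robust soliton distribution used in real LT codes, and the single-map structure is a simplification of the paper's two-map scheme:

```python
def kent_map(x, k=0.7):
    """One step of the Kent (skew tent) chaotic map on (0, 1)."""
    return x / k if x < k else (1.0 - x) / (1.0 - k)

def lt_encode_symbol(source, x):
    """Produce one LT encoding symbol: successive iterates of the chaotic
    map choose the degree and the neighbour set, and the symbol value is
    the XOR of the chosen source symbols."""
    n = len(source)
    x = kent_map(x)
    degree = 1 + int(x * n) % n     # toy degree choice, 1..n
    neighbours = set()
    while len(neighbours) < degree:
        x = kent_map(x)
        neighbours.add(int(x * n) % n)
    value = 0
    for i in neighbours:
        value ^= source[i]
    return value, sorted(neighbours), x

source = [3, 1, 4, 1, 5, 9, 2, 6]
value, nbrs, state = lt_encode_symbol(source, 0.3141)
```

Because the map is deterministic, encoder and decoder seeded with the same initial value regenerate identical degree/neighbour choices, which is exactly the property a pseudo-random number generator provides in conventional LT implementations.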
An implicit Smooth Particle Hydrodynamic code
Energy Technology Data Exchange (ETDEWEB)
Knapp, Charles E. [Univ. of New Mexico, Albuquerque, NM (United States)
2000-05-01
An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code, the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless, and use particles to model the fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix. It uses sparse techniques to save on memory storage and to reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. Then, the results of a number of test cases are discussed, which include a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. For the single jet of gas it has been demonstrated that the implicit code can do a problem in much shorter time than the explicit code. The problem was, however, very unphysical, but it does demonstrate the potential of the implicit code. It is a first step toward a useful implicit SPH code.
Best linear decoding of random mask images
International Nuclear Information System (INIS)
Woods, J.W.; Ekstrom, M.P.; Palmieri, T.M.; Twogood, R.E.
1975-01-01
In 1968 Dicke proposed coded imaging of x and γ rays via random pinholes. Since then, many authors have agreed with him that this technique can offer significant image improvement. A best linear decoding of the coded image is presented, and its superiority over the conventional matched filter decoding is shown. Experimental results in the visible light region are presented. (U.S.)
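A one-dimensional NumPy sketch can contrast matched-filter decoding with a best linear (least-squares) decoding of a coded image; the mask, point-like object, and noiseless circular imaging model below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
mask = rng.integers(0, 2, n).astype(float)                # random binary pinhole mask
obj = np.zeros(n)
obj[[10, 30, 31]] = [1.0, 2.0, 0.5]                       # point-like object

# Coded image: circular convolution of the object with the mask
A = np.column_stack([np.roll(mask, s) for s in range(n)])
coded = A @ obj

# Matched-filter decoding: correlate the coded image with the mask
matched = A.T @ coded / mask.sum()
# Best linear decoding: least-squares inversion of the imaging operator
best, *_ = np.linalg.lstsq(A, coded, rcond=None)

err_matched = np.linalg.norm(matched - obj)
err_best = np.linalg.norm(best - obj)
print(err_best < err_matched)
```

The matched filter leaves a pedestal from the mask's nonzero cross-correlation sidelobes, whereas the least-squares decoder inverts the imaging operator directly, which is the sense in which a best linear decoding outperforms matched filtering.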
International linear collider simulations using BDSIM
Indian Academy of Sciences (India)
BDSIM is a Geant4 [1] extension toolkit for the simulation of particle transport in accelerator beamlines. It is a code that combines accelerator-style particle tracking with traditional Geant-style tracking based on Runga–Kutta techniques. A more detailed description of the code can be found in [2]. In an e+e− linear collider ...
The effectiveness of correcting codes in reception as a whole in additive normal white noise
Shtarkov, Y. M.
1974-01-01
Some possible criteria for estimating the effectiveness of correcting codes are presented, and the energy effectiveness of correcting codes is studied for symbol-by-symbol reception. Expressions for the energy effectiveness of binary correcting codes for reception as a whole are derived. Asymptotic energy effectiveness and finite signal-to-noise ratio cases are considered.
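One standard figure of merit for the energy effectiveness of a code under whole-codeword (soft-decision) reception is the asymptotic coding gain, G = 10·log10(R·d_min) dB; whether this matches Shtarkov's exact criterion is not stated in the abstract, so the sketch below is illustrative:

```python
import math

def asymptotic_coding_gain_db(n, k, d_min):
    """Asymptotic coding gain (dB) of a binary (n, k, d_min) code under
    soft-decision maximum-likelihood reception: 10*log10(R * d_min),
    where R = k/n is the code rate."""
    rate = k / n
    return 10.0 * math.log10(rate * d_min)

print(round(asymptotic_coding_gain_db(7, 4, 3), 2))    # Hamming(7,4): 2.34 dB
print(round(asymptotic_coding_gain_db(23, 12, 7), 2))  # Golay(23,12): 5.63 dB
```

At finite signal-to-noise ratio the realized gain falls short of this asymptotic value, which is why the abstract treats the asymptotic and finite-SNR cases separately.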
Evolution in close binary systems
International Nuclear Information System (INIS)
Yungel'son, L.R.; Masevich, A.G.
1983-01-01
Duplicity is the property most typical of stars. If one investigates how prevalent double stars are, making due allowance for selection effects, one finds that as many as 90 percent of all stars are paired. Contrary to tradition, it is single stars that are out of the ordinary, and as will be shown presently, even some of these may have been formed by coalescence of the members of binary systems. This review deals with the evolution of close binaries, defined as double-star systems whose evolution entails exchange of material between the two components.
PHYSICS OF ECLIPSING BINARIES. II. TOWARD THE INCREASED MODEL FIDELITY
Energy Technology Data Exchange (ETDEWEB)
Prša, A.; Conroy, K. E.; Horvat, M.; Kochoska, A.; Hambleton, K. M. [Villanova University, Dept. of Astrophysics and Planetary Sciences, 800 E Lancaster Avenue, Villanova PA 19085 (United States); Pablo, H. [Université de Montréal, Pavillon Roger-Gaudry, 2900, boul. Édouard-Montpetit Montréal QC H3T 1J4 (Canada); Bloemen, S. [Radboud University Nijmegen, Department of Astrophysics, IMAPP, P.O. Box 9010, 6500 GL, Nijmegen (Netherlands); Giammarco, J. [Eastern University, Dept. of Astronomy and Physics, 1300 Eagle Road, St. Davids, PA 19087 (United States); Degroote, P. [KU Leuven, Instituut voor Sterrenkunde, Celestijnenlaan 200D, B-3001 Heverlee (Belgium)
2016-12-01
The precision of photometric and spectroscopic observations has been systematically improved in the last decade, mostly thanks to space-borne photometric missions and ground-based spectrographs dedicated to finding exoplanets. The field of eclipsing binary stars strongly benefited from this development. Eclipsing binaries serve as critical tools for determining fundamental stellar properties (masses, radii, temperatures, and luminosities), yet the models are not capable of reproducing observed data well, either because of the missing physics or because of insufficient precision. This led to a predicament where radiative and dynamical effects, hitherto buried in noise, started showing up routinely in the data, but were not accounted for in the models. PHOEBE (PHysics Of Eclipsing BinariEs; http://phoebe-project.org) is an open source modeling code for computing theoretical light and radial velocity curves that addresses both problems by incorporating missing physics and by increasing the computational fidelity. In particular, we discuss triangulation as a superior surface discretization algorithm, meshing of rotating single stars, light travel time effects, advanced phase computation, volume conservation in eccentric orbits, and improved computation of local intensity across the stellar surfaces that includes the photon-weighted mode, the enhanced limb darkening treatment, the better reflection treatment, and Doppler boosting. Here we present the concepts on which PHOEBE is built and proofs of concept that demonstrate the increased model fidelity.
Accuracy of Binary Black Hole Waveform Models for Advanced LIGO
Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team
2016-03-01
Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based in post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing on the questions: (i) how well do models capture a binary's late inspiral, where they lack a priori accurate information from PN or NR, and (ii) how accurately do they model binaries with parameters outside their range of calibration? These results guide the choice of templates for future GW searches, and motivate future modeling efforts.
The gravitational-wave memory from eccentric binaries
International Nuclear Information System (INIS)
Favata, Marc
2011-01-01
The nonlinear gravitational-wave memory causes a time-varying but nonoscillatory correction to the gravitational-wave polarizations. It arises from gravitational waves that are sourced by gravitational waves. Previous considerations of the nonlinear memory effect have focused on quasicircular binaries. Here I consider the nonlinear memory from Newtonian orbits with arbitrary eccentricity. Expressions for the waveform polarizations and spin-weighted spherical-harmonic modes are derived for elliptic, hyperbolic, parabolic, and radial orbits. In the hyperbolic, parabolic, and radial cases the nonlinear memory provides a 2.5 post-Newtonian (PN) correction to the leading-order waveforms. This is in contrast to the elliptical and quasicircular cases, where the nonlinear memory corrects the waveform at leading (0PN) order. This difference in PN order arises from the fact that the memory builds up over a short "scattering" time scale in the hyperbolic case, as opposed to a much longer radiation-reaction time scale in the elliptical case. The nonlinear memory corrections presented here complete our knowledge of the leading-order (Peters-Mathews) waveforms for elliptical orbits. These calculations are also relevant for binaries with quasicircular orbits in the present epoch which had, in the past, large eccentricities. Because the nonlinear memory depends sensitively on the past evolution of a binary, I discuss the effect of this early-time eccentricity on the value of the late-time memory in nearly circularized binaries. I also discuss the observability of large "memory jumps" in a binary's past that could arise from its formation in a capture process. Lastly, I provide estimates of the signal-to-noise ratio of the linear and nonlinear memories from hyperbolic and parabolic binaries.
Hermitian self-dual quasi-abelian codes
Directory of Open Access Journals (Sweden)
Herbert S. Palines
2017-12-01
Quasi-abelian codes constitute an important class of linear codes containing theoretically and practically interesting codes such as quasi-cyclic codes, abelian codes, and cyclic codes. In particular, the sub-class consisting of 1-generator quasi-abelian codes contains large families of good codes. Based on the well-known decomposition of quasi-abelian codes, the characterization and enumeration of Hermitian self-dual quasi-abelian codes are given. In the case of 1-generator quasi-abelian codes, we offer necessary and sufficient conditions for such codes to be Hermitian self-dual and give a formula for the number of these codes. In the case where the underlying groups are some $p$-groups, the actual number of resulting Hermitian self-dual quasi-abelian codes is determined.
Numerical Simulations of Wind Accretion in Symbiotic Binaries
de Val-Borro, M.; Karovska, M.; Sasselov, D.
2009-08-01
About half of the binary systems are close enough to each other for mass to be exchanged between them at some point in their evolution, yet the accretion mechanism in wind accreting binaries is not well understood. We study the dynamical effects of gravitational focusing by a binary companion on winds from late-type stars. In particular, we investigate the mass transfer and formation of accretion disks around the secondary in detached systems consisting of an asymptotic giant branch (AGB) mass-losing star and an accreting companion. The presence of mass outflows is studied as a function of mass-loss rate, wind temperature, and binary orbital parameters. A two-dimensional hydrodynamical model is used to study the stability of mass transfer in wind accreting symbiotic binary systems. In our simulations we use an adiabatic equation of state and a modified version of the isothermal approximation, where the temperature depends on the distance from the mass losing star and its companion. The code uses a block-structured adaptive mesh refinement method that allows us to have high resolution at the position of the secondary and resolve the formation of bow shocks and accretion disks. We explore the accretion flow between the components and formation of accretion disks for a range of orbital separations and wind parameters. Our results show the formation of stream flow between the stars and accretion disks of various sizes for certain orbital configurations. For a typical slow and massive wind from an AGB star the flow pattern is similar to a Roche lobe overflow with accretion rates of 10% of the mass loss from the primary. Stable disks with exponentially decreasing density profiles and masses of the order 10^-4 solar masses are formed when wind acceleration occurs at several stellar radii. The disks are geometrically thin with eccentric streamlines and close to Keplerian velocity profiles. The formation of tidal streams and accretion disks is found to be weakly dependent on
The Classification of Complementary Information Set Codes of Lengths 14 and 16
Freibert, Finley
2012-01-01
In the paper "A new class of codes for Boolean masking of cryptographic computations," Carlet, Gaborit, Kim, and Solé defined a new class of rate one-half binary codes called complementary information set (or CIS) codes. The authors then classified all CIS codes of length less than or equal to 12. CIS codes have relations to classical coding theory as they are a generalization of self-dual codes. As stated in the paper, CIS codes also have important practical applications as they m...
From concatenated codes to graph codes
DEFF Research Database (Denmark)
Justesen, Jørn; Høholdt, Tom
2004-01-01
We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...
MONTE CARLO SIMULATIONS OF GLOBULAR CLUSTER EVOLUTION. V. BINARY STELLAR EVOLUTION
International Nuclear Information System (INIS)
Chatterjee, Sourav; Umbreit, Stefan; Rasio, Frederic A.; Fregeau, John M.
2010-01-01
We study the dynamical evolution of globular clusters containing primordial binaries, including full single and binary stellar evolution using our Monte Carlo cluster evolution code updated with an adaptation of the single and binary stellar evolution codes SSE and BSE from Hurley et al. We describe the modifications that we have made to the code. We present several test calculations and comparisons with existing studies to illustrate the validity of the code. We show that our code finds very good agreement with direct N-body simulations including primordial binaries and stellar evolution. We find significant differences in the evolution of the global properties of the simulated clusters using stellar evolution compared with simulations without any stellar evolution. In particular, we find that the mass loss from the stellar evolution acts as a significant energy production channel simply by reducing the total gravitational binding energy and can significantly prolong the initial core contraction phase before reaching the binary-burning quasi-steady state of the cluster evolution. We simulate a large grid of models varying the initial cluster mass, binary fraction, and concentration parameter, and we compare properties of the simulated clusters with those of the observed Galactic globular clusters (GGCs). We find that simply including stellar evolution in our simulations and assuming the typical initial cluster half-mass radius is approximately a few pc independent of mass, our simulated cluster properties agree well with the observed GGC properties such as the core radius and the ratio of the core radius to the half-mass radius. We explore in some detail qualitatively different clusters in different phases of their evolution and construct synthetic Hertzsprung-Russell diagrams for these clusters.
Rose, Mike
2008-01-01
As any reader of "About Campus" knows, binary oppositions contribute to the definitions of institutional types--the trade school versus the liberal arts college, for example. They help define disciplines and subdisciplines and the status differentials among them: consider the difference in intellectual cachet as one moves from linguistics to…
Optimally cloned binary coherent states
DEFF Research Database (Denmark)
Mueller, C. R.; Leuchs, G.; Marquardt, Ch
2017-01-01
their quantum-optimal clones. We analyze the Wigner function and the cumulants of the clones, and we conclude that optimal cloning of binary coherent states requires a nonlinearity above second order. We propose several practical and near-optimal cloning schemes and compare their cloning fidelity to the optimal...
Armas Padilla, M.
2013-01-01
The discovery of the first X-ray binary, Scorpius X-1, by Giacconi et al. (1962), marked the birth of X-ray astronomy. Following that discovery, many additional X-ray sources were found with the first generation of X-ray rockets and observatories (e.g., UHURU and Einstein). The short-timescale
Misclassification in binary choice models
Czech Academy of Sciences Publication Activity Database
Meyer, B. D.; Mittag, Nikolas
2017-01-01
Roč. 200, č. 2 (2017), s. 295-311 ISSN 0304-4076 R&D Projects: GA ČR(CZ) GJ16-07603Y Institutional support: Progres-Q24 Keywords : measurement error * binary choice models * program take-up Subject RIV: AH - Economics OBOR OECD: Economic Theory Impact factor: 1.633, year: 2016
International Nuclear Information System (INIS)
Zapatrin, R.R.
1992-01-01
Given a finite ortholattice L, a *-semigroup is explicitly constructed whose annihilator ortholattice is isomorphic to L. Thus it is shown that any finite quantum logic is the additive part of a binary logic. Some areas of possible applications are outlined. 7 refs
Simplified Linear Equation Solvers users manual
Energy Technology Data Exchange (ETDEWEB)
Gropp, W. [Argonne National Lab., IL (United States); Smith, B. [California Univ., Los Angeles, CA (United States)
1993-02-01
The solution of large sparse systems of linear equations is at the heart of many algorithms in scientific computing. The SLES package is a set of easy-to-use yet powerful and extensible routines for solving large sparse linear systems. The design of the package allows new techniques to be used in existing applications without any source code changes in the applications.
LFSC - Linac Feedback Simulation Code
Energy Technology Data Exchange (ETDEWEB)
Ivanov, Valentin; /Fermilab
2008-05-01
The computer program LFSC (
Vazquez, A I; Gianola, D; Bates, D; Weigel, K A; Heringstad, B
2009-02-01
Clinical mastitis is typically coded as presence/absence during some period of exposure, and records are analyzed with linear or binary data models. Because presence includes cows with multiple episodes, there is loss of information when a count is treated as a binary response. The Poisson model is designed for counting random variables, and although it is used extensively in the epidemiology of mastitis, it has rarely been used for studying the genetics of mastitis. Many models have been proposed for genetic analysis of mastitis, but they have not been formally compared. The main goal of this study was to compare linear (Gaussian), Bernoulli (with logit link), and Poisson models for the purpose of genetic evaluation of sires for mastitis in dairy cattle. The response variables were clinical mastitis (CM; 0, 1) and number of CM cases (NCM; 0, 1, 2, …). Data consisted of records on 36,178 first-lactation daughters of 245 Norwegian Red sires distributed over 5,286 herds. Predictive ability of models was assessed via a 3-fold cross-validation using mean squared error of prediction (MSEP) as the end point. Between-sire variance estimates for NCM were 0.065 in the Poisson model and 0.007 in the linear model. For CM the between-sire variance was 0.093 in the logit model and 0.003 in the linear model. The ratio between herd and sire variances for the models with NCM response was 4.6 and 3.5 for Poisson and linear, respectively, and for the CM models it was 3.7 in both logit and linear models. The MSEP for all cows was similar. However, within healthy animals, MSEP was 0.085 (Poisson), 0.090 (linear for NCM), 0.053 (logit), and 0.056 (linear for CM). For mastitic animals the MSEP values were 1.206 (Poisson), 1.185 (linear for NCM response), 1.333 (logit), and 1.319 (linear for CM response). The models for count variables performed better when predicting diseased animals and performed similarly to each other. Logit and linear models for CM had better predictive ability for healthy
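As an illustrative aside, the information loss the abstract describes (binarizing a count response) can be sketched in a few lines of numpy. This is a toy simulation, not the paper's data: the herd structure is omitted, sire sizes and effect variances are arbitrary, and the "evaluations" are simple per-sire means rather than fitted GLMMs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 sires, 100 daughters each (hypothetical sizes, not the paper's data)
n_sires, n_daughters = 200, 100
sire_effect = rng.normal(0.0, 0.25, size=n_sires)        # log-scale sire effects
lam = np.exp(-1.0 + sire_effect)[:, None]                # expected NCM per daughter
ncm = rng.poisson(lam, size=(n_sires, n_daughters))      # number of CM cases (0, 1, 2, ...)

# Naive "sire evaluations": per-sire mean of the count vs. of the binary response,
# estimated on one half of the daughters and tested on the other half.
train, test = ncm[:, :50], ncm[:, 50:]
pred_count = train.mean(axis=1, keepdims=True)           # count-based prediction
pred_binary = (train > 0).mean(axis=1, keepdims=True)    # binary-based prediction

healthy = (test == 0)
diseased = ~healthy

def msep(pred, target, mask):
    """Mean squared error of prediction over a subset of the test records."""
    return float(((pred - target) ** 2 * mask).sum() / mask.sum())

print("MSEP healthy,  count model :", msep(pred_count, test, healthy))
print("MSEP healthy,  binary model:", msep(pred_binary, (test > 0), healthy))
print("MSEP diseased, count model :", msep(pred_count, test, diseased))
print("MSEP diseased, binary model:", msep(pred_binary, (test > 0), diseased))
```

As in the study, splitting MSEP by health status makes the comparison informative: pooled MSEP is dominated by the (mostly healthy) majority, while the diseased subset separates the models.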
New coding technique for computer generated holograms.
Haskell, R. E.; Culver, B. C.
1972-01-01
A coding technique is developed for recording computer generated holograms on a computer controlled CRT in which each resolution cell contains two beam spots of equal size and equal intensity. This provides a binary hologram in which only the position of the two dots is varied from cell to cell. The amplitude associated with each resolution cell is controlled by selectively diffracting unwanted light into a higher diffraction order. The recording of the holograms is fast and simple.
Ensemble Weight Enumerators for Protograph LDPC Codes
Divsalar, Dariush
2006-01-01
Recently LDPC codes with projected graph, or protograph structures have been proposed. In this paper, finite length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes which have minimum distance that grows linearly with block size. As with irregular ensembles, linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. In this paper the derived results on ensemble weight enumerators show that linear minimum distance condition on degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
APPLICATION OF GAS DYNAMICAL FRICTION FOR PLANETESIMALS. II. EVOLUTION OF BINARY PLANETESIMALS
Energy Technology Data Exchange (ETDEWEB)
Grishin, Evgeni; Perets, Hagai B. [Physics Department, Technion—Israel Institute of Technology, Haifa, 3200003 (Israel)
2016-04-01
One of the first stages of planet formation is the growth of small planetesimals and their accumulation into large planetesimals and planetary embryos. This early stage occurs long before the dispersal of most of the gas from the protoplanetary disk. At this stage gas–planetesimal interactions play a key role in the dynamical evolution of single intermediate-mass planetesimals (m_p ∼ 10^21–10^25 g) through gas dynamical friction (GDF). A significant fraction of all solar system planetesimals (asteroids and Kuiper-belt objects) are known to be binary planetesimals (BPs). Here, we explore the effects of GDF on the evolution of BPs embedded in a gaseous disk using an N-body code with a fiducial external force accounting for GDF. We find that GDF can induce binary mergers on timescales shorter than the disk lifetime for masses above m_p ≳ 10^22 g at 1 au, independent of the binary's initial separation and eccentricity. Such mergers can affect the structure of merger-formed planetesimals, and the GDF-induced binary inspiral can play a role in the evolution of the planetesimal disk. In addition, binaries on eccentric orbits around the star may evolve in the supersonic regime, where the torque reverses and the binary expands, which would enhance the cross section for planetesimal encounters with the binary. Highly inclined binaries with small mass ratios evolve due to the combined effects of Kozai–Lidov (KL) cycles and GDF, which lead to chaotic evolution. Prograde binaries go through semi-regular KL evolution, while retrograde binaries frequently flip their inclination and ∼50% of them are destroyed.
Polarimetry and spectrophotometry of the massive close binary DH Cephei
International Nuclear Information System (INIS)
Corcoran, M.F.
1988-01-01
DH Cep is a massive and close binary and a member of the young open cluster NGC 7380. Spectroscopically, this system is double-lined, classified as type O6 + O6. Photometrically, the system has been known to show small light variations phase-locked to the radial-velocity variations; these light variations characterize the star as an ellipsoidal variable. Four-color linear polarimetry, archival UV spectra taken by IUE and x-ray measures obtained by the Einstein satellite provide the first detailed analysis of this important system. Polarization measures demonstrate the (largely non-phase locked) variability of the circum-binary scattering environment, identify the scattering medium as electrons and indicate a large-scale change in the intrinsic polarization of the system. UV spectral analysis is used to determine the composite photospheric temperature, the component masses and spectral classifications, the degree of mass loss, and the distribution of interstellar matter along the line of sight to the binary. Measures obtained by the Einstein satellite of the x-ray emission from the system indicate that DH Cep is a strong source of hard x-rays. A model of the binary is developed
Modeling binary correlated responses using SAS, SPSS and R
Wilson, Jeffrey R
2015-01-01
Statistical tools to analyze correlated binary data are spread out in the existing literature. This book makes these tools accessible to practitioners in a single volume. Chapters cover recently developed statistical tools and statistical packages that are tailored to analyzing correlated binary data. The authors showcase both traditional and new methods for application to health-related research. Data and computer programs will be publicly available in order for readers to replicate model development, but learning a new statistical language is not necessary with this book. The inclusion of code for R, SAS, and SPSS allows for easy implementation by readers. For readers interested in learning more about the languages, though, there are short tutorials in the appendix. Accompanying data sets are available for download through the book's website. Data analysis presented in each chapter will provide step-by-step instructions so these new methods can be readily applied to projects. Researchers and graduate stu...
Tracer diffusion study in binary alloys
International Nuclear Information System (INIS)
Bocquet, Jean-Louis
1973-01-01
The diffusional properties of dilute alloys are quite well described with 5 vacancy jump frequencies: the diffusion experiments allow us to determine only 3 jump frequency ratios. The first experiment set, found by Howard and Manning, was used in order to determine the 3 frequency ratios in the dilute Cu-Fe alloy. N.V. Doan has shown that the isotope effect measurements may be replaced by easier electromigration experiments: this new method was used with success for the dilute Ag-Zn and Ag-Cd alloys. Two effects which take place in less dilute alloys cannot be explained with the 5-frequency model; these are the linear enhancement of solute diffusion and the departure from linear enhancement of solvent diffusion versus solute concentration. To explain these effects, we have had to take into account the influence of solute pairs on diffusion via 53 new vacancy jump frequencies. Diffusion in a concentrated alloy can be described with a quasi-chemical approach: we show that a description with 'surrounded atoms' allows the simultaneous explanation of the thermodynamical properties of the binary solid solution and the dependence of atomic jump frequencies on the local concentration of the alloy. In this model, the two atomic species have a jump frequency spectrum at their disposal, which seems to greatly modify Manning's correlation analysis. (author) [fr
Astronomy of binary and multiple stars
International Nuclear Information System (INIS)
Tokovinin, A.A.
1984-01-01
Various types of binary stars and methods for their observation are described in popular form. Some models of the formation and evolution of binary and multiple star systems are presented. It is concluded that the formation of binary and multiple stars is a regular stage in the process of star formation.
Coevolution of Binaries and Circumbinary Gaseous Disks
Fleming, David; Quinn, Thomas R.
2018-04-01
The recent discoveries of circumbinary planets by Kepler raise questions for contemporary planet formation models. Understanding how these planets form requires characterizing their formation environment, the circumbinary protoplanetary disk, and how the disk and binary interact. The central binary excites resonances in the surrounding protoplanetary disk that drive evolution in both the binary orbital elements and in the disk. To probe how these interactions impact both binary eccentricity and disk structure evolution, we ran N-body smooth particle hydrodynamics (SPH) simulations of gaseous protoplanetary disks surrounding binaries based on Kepler 38 for 10^4 binary orbital periods for several initial binary eccentricities. We find that nearly circular binaries weakly couple to the disk via a parametric instability and excite disk eccentricity growth. Eccentric binaries strongly couple to the disk, causing eccentricity growth for both the disk and binary. Disks around sufficiently eccentric binaries develop an m = 1 spiral wave launched from the 1:3 eccentric outer Lindblad resonance (EOLR). This wave corresponds to an alignment of the gas particles' longitudes of periastron. We find that in all simulations, the binary semi-major axis decays due to dissipation from the viscous disk.
Formation and evolution of compact binaries
Sluijs, Marcel Vincent van der
2006-01-01
In this thesis we investigate the formation and evolution of compact binaries. Chapters 2 through 4 deal with the formation of luminous, ultra-compact X-ray binaries in globular clusters. We show that the proposed scenario of magnetic capture produces too few ultra-compact X-ray binaries to explain
Displacement measurement system for linear array detector
International Nuclear Information System (INIS)
Zhang Pengchong; Chen Ziyu; Shen Ji
2011-01-01
This paper presents a linear displacement measurement system based on an encoder. The system includes a displacement encoder, optical lens, and readout circuit. The displacement readout unit includes a linear CCD and its drive circuit, two amplifier circuits, a second-order Butterworth low-pass filter, and a binarization circuit. The coding scheme is introduced, signal waveforms from various parts of the system are given, and finally linear test results are presented. The experimental results are satisfactory. (authors)
The Jeans Condition and Collapsing Molecular Cloud Cores: Filaments or Binaries?
International Nuclear Information System (INIS)
Boss, Alan P.; Fisher, Robert T.; Klein, Richard I.; McKee, Christopher F.
2000-01-01
The 1997 and 1998 studies by Truelove and colleagues introduced the Jeans condition as a necessary condition for avoiding artificial fragmentation during protostellar collapse calculations. They found that when the Jeans condition was properly satisfied with their adaptive mesh refinement (AMR) code, an isothermal cloud with an initial Gaussian density profile collapsed to form a thin filament rather than the binary or quadruple protostar systems found in previous calculations. Using a completely different self-gravitational hydrodynamics code introduced by Boss and Myhill in 1992 (B and M), we present here calculations that reproduce the filamentary solution first obtained by Truelove et al. in 1997. The filamentary solution only emerged with very high spatial resolution with the B and M code, with effectively 12,500 radial grid points (R12500). Reproducing the filamentary collapse solution appears to be an excellent means for testing the reliability of self-gravitational hydrodynamics codes, whether grid-based or particle-based. We then show that in the more physically realistic case of an identical initial cloud with nonisothermal heating (calculated in the Eddington approximation with code B and M), thermal retardation of the collapse permits the Gaussian cloud to fragment into a binary protostar system at the same maximum density where the isothermal collapse yields a thin filament. However, the binary clumps soon thereafter evolve into a central clump surrounded by spiral arms containing two more clumps. A roughly similar evolution is obtained using the AMR code with a barotropic equation of state--formation of a transient binary, followed by decay of the binary to form a central object surrounded by spiral arms, though in this case the spiral arms do not form clumps. When the same barotropic equation of state is used with the B and M code, the agreement with the initial phases of the AMR calculation is quite good, showing that these two codes yield mutually
Code Samples Used for Complexity and Control
Ivancevic, Vladimir G.; Reid, Darryn J.
2015-11-01
The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents
Algebraic coding theory over finite commutative rings
Dougherty, Steven T
2017-01-01
This book provides a self-contained introduction to algebraic coding theory over finite Frobenius rings. It is the first to offer a comprehensive account on the subject. Coding theory has its origins in the engineering problem of effective electronic communication where the alphabet is generally the binary field. Since its inception, it has grown as a branch of mathematics, and has since been expanded to consider any finite field, and later also Frobenius rings, as its alphabet. This book presents a broad view of the subject as a branch of pure mathematics and relates major results to other fields, including combinatorics, number theory and ring theory. Suitable for graduate students, the book will be of interest to anyone working in the field of coding theory, as well as algebraists and number theorists looking to apply coding theory to their own work.
On quadratic residue codes and hyperelliptic curves
Directory of Open Access Journals (Sweden)
David Joyner
2008-01-01
For an odd prime p and each non-empty subset S ⊂ GF(p), consider the hyperelliptic curve X_S defined by y^2 = f_S(x), where f_S(x) = ∏_{a∈S}(x − a). Using a connection between binary quadratic residue codes and hyperelliptic curves over GF(p), this paper investigates how coding theory bounds give rise to bounds such as the following example: for all sufficiently large primes p there exists a subset S ⊂ GF(p) for which the bound |X_S(GF(p))| > 1.39p holds. We also use the quasi-quadratic residue codes defined below to construct an example of a formally self-dual optimal code whose zeta function does not satisfy the ``Riemann hypothesis.''
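As an illustrative aside, the point count |X_S(GF(p))| over the affine points can be reproduced for a toy case in a few lines of Python; p = 11 and S = {1, 2, 3} are arbitrary choices for illustration, not values taken from the paper.

```python
# Count affine points on y^2 = f_S(x) over GF(p) for a small example.
p = 11
S = {1, 2, 3}

def f(x):
    """f_S(x) = prod_{a in S} (x - a) mod p."""
    prod = 1
    for a in S:
        prod = (prod * (x - a)) % p
    return prod

def chi(t):
    """Legendre symbol via Euler's criterion: t^((p-1)/2) mod p, in {-1, 0, 1}."""
    if t % p == 0:
        return 0
    return 1 if pow(t, (p - 1) // 2, p) == 1 else -1

# For each x there are exactly 1 + chi(f(x)) solutions y to y^2 = f(x).
count = sum(1 + chi(f(x)) for x in range(p))

# Brute-force check over all (x, y) pairs.
brute = sum(1 for x in range(p) for y in range(p) if (y * y - f(x)) % p == 0)
assert count == brute
print(count, "affine points on y^2 = f_S(x) over GF(%d)" % p)
```

The character-sum form of the count is the quantity the coding-theory bounds in the paper constrain; the brute-force loop is only a sanity check.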
Heller, René
2018-03-01
The SETI Encryption code, written in Python, creates a message for use in testing the decryptability of a simulated incoming interstellar message. The code uses images in a portable bit map (PBM) format, then writes the corresponding bits into the message, and finally returns both a PBM image and a text (TXT) file of the entire message. The natural constants (c, G, h) and the wavelength of the message are defined in the first few lines of the code, followed by the reading of the input files and their conversion into 757 strings of 359 bits to give one page. Each header of a page, i.e. the little-endian binary code translation of the tempo-spatial yardstick, is calculated and written on-the-fly for each page.
Joslin, Ronald D.; Streett, Craig L.; Chang, Chau-Lyan
1992-01-01
Spatially evolving instabilities in a boundary layer on a flat plate are computed by direct numerical simulation (DNS) of the incompressible Navier-Stokes equations. In a truncated physical domain, a nonstaggered mesh is used for the grid. A Chebyshev-collocation method is used normal to the wall; finite difference and compact difference methods are used in the streamwise direction; and a Fourier series is used in the spanwise direction. For time stepping, implicit Crank-Nicolson and explicit Runge-Kutta schemes are used with the time-splitting method. The influence-matrix technique is used to solve the pressure equation. At the outflow boundary, the buffer-domain technique is used to prevent convective wave reflection or upstream propagation of information from the boundary. Results of the DNS are compared with those from both linear stability theory (LST) and parabolized stability equation (PSE) theory. Computed disturbance amplitudes and phases are in very good agreement with those of LST (for small inflow disturbance amplitudes). A measure of the sensitivity of the inflow condition is demonstrated with both LST and PSE theory used to approximate inflows. Although the DNS numerics are very different from those of PSE theory, the results are in good agreement. A small discrepancy that does occur is likely a result of the variation in PSE boundary condition treatment in the far field. Finally, a small-amplitude wave triad is forced at the inflow, and simulation results are compared with those of LST. Again, very good agreement is found between DNS and LST results for the 3-D simulations, the implication being that the disturbance amplitudes are sufficiently small that nonlinear interactions are negligible.
Applications of Coding in Network Communications
Chang, Christopher SungWook
2012-01-01
This thesis uses the tool of network coding to investigate fast peer-to-peer file distribution, anonymous communication, robust network construction under uncertainty, and prioritized transmission. In a peer-to-peer file distribution system, we use a linear optimization approach to show that the network coding framework significantly simplifies…
Constructing binary black hole initial data with high mass ratios and spins
Ossokine, Serguei; Foucart, Francois; Pfeiffer, Harald; Szilagyi, Bela; Simulating Extreme Spacetimes Collaboration
2015-04-01
Binary black hole systems have now been successfully modelled in full numerical relativity by many groups. In order to explore high-mass-ratio (larger than 1:10), high-spin systems (above 0.9 of the maximal BH spin), we revisit the initial-data problem for binary black holes. The initial-data solver in the Spectral Einstein Code (SpEC) was not able to solve for such initial data reliably and robustly. I will present recent improvements to this solver, among them adaptive mesh refinement and control of motion of the center of mass of the binary, and will discuss the much larger region of parameter space this code can now address.
GPU accelerated manifold correction method for spinning compact binaries
Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying
2018-04-01
The graphics processing unit (GPU) acceleration of the manifold correction algorithm, based on the compute unified device architecture (CUDA) technology, is designed to simulate the dynamical evolution of the post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. Numerical comparisons show that the manifold correction method executed on the GPU agrees well in accuracy with the same codes executed solely on the central processing unit (CPU). Through the use of shared memory and register optimization techniques, and without additional hardware costs, the speedup of the GPU implementation reaches nearly 13 times that of the CPU execution for a phase-space scan (including 314 × 314 orbits). In addition, the GPU-accelerated manifold correction method is used to numerically study how the dynamics are affected by the spin-induced quadrupole-monopole interaction for black hole binary systems.
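As an illustrative aside, the idea of a manifold correction (projecting the numerical solution back onto a conserved-quantity surface after each step) can be shown on a deliberately crude toy: explicit Euler for the harmonic oscillator H(q, p) = (q² + p²)/2, with a radial rescaling onto the initial energy surface. This is a stand-in for the scaling-type corrections used with PN Hamiltonians, not the paper's CUDA implementation.

```python
import numpy as np

def step(q, p, dt):
    """Explicit Euler step for the harmonic oscillator (drifts in energy)."""
    return q + dt * p, p - dt * q

def correct(q, p, E0):
    """Manifold correction: rescale the state radially onto H = E0."""
    s = np.sqrt(2.0 * E0 / (q * q + p * p))
    return q * s, p * s

q, p = 1.0, 0.0
E0 = 0.5 * (q * q + p * p)
dt = 1e-2
for _ in range(10000):
    q, p = step(q, p, dt)
    q, p = correct(q, p, E0)

print("energy error:", abs(0.5 * (q * q + p * p) - E0))  # near machine precision
```

Without the `correct` call, Euler's energy grows by a factor (1 + dt²) per step; with it, the trajectory stays pinned to the energy manifold, which is the property the paper's method preserves for far more complicated Hamiltonians.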
Permutation Entropy for Random Binary Sequences
Directory of Open Access Journals (Sweden)
Lingfeng Liu
2015-12-01
In this paper, we generalize the permutation entropy (PE) measure to binary sequences, which is based on Shannon's entropy, and theoretically analyze this measure for random binary sequences. We deduce the theoretical value of PE for random binary sequences, which can be used to measure the randomness of binary sequences. We also reveal the relationship between this PE measure and other randomness measures, such as Shannon's entropy and Lempel–Ziv complexity. The results show that PE is consistent with these two measures. Furthermore, we use PE as one of the randomness measures to evaluate the randomness of chaotic binary sequences.
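As an illustrative aside, a basic permutation entropy for binary sequences can be sketched in a few lines of Python. One tie-breaking convention (by position, i.e. a stable argsort of each window) is assumed here; the paper's generalization may differ in detail. Under this convention, for order m = 3 the eight equally likely binary windows map onto five ordinal patterns with probabilities (1/2, 1/8, 1/8, 1/8, 1/8), so the PE of a random binary sequence works out to 2 bits.

```python
import math
import random
from collections import Counter

def permutation_entropy(seq, m=3):
    """Shannon entropy (in bits) of the ordinal patterns of order m.
    Ties are broken by position, i.e. a stable argsort of each window."""
    patterns = Counter()
    for i in range(len(seq) - m + 1):
        window = seq[i:i + m]
        pattern = tuple(sorted(range(m), key=lambda k: (window[k], k)))
        patterns[pattern] += 1
    n = sum(patterns.values())
    # sum p * log2(1/p), written so a single pattern gives exactly 0.0
    return sum((c / n) * math.log2(n / c) for c in patterns.values())

random.seed(1)
rand_bits = [random.randint(0, 1) for _ in range(10000)]
print("PE of a constant sequence:", permutation_entropy([0] * 100))  # 0.0
print("PE of random bits (m=3) :", permutation_entropy(rand_bits))   # close to 2 bits
```

A constant sequence has a single pattern and hence PE = 0, while the empirical value for random bits approaches the 2-bit theoretical value as the sequence grows.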
Emission-line diagnostics of nearby H II regions including interacting binary populations
Xiao, Lin; Stanway, Elizabeth R.; Eldridge, J. J.
2018-06-01
We present numerical models of the nebular emission from H II regions around young stellar populations over a range of compositions and ages. The synthetic stellar populations include both single stars and interacting binary stars. We compare these models to the observed emission lines of 254 H II regions of 13 nearby spiral galaxies and 21 dwarf galaxies drawn from archival data. The models are created using the combination of the BPASS (Binary Population and Spectral Synthesis) code with the photoionization code CLOUDY to study the differences caused by the inclusion of interacting binary stars in the stellar population. We obtain agreement with the observed emission line ratios from the nearby star-forming regions and discuss the effect of binary-star evolution pathways on the nebular ionization of H II regions. We find that at population ages above 10 Myr, single-star models rapidly decrease in flux and ionization strength, while binary-star models still produce strong flux and high [O III]/H β ratios. Our models can reproduce the metallicity of H II regions from spiral galaxies, but we find higher metallicities than previously estimated for the H II regions from dwarf galaxies. Comparing the equivalent width of H β emission between models and observations, we find that accounting for ionizing photon leakage can affect age estimates for H II regions. When it is included, the typical age derived for H II regions is 5 Myr from single-star models, and up to 10 Myr with binary-star models. This is due to the existence of binary-star evolution pathways, which produce more hot Wolf-Rayet and helium stars at older ages. For future reference, we calculate new BPASS binary maximal starburst lines as a function of metallicity, and for the total model population, and present these in Appendix A.
Linear Algebra and Smarandache Linear Algebra
Vasantha, Kandasamy
2003-01-01
The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...
Massive Binary Black Holes in the Cosmic Landscape
Colpi, Monica; Dotti, Massimo
2011-02-01
Binary black holes occupy a special place in our quest for understanding the evolution of galaxies along cosmic history. If massive black holes grow at the center of (pre-)galactic structures that experience a sequence of merger episodes, then dual black holes form as inescapable outcome of galaxy assembly, and can in principle be detected as powerful dual quasars. But, if the black holes reach coalescence, during their inspiral inside the galaxy remnant, then they become the loudest sources of gravitational waves ever in the universe. The Laser Interferometer Space Antenna is being developed to reveal these waves that carry information on the mass and spin of these binary black holes out to very large look-back times. Nature seems to provide a pathway for the formation of these exotic binaries, and a number of key questions need to be addressed: How do massive black holes pair in a merger? Depending on the properties of the underlying galaxies, do black holes always form a close Keplerian binary? If a binary forms, does hardening proceed down to the domain controlled by gravitational wave back reaction? What is the role played by gas and/or stars in braking the black holes, and on which timescale does coalescence occur? Can the black holes accrete on flight and shine during their pathway to coalescence? After outlining key observational facts on dual/binary black holes, we review the progress made in tracing their dynamics in the habitat of a gas-rich merger down to the smallest scales ever probed with the help of powerful numerical simulations. N-Body/hydrodynamical codes have proven to be vital tools for studying their evolution, and progress in this field is expected to grow rapidly in the effort to describe, in full realism, the physics of stars and gas around the black holes, starting from the cosmological large scale of a merger. If detected in the new window provided by the upcoming gravitational wave experiments, binary black holes will provide a deep view
Regularized Label Relaxation Linear Regression.
Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu
2018-04-01
Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are, respectively, based on -norm and -norm loss functions, are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time.
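As an illustrative aside, the label relaxation idea (replacing the strict binary label matrix Y with a slack matrix Y + B ⊙ M, where B = 2Y − 1 and M ≥ 0) can be sketched with a plain ridge regression in numpy. This toy omits the paper's class compactness graph regularizer, uses an ordinary ℓ2 penalty in its place, and invents the data sizes, so it is a sketch of the relaxation step only, not the proposed method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 classes in 5 dimensions (sizes are arbitrary for illustration).
n, d, c = 90, 5, 3
labels = np.repeat(np.arange(c), n // c)
X = rng.normal(size=(n, d))
X[np.arange(n), labels] += 3.0                      # shift class k along axis k
Y = np.eye(c)[labels]                               # strict binary label matrix
B = 2.0 * Y - 1.0                                   # +1 for target class, -1 otherwise

lam = 0.1
M = np.zeros_like(Y)                                # nonnegative relaxation matrix
A = np.linalg.inv(X.T @ X + lam * np.eye(d)) @ X.T  # precomputed ridge solver

for _ in range(30):
    T = Y + B * M                                   # relaxed (slack) label matrix
    W = A @ T                                       # ridge regression onto T
    M = np.maximum(B * (X @ W - Y), 0.0)            # closed-form update keeps M >= 0

pred = np.argmax(X @ W, axis=1)
print("training accuracy:", (pred == labels).mean())
```

The alternating loop shows the "more freedom to fit the labels" effect: M only ever pushes targets outward (above 1 for the true class, below 0 for the rest), so margins can grow without penalizing already-correct samples.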