Isometries and binary images of linear block codes over ℤ4 + uℤ4 and ℤ8 + uℤ8
Sison, Virgilio; Remillion, Monica
2017-10-01
Let 𝔽2 be the binary field and ℤ2^r the residue class ring of integers modulo 2^r, where r is a positive integer. For the finite 16-element commutative local Frobenius non-chain ring ℤ4 + uℤ4, where u is nilpotent of index 2, two weight functions are considered, namely the Lee weight and the homogeneous weight. With the appropriate application of these weights, isometric maps from ℤ4 + uℤ4 to the binary spaces 𝔽2^4 and 𝔽2^8, respectively, are established via the composition of other weight-based isometries. The classical Hamming weight is used on the binary space. The resulting isometries are then applied to linear block codes over ℤ4 + uℤ4 whose images are binary codes of predicted length, which may or may not be linear. Certain lower and upper bounds on the minimum distances of the binary images are also derived in terms of the parameters of the ℤ4 + uℤ4 codes. Several new codes and their images are constructed as illustrative examples. An analogous procedure is performed successfully on the ring ℤ8 + uℤ8, where u^2 = 0, which is a commutative local Frobenius non-chain ring of order 64. It turns out that the method is possible in general for the class of rings ℤ2^r + uℤ2^r, where u^2 = 0, for any positive integer r, using the generalized Gray map from ℤ2^r to 𝔽2^(2^(r-1)).
Generating Constant Weight Binary Codes
Knight, D.G.
2008-01-01
The determination of bounds for A(n, d, w), the maximum possible number of binary vectors of length n, weight w, and pairwise Hamming distance no less than d, is a classic problem in coding theory. Such sets of vectors have many applications. A description is given of how the problem can be used in a first-year undergraduate computational…
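The quantity A(n, d, w) can be explored computationally for small parameters. As an illustration (not from the article, and with our own function name), a greedy lexicographic construction yields a lower bound on A(n, d, w):

```python
from itertools import combinations

def greedy_constant_weight(n, d, w):
    """Greedy lexicographic construction of a binary constant-weight code
    with length n, weight w, and pairwise Hamming distance >= d.  The size
    of the returned code is a lower bound on A(n, d, w)."""
    code = []  # codewords kept as sets of support (one-bit) positions
    for support in combinations(range(n), w):
        s = set(support)
        # two weight-w words whose supports overlap in t positions
        # have Hamming distance 2*(w - t)
        if all(2 * (w - len(s & c)) >= d for c in code):
            code.append(s)
    return code

# Distinct weight-2 words always differ in at least 2 positions, so all
# C(4,2) = 6 words survive and the bound is tight here: A(4,2,2) = 6.
print(len(greedy_constant_weight(4, 2, 2)))  # 6
```

The greedy bound is tight for tiny parameters such as the one above, but in general it only provides a starting point for the bound-improvement problem the abstract describes.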
Some Bounds on Binary LCD Codes
Galvez, Lucky; Kim, Jon-Lark; Lee, Nari; Roe, Young Gun; Won, Byung-Sun
2017-01-01
A linear code with a complementary dual (or LCD code) is defined to be a linear code $C$ whose dual code $C^{\perp}$ satisfies $C \cap C^{\perp} = \{\mathbf{0}\}$. Let $LCD[n,k]$ denote the maximum of possible values of $d$ among $[n,k,d]$ binary LCD codes. We give exact values of $LCD[n,k]$ for $1 \le k \le n \le 12$. We also show that $LCD[n,n-i]=2$ for any $i\geq 2$ and $n\geq 2^{i}$. Furthermore, we show that $LCD[n,k]\leq LCD[n,k-1]$ for $k$ odd and $LCD[...
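The LCD property can be tested from a generator matrix. By Massey's well-known criterion, a binary code with generator matrix G is LCD if and only if G·Gᵀ is nonsingular over GF(2). The sketch below (our own illustration, not the authors' method) implements that test in plain Python:

```python
def gf2_nonsingular(M):
    """Gaussian elimination over GF(2); True iff square matrix M is invertible."""
    M = [row[:] for row in M]
    n = len(M)
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col]), None)
        if pivot is None:
            return False
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[col])]
    return True

def is_lcd(G):
    """Massey's criterion: C is LCD iff G * G^T is nonsingular over GF(2)."""
    k = len(G)
    GGt = [[sum(a & b for a, b in zip(G[i], G[j])) % 2 for j in range(k)]
           for i in range(k)]
    return gf2_nonsingular(GGt)

print(is_lcd([[1, 1]]))                       # False: C = {00, 11} is self-dual
print(is_lcd([[1, 0, 1, 1], [0, 1, 1, 0]]))   # True: hull of C is trivial
```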
Random linear codes in steganography
Directory of Open Access Journals (Sweden)
Kamil Kaczyński
2016-12-01
Full Text Available Syndrome coding using linear codes is a technique that allows improvement of the parameters of steganographic algorithms. The use of random linear codes gives great flexibility in choosing the parameters of the linear code. In parallel, it offers easy generation of the parity check matrix. In this paper, a modification of the LSB algorithm is presented. A random linear [8, 2] code was used as the base for the modification. The proposed algorithm was implemented and its parameters were evaluated in practice on test images. Keywords: steganography, random linear codes, RLC, LSB
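The syndrome-coding step itself can be sketched as follows. This is a generic matrix-embedding illustration, not the paper's exact algorithm: the paper uses a random [8, 2] code, whereas here a Hamming [7, 4] parity-check matrix is assumed for concreteness, since it guarantees at most one changed cover bit per 3 embedded bits:

```python
from itertools import product

def embed(cover, H, message):
    """Syndrome coding (matrix embedding): flip a minimum-weight pattern e
    in the cover bits so that H * (cover XOR e) = message over GF(2).
    H is an (n-k) x n parity-check matrix of an [n, k] code; the brute-force
    search over e is fine for small n."""
    n = len(cover)
    # target syndrome for e:  H*e = (H*cover) XOR message
    target = [(sum(h & c for h, c in zip(row, cover)) % 2) ^ m
              for row, m in zip(H, message)]
    best = None
    for e in product([0, 1], repeat=n):
        if all(sum(h & x for h, x in zip(row, e)) % 2 == t
               for row, t in zip(H, target)):
            if best is None or sum(e) < sum(best):
                best = e
    return [c ^ x for c, x in zip(cover, best)]

# Hamming [7,4] parity-check matrix: any 3-bit message embeds with <= 1 change
H7 = [[1, 0, 1, 0, 1, 0, 1],
      [0, 1, 1, 0, 0, 1, 1],
      [0, 0, 0, 1, 1, 1, 1]]
cover = [1, 0, 1, 1, 0, 0, 1]
stego = embed(cover, H7, [1, 1, 0])
print(sum(c != s for c, s in zip(cover, stego)))  # 1
```

The recipient extracts the message by simply computing H·stego over GF(2); no shared codebook beyond H is needed.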
Directory of Open Access Journals (Sweden)
Rumen Daskalov
2017-07-01
Full Text Available Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
Decoding Algorithms for Random Linear Network Codes
DEFF Research Database (Denmark)
Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank
2011-01-01
We consider the problem of efficient decoding of a random linear code over a finite field. In particular we are interested in the case where the code is random and relatively sparse, and we use the binary finite field as an example. The goal is to decode the data using fewer operations to potentially...... achieve a high coding throughput and reduce energy consumption. We use an on-the-fly version of the Gauss-Jordan algorithm as a baseline, and provide several simple improvements to reduce the number of operations needed to perform decoding. Our tests show that the improvements can reduce the number...
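A minimal on-the-fly Gauss-Jordan decoder over GF(2) can be sketched as below. This is a plain baseline illustration in the spirit of the paper, without the paper's optimizations; all names and the toy generation are our own:

```python
class OnTheFlyDecoder:
    """Incremental (on-the-fly) Gauss-Jordan decoding of random linear
    network codes over GF(2).  Each received packet is a pair
    (coding coefficients, payload); the payload is the XOR-combination
    of the original packets selected by the coefficients."""

    def __init__(self, generation_size):
        self.g = generation_size
        self.rows = {}  # pivot position -> (coefficient vector, payload)

    def receive(self, coeffs, payload):
        coeffs, payload = coeffs[:], payload[:]
        # forward-reduce the incoming packet by the stored pivot rows
        for pivot, (pc, pp) in sorted(self.rows.items()):
            if coeffs[pivot]:
                coeffs = [a ^ b for a, b in zip(coeffs, pc)]
                payload = [a ^ b for a, b in zip(payload, pp)]
        if any(coeffs):  # innovative packet: install a new pivot row
            pivot = coeffs.index(1)
            # backward-substitute so every stored row is zero at the new pivot
            for k, (pc, pp) in list(self.rows.items()):
                if pc[pivot]:
                    self.rows[k] = ([a ^ b for a, b in zip(pc, coeffs)],
                                    [a ^ b for a, b in zip(pp, payload)])
            self.rows[pivot] = (coeffs, payload)
        return len(self.rows) == self.g  # True once the generation is decoded

# toy generation of three packets, mixed over GF(2)
packets = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]

def mix(indices):
    coeffs = [1 if i in indices else 0 for i in range(3)]
    payload = [0, 0, 0]
    for i in indices:
        payload = [a ^ b for a, b in zip(payload, packets[i])]
    return coeffs, payload

dec = OnTheFlyDecoder(3)
for combo in [(0, 1), (1, 2), (0, 1, 2)]:
    done = dec.receive(*mix(combo))
print(done, [dec.rows[i][1] for i in range(3)] == packets)  # True True
```

Because rows are fully reduced as packets arrive (Gauss-Jordan rather than plain Gaussian elimination), each pivot row holds a decoded source packet as soon as the generation reaches full rank.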
Statistics of clusters in binary linear lattices
Felderhof, B.U.
The statistics of clusters in binary linear lattices is studied on the assumption that the relative weight of an A_l or B_m cluster is determined only by its size l or m, and is independent of the location of the cluster on the chain. The average cluster numbers and the variance of their fluctuations
Squares of Random Linear Codes
DEFF Research Database (Denmark)
Cascudo Pueyo, Ignacio; Cramer, Ronald; Mirandola, Diego
2015-01-01
a positive answer, for codes of dimension $k$ and length roughly $\frac{1}{2}k^2$ or smaller. Moreover, the convergence speed is exponential if the difference $k(k+1)/2-n$ is at least linear in $k$. The proof uses random coding and combinatorial arguments, together with algebraic tools involving the precise...... computation of the number of quadratic forms of a given rank, and the number of their zeros....
Linear network error correction coding
Guang, Xuan
2014-01-01
There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction, representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an
A decoding method of an n length binary BCH code through (n + 1)n length binary cyclic code
Directory of Open Access Journals (Sweden)
TARIQ SHAH
2013-09-01
Full Text Available For a given binary BCH code Cn of length n = 2^s - 1 generated by a polynomial of degree r, there is no binary BCH code of length (n + 1)n generated by a generalized polynomial of degree 2r. However, there does exist a binary cyclic code C(n+1)n of length (n + 1)n such that the binary BCH code Cn is embedded in C(n+1)n. Accordingly, a higher code rate is attained through the binary cyclic code C(n+1)n for a binary BCH code Cn. Furthermore, a proposed algorithm facilitates the decoding of a binary BCH code Cn through the decoding of the binary cyclic code C(n+1)n, while the codes Cn and C(n+1)n have the same minimum Hamming distance.
Forms and Linear Network Codes
DEFF Research Database (Denmark)
Hansen, Johan P.
We present a general theory to obtain linear network codes utilizing forms and obtain explicit families of equidimensional vector spaces, in which any pair of distinct vector spaces intersect in the same small dimension. The theory is inspired by the methods of the author utilizing the osculating...... spaces of Veronese varieties. Linear network coding transmits information in terms of a basis of a vector space and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang introduced a metric on the set of vector spaces and showed that a minimal...... distance decoder for this metric achieves correct decoding if the dimension of the intersection of the transmitted and received vector space is sufficiently large. The vector spaces in our construction are equidistant in the above metric and the distance between any pair of vector spaces is large making...
On the linear programming bound for linear Lee codes.
Astola, Helena; Tabus, Ioan
2016-01-01
Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to a fast execution, which allows efficient computation of the bounds for large parameter values of the linear codes.
Speech coding: code-excited linear prediction
Bäckström, Tom
2017-01-01
This book provides scientific understanding of the most central techniques used in speech coding, both for advanced students as well as professionals with a background in speech, audio and/or digital signal processing. It provides a clear connection between the whys, hows and whats, thus enabling a clear view of the necessity, purpose and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: What do we want to achieve, and especially, why is this goal important? Resource/Information: What information is available, and how can it be useful? Resource/Platform: What kind of platforms are we working with, and what are their capabilities and restrictions? This includes computational, memory and acoustic properties, and the transmission capacity of devices used. The book goes on to address Solutions: Which solutions have been proposed, and how can they be used to reach the stated goals, and ...
Learning Short Binary Codes for Large-scale Image Retrieval.
Liu, Li; Yu, Mengyang; Shao, Ling
2017-03-01
Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proved to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually code lengths shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of data, MCR can generate a one-bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost-values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve performance comparable to the state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
Linear codes associated to determinantal varieties
DEFF Research Database (Denmark)
Beelen, Peter; Ghorpade, Sudhir R.; Hasan, Sartaj Ul
2015-01-01
We consider a class of linear codes associated to projective algebraic varieties defined by the vanishing of minors of a fixed size of a generic matrix. It is seen that the resulting code has only a small number of distinct weights. The case of varieties defined by the vanishing of 2×2 minors...... of matrices of rank 1 in a linear space of matrices of a given dimension over a finite field. In particular, we determine the structure and the maximum possible dimension of linear spaces of matrices in which every nonzero matrix has rank 1....
Binary codes with impulse autocorrelation functions for dynamic experiments
International Nuclear Information System (INIS)
Corran, E.R.; Cummins, J.D.
1962-09-01
A series of binary codes exist which have autocorrelation functions approximating to an impulse function. Signals whose behaviour in time can be expressed by such codes have spectra which are 'whiter' over a limited bandwidth and for a finite time than signals from a white noise generator. These codes are used to determine system dynamic responses using the correlation technique. Programmes have been written to compute codes of arbitrary length and to compute 'cyclic' autocorrelation and cross-correlation functions. Complete listings of these programmes are given, and a code of 1019 bits is presented. (author)
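A classic family with this property is the maximal-length (m-)sequence produced by a linear feedback shift register: the ±1-mapped sequence has periodic autocorrelation equal to the sequence length at zero shift and −1 at every other shift. The sketch below (our own illustration, not the paper's 1019-bit code) demonstrates this for the degree-3 primitive polynomial x³ + x + 1:

```python
def lfsr_msequence(taps, degree):
    """Binary m-sequence of period 2**degree - 1 from a Fibonacci LFSR.
    `taps` lists the feedback positions as 1-indexed exponents of the
    primitive polynomial, e.g. [3, 1] for x^3 + x + 1."""
    state = [1] * degree
    seq = []
    for _ in range(2 ** degree - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

def cyclic_autocorrelation(bits, shift):
    """Periodic ('cyclic') autocorrelation of the +/-1-mapped sequence."""
    n = len(bits)
    s = [1 - 2 * b for b in bits]  # map 0 -> +1, 1 -> -1
    return sum(s[i] * s[(i + shift) % n] for i in range(n))

seq = lfsr_msequence([3, 1], 3)  # period 7
print([cyclic_autocorrelation(seq, t) for t in range(7)])
# peak 7 at shift 0; all off-peak values equal -1 (impulse-like)
```

The −1 off-peak value follows from the shift-and-add property of m-sequences, which is what makes their spectra "whiter" than a truncated noise record of the same length.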
Garai, Sisir Kumar
2011-07-20
Conversion of optical data from decimal to binary format is very important in optical computing and optical signal processing. There are many binary code systems to represent decimal numbers, the most common being the binary coded decimal (BCD) and the Gray code system. There is a wide choice of BCD codes, one of which is the natural BCD having the weighted code 8421, by means of which it is possible to represent a decimal number from 0 to 9 with a combination of 4 binary digits. The reflected binary code, also known as the Gray code, is a binary numeral system where two successive values differ in only 1 bit. The Gray code is very important in digital optical communication as it is used to prevent spurious output from optical switches as well as to facilitate error correction in digital communications in an optical domain. Here in this communication, the author proposes an all-optical frequency encoded method of "decimal to binary BCD," "binary to Gray," and "Gray to binary" data conversion using the high-speed switching actions of semiconductor optical amplifiers. To convert decimal numbers to a binary form, a frequency encoding technique is adopted to represent the two binary bits, 0 and 1. The frequency encoding technique offers advantages over conventional encoding techniques in terms of a lower probability of bit errors and greater reliability. Here the author has exploited the polarization switch made of a semiconductor optical amplifier (SOA) and the property of nonlinear rotation of the state of polarization of the probe beam in the SOA for frequency conversion to develop the method of frequency encoded data conversion. © 2011 Optical Society of America
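Independently of the optical implementation, the underlying binary ↔ Gray mapping is simple to state in software. A minimal sketch (our own, unrelated to the author's SOA-based method):

```python
def binary_to_gray(b):
    """Reflected binary (Gray) code: adjacent values differ in exactly one bit."""
    return b ^ (b >> 1)

def gray_to_binary(g):
    """Invert the Gray map by cascading XORs of right-shifted copies."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

print([binary_to_gray(i) for i in range(8)])  # [0, 1, 3, 2, 6, 7, 5, 4]
print(all(gray_to_binary(binary_to_gray(i)) == i for i in range(256)))  # True
```

The single-bit-change property visible in the printed sequence is exactly what suppresses spurious transients when an optical (or electronic) switch settles between adjacent values.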
Simulations of linear and Hamming codes using SageMath
Timur, Tahta D.; Adzkiya, Dieky; Soleha
2018-03-01
Digital data transmission over a noisy channel can distort the message being transmitted. The goal of coding theory is to ensure data integrity, that is, to find out if and where this noise has distorted the message and what the original message was. Data transmission consists of three stages: encoding, transmission, and decoding. Linear and Hamming codes are the codes that we discuss in this work, where the encoding algorithms are parity check and generator matrix, and the decoding algorithms are nearest neighbor and syndrome. We aim to show that we can simulate these processes using SageMath software, which has a built-in class for coding theory in general and linear codes in particular. First we consider the message as a binary vector of size k. This message is then encoded to a vector of size n using the given algorithms. A noisy channel with a particular error probability is then created, where the transmission takes place. The last task is decoding, which corrects and reverts the received message back to the original message whenever possible, that is, if the number of errors that occurred is smaller than or equal to the correcting radius of the code. In this paper we use two types of data for the simulations, namely vector and text data.
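The encode/correct cycle the paper simulates in SageMath can be sketched in plain Python for the [7,4] Hamming code, using a systematic generator matrix and syndrome decoding. This is an illustrative analogue of the workflow, not the paper's SageMath code:

```python
# Hamming [7,4] in systematic form: G = [I | P], H = [P^T | I] over GF(2)
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def encode(msg):
    """Multiply the length-4 message by G over GF(2)."""
    return [sum(m & g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def decode(word):
    """Syndrome decoding: correct up to one flipped bit, return the message."""
    syndrome = [sum(h & w for h, w in zip(row, word)) % 2 for row in H]
    if any(syndrome):
        # a single error at position i makes the syndrome equal column i of H
        err = [list(col) for col in zip(*H)].index(syndrome)
        word = word[:]
        word[err] ^= 1
    return word[:4]  # systematic code: the message is the first 4 bits

msg = [1, 0, 1, 1]
codeword = encode(msg)
codeword[2] ^= 1            # the noisy channel flips one bit
print(decode(codeword) == msg)  # True
```

In SageMath the same experiment is a few lines with its built-in linear-code classes; the point here is only to make the three stages (encode, corrupt, decode) concrete.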
Spread-spectrum communication using binary spatiotemporal chaotic codes
International Nuclear Information System (INIS)
Wang Xingang; Zhan Meng; Gong Xiaofeng; Lai, C.H.; Lai, Y.-C.
2005-01-01
We propose a scheme to generate binary code for baseband spread-spectrum communication by using a chain of coupled chaotic maps. We compare the performances of this type of spatiotemporal chaotic code with those of a conventional code used frequently in digital communication, the Gold code, and demonstrate that our code is comparable or even superior to the Gold code in several key aspects: security, bit error rate, code generation speed, and the number of possible code sequences. As the field of communicating with chaos faces doubts in terms of performance comparison with conventional digital communication schemes, our work gives a clear message that communicating with chaos can be advantageous and it deserves further attention from the nonlinear science community
Sparsity in Linear Predictive Coding of Speech
Giacobello, Daniele
2010-01-01
This thesis deals with developing improved techniques for speech coding based on the recent developments in sparse signal representation. In particular, this work is motivated by the need to address some of the limitations of the well- known linear prediction (LP) model currently applied in many modern speech coders. In the first part of the thesis, we provide an overview of Sparse Linear Prediction, a set of speech processing tools created by introducing sparsity constraints into the LP fr...
Improvements on binary coding using parallel computing
Fuentes, P A; Quintas, D G
2011-01-01
The error-correcting codes have many applications in fields related to communications. This paper tackles some partition algorithms to optimize the data encoding. These algorithms are based on sliding windows and allow for a parallel implementation. We analyse them and then expound a comparative study between the different partition methods that we propose.
Kinetics of clusters in a binary linear system
Hilhorst, H.J.
We consider the stochastically time-dependent behaviour of a binary linear chain of N units at temperature T and in an external field H. The kinetics is described in terms of clusters (sequences) of specified numbers of units in the same state. A coarse-grained master equation for the cluster
Linear distance coding for image classification.
Wang, Zilei; Feng, Jiashi; Yan, Shuicheng; Xi, Hongsheng
2013-02-01
The feature coding-pooling framework is shown to perform well in image classification tasks, because it can generate discriminative and robust image representations. The unavoidable information loss incurred by feature quantization in the coding process and the undesired dependence of pooling on the image spatial layout, however, may severely limit the classification. In this paper, we propose a linear distance coding (LDC) method to capture the discriminative information lost in traditional coding methods while simultaneously alleviating the dependence of pooling on the image spatial layout. The core of the LDC lies in transforming local features of an image into more discriminative distance vectors, where the robust image-to-class distance is employed. These distance vectors are further encoded into sparse codes to capture the salient features of the image. The LDC is theoretically and experimentally shown to be complementary to the traditional coding methods, and thus their combination can achieve higher classification accuracy. We demonstrate the effectiveness of LDC on six data sets, two of each of three types (specific object, scene, and general object), i.e., Flower 102 and PFID 61, Scene 15 and Indoor 67, Caltech 101 and Caltech 256. The results show that our method generally outperforms the traditional coding methods, and achieves or is comparable to the state-of-the-art performance on these data sets.
Code excited linear prediction codec for electrocardiogram.
Banik, Shubhadeep; Martis, Roshan; Nayak, Dayananda
2004-01-01
In this paper we propose a CELP ECG codec for medical telemetry. The encoding algorithm is based on code-excited linear prediction (CELP). The general framework proposed is: QRS detection, calculation of the LPC parameters, generation of the residual error signal, codebook generation, and MSE (mean square error) search. The codebook is generated for the residual error. The indices of the codebook and the corresponding LPC parameters are transmitted where the minimum MSE occurs. A replica of the transmitter codebook is present at the receiver. Corresponding to the received index value, residual error coefficients are retrieved from the receiver codebook. The ECG signal is reconstructed from the retrieved code word.
Comparison Searching Process of Linear, Binary and Interpolation Algorithm
Rahim, Robbi; Nurarif, Saiful; Ramadhan, Mukhlis; Aisyah, Siti; Purba, Windania
2017-12-01
Searching is a process that cannot be avoided in transaction and communication processing. Many search algorithms can be used to facilitate the search; linear, binary, and interpolation algorithms are some that can be utilized. The three algorithms were compared by testing searches over data of different lengths with a pseudo-process approach, and the result achieved is that the interpolation algorithm is slightly faster than the other two algorithms.
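For reference, minimal versions of the three algorithms compared in the paper can be sketched as follows (our own illustrations):

```python
def linear_search(a, x):
    """O(n) scan; works on unsorted data."""
    for i, v in enumerate(a):
        if v == x:
            return i
    return -1

def binary_search(a, x):
    """O(log n) halving; requires a to be sorted."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == x:
            return mid
        if a[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def interpolation_search(a, x):
    """Sorted, roughly uniformly distributed keys; O(log log n) on average."""
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= x <= a[hi]:
        if a[hi] == a[lo]:
            pos = lo
        else:
            # estimate the position from the key's value, not the midpoint
            pos = lo + (x - a[lo]) * (hi - lo) // (a[hi] - a[lo])
        if a[pos] == x:
            return pos
        if a[pos] < x:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1

data = list(range(0, 100, 3))  # sorted, uniformly spaced keys
print(linear_search(data, 42), binary_search(data, 42),
      interpolation_search(data, 42))  # 14 14 14
```

On uniformly spaced keys like `data`, the interpolation probe lands on the target almost immediately, which matches the paper's observation that it is slightly faster than the other two.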
Blind Recognition of Binary BCH Codes for Cognitive Radios
Directory of Open Access Journals (Sweden)
Jing Zhou
2016-01-01
Full Text Available A novel algorithm for blind recognition of Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed to solve the problem of Adaptive Coding and Modulation (ACM) in cognitive radio systems. The recognition algorithm is based on soft decision situations. The code length is first estimated by comparing the Log-Likelihood Ratios (LLRs) of the syndromes, which are obtained according to the minimum binary parity check matrices of different primitive polynomials. After that, by comparing the LLRs of different minimal polynomials, the code roots and generator polynomial are reconstructed. When compared with some previous approaches, our algorithm yields better performance even at very low Signal-to-Noise Ratios (SNRs) with lower calculation complexity. Simulation results show the efficiency of the proposed algorithm.
Sparsity in Linear Predictive Coding of Speech
DEFF Research Database (Denmark)
Giacobello, Daniele
of high-order sparse predictors. These predictors, by modeling efficiently the spectral envelope and the harmonic components with very few coefficients, have direct applications in speech processing, engendering a joint estimation of short-term and long-term predictors. We also give preliminary results...... one with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also lead us to develop the concept...... of the effectiveness of their application in audio processing. The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed...
Radiation preparation and application of the linear thermosensitive binary copolymers
Energy Technology Data Exchange (ETDEWEB)
Yi Min; Li Jun; Zhang Jianbo; Jiang Guilin; Qin Jianhua; Ha Hongfei
1998-06-01
Linear poly(NIPAAm-co-X), with X being AAc or 4-VP, was synthesized by means of γ-radiation induced polymerization in tetrahydrofuran (THF). The binary copolymers obtained are water-soluble and temperature-sensitive. It was also found that raising the pH leads to a higher LCST when X is AAc. The prepared copolymers were used to concentrate metal ions, such as UO₂²⁺, RE³⁺ and Cr(VI), in dilute aqueous solution, and showed an obvious concentration effect. The conditions of ion concentration are given and the mechanism is discussed preliminarily.
Pal, Amrindra; Kumar, Santosh; Sharma, Sandeep
2017-05-01
A binary-to-octal and octal-to-binary code converter is a device that routes digital information from many inputs to many outputs. Any application of a combinational logic circuit can be implemented by using external gates. In this paper, a binary-to-octal and octal-to-binary code converter is proposed using the electro-optic effect inside lithium-niobate based Mach-Zehnder interferometers (MZIs). The MZI structures have a powerful capability to switch an optical input signal to a desired output port. The paper presents a mathematical description of the proposed device and thereafter simulation using MATLAB. The study is verified using the beam propagation method (BPM).
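The logical conversion that the proposed optical device implements reduces to grouping bits in threes. A software sketch of the truth table (our own illustration, unrelated to the MZI implementation):

```python
def binary_to_octal(bits):
    """Group bits into threes from the right; each group is one octal digit."""
    bits = bits.zfill((len(bits) + 2) // 3 * 3)  # pad to a multiple of 3
    return ''.join(str(int(bits[i:i + 3], 2)) for i in range(0, len(bits), 3))

def octal_to_binary(digits):
    """Expand each octal digit to its 3-bit binary pattern."""
    return ''.join(format(int(d, 8), '03b') for d in digits)

print(binary_to_octal('110101'))  # '65'
print(octal_to_binary('65'))      # '110101'
```

Because each octal digit maps to exactly one 3-bit group, the converter needs no carries, which is what makes a purely combinational (or all-optical switching) realization possible.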
SYMBOL LEVEL DECODING FOR DUO-BINARY TURBO CODES
Directory of Open Access Journals (Sweden)
Yogesh Beeharry
2017-05-01
Full Text Available This paper investigates the performance of three different symbol level decoding algorithms for Duo-Binary Turbo codes. Explicit details of the computations involved in the three decoding techniques, and a computational complexity analysis are given. Simulation results with different couple lengths, code-rates, and QPSK modulation reveal that the symbol level decoding with bit-level information outperforms the symbol level decoding by 0.1 dB on average in the error floor region. Moreover, a complexity analysis reveals that symbol level decoding with bit-level information reduces the decoding complexity by 19.6 % in terms of the total number of computations required for each half-iteration as compared to symbol level decoding.
DNA as a Binary Code: How the Physical Structure of Nucleotide Bases Carries Information
McCallister, Gary
2005-01-01
The DNA triplet code also functions as a binary code. Because double-ring compounds cannot bind to double-ring compounds in the DNA code, the sequence of bases classified simply as purines or pyrimidines can encode for smaller groups of possible amino acids. This is an intuitive approach to teaching the DNA code. (Contains 6 figures.)
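The purine/pyrimidine binary reading described above can be illustrated in a few lines (a teaching sketch; the function name is ours). Adenine and guanine are the double-ring purines; cytosine and thymine are the single-ring pyrimidines:

```python
# Purines (A, G) are double-ring bases; pyrimidines (C, T) are single-ring.
PURINE = {'A', 'G'}

def purine_pyrimidine_code(seq):
    """Collapse a DNA sequence to the binary purine(1)/pyrimidine(0) code."""
    return [1 if base in PURINE else 0 for base in seq.upper()]

print(purine_pyrimidine_code("GATTACA"))  # [1, 1, 0, 0, 1, 0, 1]
```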
Lamellar-in-lamellar structure of binary linear multiblock copolymers
Klymko, T.; Subbotin, A.; ten Brinke, G.
2008-01-01
A theoretical description of the lamellar-in-lamellar self-assembly of binary A-b-(B-b-A)(m)-b-B-b-A multiblock copolymers in the strong segregation limit is presented. The essential difference between this binary multiblock system and the previously considered C-b-(B-b-A)(m)-b-B-b-C ternary
New linear codes from matrix-product codes with polynomial units
DEFF Research Database (Denmark)
Hernando, Fernando; Ruano Benito, Diego
2010-01-01
A new construction of codes from old ones is considered, it is an extension of the matrix-product construction. Several linear codes that improve the parameters of the known ones are presented.
Riemann-Roch Spaces and Linear Network Codes
DEFF Research Database (Denmark)
Hansen, Johan P.
We construct linear network codes utilizing algebraic curves over finite fields and certain associated Riemann-Roch spaces and present methods to obtain their parameters. In particular we treat the Hermitian curve and the curves associated with the Suzuki and Ree groups all having the maximal...... number of points for curves of their respective genera. Linear network coding transmits information in terms of a basis of a vector space and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang introduced...... in the above metric making them suitable for linear network coding....
International Nuclear Information System (INIS)
Matijevič, Gal; Prša, Andrej; Orosz, Jerome A.; Welsh, William F.; Bloemen, Steven; Barclay, Thomas
2012-01-01
We present an automated classification of 2165 Kepler eclipsing binary (EB) light curves that accompanied the second Kepler data release. The light curves are classified using locally linear embedding, a general nonlinear dimensionality reduction tool, into morphology types (detached, semi-detached, overcontact, ellipsoidal). The method, related to a more widely used principal component analysis, produces a lower-dimensional representation of the input data while preserving local geometry and, consequently, the similarity between neighboring data points. We use this property to reduce the dimensionality in a series of steps to a one-dimensional manifold and classify light curves with a single parameter that is a measure of 'detachedness' of the system. This fully automated classification correlates well with the manual determination of morphology from the data release, and also efficiently highlights any misclassified objects. Once a lower-dimensional projection space is defined, the classification of additional light curves runs in a negligible time and the method can therefore be used as a fully automated classifier in pipeline structures. The classifier forms a tier of the Kepler EB pipeline that pre-processes light curves for the artificial intelligence based parameter estimator.
On Rational Interpolation-Based List-Decoding and List-Decoding Binary Goppa Codes
DEFF Research Database (Denmark)
Beelen, Peter; Høholdt, Tom; Nielsen, Johan Sebastian Rosenkilde
2013-01-01
a new application of the Wu list decoder by decoding irreducible binary Goppa codes up to the binary Johnson radius. Finally, we point out a connection between the governing equations of the Wu algorithm and the Guruswami–Sudan algorithm, immediately leading to equality in the decoding range...
Neural coding of binary mixtures in a structurally related odorant pair
Cruz, Georgina; Lowe, Graeme
2013-01-01
The encoding of odorant mixtures by olfactory sensory neurons depends on molecular interactions at peripheral receptors. However, the pharmacological basis of these interactions is not well defined. Both competitive and noncompetitive mechanisms of receptor binding and activation, or suppression, could contribute to coding. We studied this by analyzing responses of olfactory bulb glomeruli evoked by a pair of structurally related odorants, eugenol (EG) and methyl isoeugenol (MIEG). Fluorescence imaging in synaptopHluorin (spH) mice revealed that EG and MIEG evoked highly overlapped glomerular inputs, increasing the likelihood of mixture interactions. Glomerular responses to binary mixtures of EG and MIEG mostly showed hypoadditive interactions at intermediate and high odorant concentrations, with a few near threshold responses showing hyperadditivity. Dose-response profiles were well fitted by a model of two odorants competitively binding and activating a shared receptor linked to a non-linear transduction cascade. We saw no evidence of non-competitive mechanisms. PMID:23386975
Superlattice configurations in linear chain hydrocarbon binary mixtures
Indian Academy of Sciences (India)
Unknown
of n-C28H58 hydrocarbon, through an angle mθ, where m = 1, 2, 3 … and angle θ has an average value of 3.3°. Keywords: long-chain alkanes; binary mixtures; superlattices; discrete orientational changes.
Lamellar-in-lamellar structure of binary linear multiblock copolymers.
Klymko, T; Subbotin, A; Ten Brinke, G
2008-09-21
A theoretical description of the lamellar-in-lamellar self-assembly of binary A-b-(B-b-A)(m)-b-B-b-A multiblock copolymers in the strong segregation limit is presented. The essential difference between this binary multiblock system and the previously considered C-b-(B-b-A)(m)-b-B-b-C ternary multiblock copolymer system is discussed. Considering the situation with long end blocks, the free energy of the lamellar-in-lamellar self-assembled state is analyzed as a function of the number k of "thin" internal lamellar domains for different numbers m of repeating (B-b-A) units and different values of the Flory-Huggins chi(AB) interaction parameter. The theoretical predictions are in excellent agreement with the available experimental data.
Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai
2015-12-01
Local coordinate coding (LCC) is a framework for approximating a Lipschitz-smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that heavily determines the nonlinear approximation ability, posing two main challenges: 1) locality, so that faraway anchors have smaller influence on the current data point, and 2) flexibility, balancing the reconstruction of the current data point against locality. In this paper, we address the problem through a theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local Student coding, and propose local Laplacian coding (LPC) to achieve both locality and flexibility. We apply LPC in locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed method.
Cross-indexing of binary SIFT codes for large-scale image search.
Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi
2014-05-01
In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost of storage. Moreover, it benefits computational efficiency, since similarity can be measured efficiently by Hamming distance. In this paper, we propose a novel flexible scale-invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. In addition, we propose a new search strategy that finds target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. Experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
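The efficiency argument in the abstract above rests on Hamming distance being a word-level operation: XOR two packed binary codes and count the set bits. A minimal sketch (function name is ours, not from the paper):

```python
def hamming_distance(a: int, b: int) -> int:
    # XOR leaves a 1 exactly where the two codes differ; count those bits.
    return bin(a ^ b).count("1")

# Two 8-bit codes differing in two positions.
print(hamming_distance(0b10110100, 0b10011100))  # -> 2
```

On modern CPUs the same idea maps to a hardware popcount instruction, which is why binary codes make similarity search over millions of images tractable.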
An assessment of estimation methods for generalized linear mixed models with binary outcomes.
Capanu, Marinela; Gönen, Mithat; Begg, Colin B
2013-11-20
Two main classes of methodology have been developed for addressing the analytical intractability of generalized linear mixed models: likelihood-based methods and Bayesian methods. Likelihood-based methods such as the penalized quasi-likelihood approach have been shown to produce biased estimates, especially for binary clustered data with small cluster sizes. More recent methods using adaptive Gaussian quadrature perform well but can be overwhelmed by problems with large numbers of random effects, and efficient algorithms to better handle these situations have not yet been integrated in standard statistical packages. Bayesian methods, although they have good frequentist properties when the model is correct, are known to be computationally intensive and to require specialized code, limiting their use in practice. In this article, we introduce a modification of the hybrid approach of Capanu and Begg, 2011, Biometrics 67, 371-380, as a bridge between the likelihood-based and Bayesian approaches by employing Bayesian estimation for the variance components followed by Laplacian estimation for the regression coefficients. We investigate its performance as well as that of several likelihood-based methods in the setting of generalized linear mixed models with binary outcomes. We apply the methods to three datasets and conduct simulations to illustrate their properties. Simulation results indicate that for moderate to large numbers of observations per random effect, adaptive Gaussian quadrature and the Laplacian approximation are very accurate, with adaptive Gaussian quadrature preferable as the number of observations per random effect increases. The hybrid approach is overall similar to the Laplace method, and it can be superior for data with very sparse random effects. Copyright © 2013 John Wiley & Sons, Ltd.
Osculating Spaces of Varieties and Linear Network Codes
DEFF Research Database (Denmark)
Hansen, Johan P.
2013-01-01
We present a general theory to obtain good linear network codes utilizing the osculating nature of algebraic varieties. In particular, we obtain from the osculating spaces of Veronese varieties explicit families of equidimensional vector spaces, in which any pair of distinct vector spaces intersects in the same dimension. Linear network coding transmits information in terms of a basis of a vector space and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang introduced a metric on the set of vector spaces and showed that a minimal distance decoder for this metric achieves correct decoding if the dimension of the intersection of the transmitted and received vector spaces is sufficiently large. The obtained osculating spaces of Veronese varieties are equidistant in the above metric. The parameters of the resulting linear network codes...
Katti, Rohan; Prince, Shanthi
2017-02-01
Optical 4-bit binary-to-gray and gray-to-binary code conversion has been demonstrated. The encoders are designed by implementing XOR gates in the optical domain. Phase modulation in a Mach-Zehnder interferometer has been exploited to achieve results at considerably high data rates of up to 60 Gbps. Also, the designs, being reversible in nature, will lead to lower power consumption when embedded in an optical network. Performance parameters of the design, such as Q factor and extinction ratio, are analyzed based on simulation results carried out using the comprehensive design suite OptiSystem-13.
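The logical conversion the optical XOR gates implement is the classic reflected binary (Gray) code mapping, which can be sketched in a few lines (function names are ours):

```python
def binary_to_gray(n: int) -> int:
    # Adjacent integers map to Gray codewords differing in exactly one bit.
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    # Invert by XOR-folding every right shift of the Gray word.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

print(bin(binary_to_gray(0b0100)))  # -> 0b110
```

For the 4-bit case demonstrated above, each Gray bit is the XOR of two adjacent binary bits, which is exactly why a cascade of XOR gates suffices.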
Representation and processing of structures with binary sparse distributed codes
Rachkovskij, Dmitri A.
1999-01-01
The schemes for compositional distributed representations include those allowing on-the-fly construction of fixed dimensionality codevectors to encode structures of various complexity. Similarity of such codevectors takes into account both structural and semantic similarity of represented structures. In this paper we provide a comparative description of sparse binary distributed representation developed in the frames of the Associative-Projective Neural Network architecture and more well-know...
Peer-Assisted Content Distribution with Random Linear Network Coding
DEFF Research Database (Denmark)
Hundebøll, Martin; Ledet-Pedersen, Jeppe; Sluyterman, Georg
2014-01-01
-to-peer system, which applies random linear network coding. We focus on an experimental evaluation of the performance on 36 real nodes. The evaluation shows that BRONCO outperforms regular HTTP transfers and, with an extremely simple protocol structure, performs equivalently to BitTorrent distribution...
Directory of Open Access Journals (Sweden)
Yimeng Zhang
2013-05-01
Full Text Available A method for blind recognition of the coding parameters of binary Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed in this paper. We consider an intelligent communication receiver that can blindly recognize the coding parameters of the received data stream. The only knowledge is that the stream is encoded using binary BCH codes, while the coding parameters are unknown. The problem arises in the context of non-cooperative communications or adaptive coding and modulation (ACM) for cognitive radio networks. The recognition process includes two major procedures: code-length estimation and generator-polynomial reconstruction. A hard-decision method has been proposed in previous literature. In this paper we propose a recognition approach for soft-decision situations with binary phase-shift keying (BPSK) modulation and additive-white-Gaussian-noise (AWGN) channels. The code length is estimated by maximizing the root information dispersion entropy function. We then search for the code roots to reconstruct the primitive and generator polynomials. By utilizing the soft output of the channel, the recognition performance is improved, and simulations show the efficiency of the proposed algorithm.
Linear and nonlinear verification of gyrokinetic microstability codes
Bravenec, R. V.; Candy, J.; Barnes, M.; Holland, C.
2011-12-01
Verification of nonlinear microstability codes is a necessary step before comparisons or predictions of turbulent transport in toroidal devices can be justified. By verification we mean demonstrating that a code correctly solves the mathematical model upon which it is based. Some degree of verification can be accomplished indirectly from analytical instability threshold conditions, nonlinear saturation estimates, etc., for relatively simple plasmas. However, verification for experimentally relevant plasma conditions and physics is beyond the realm of analytical treatment and must rely on code-to-code comparisons, i.e., benchmarking. The premise is that the codes are verified for a given problem or set of parameters if they all agree within a specified tolerance. True verification requires comparisons for a number of plasma conditions, e.g., different devices, discharges, times, and radii. Running the codes and keeping track of linear and nonlinear inputs and results for all conditions could be prohibitive unless there was some degree of automation. We have written software to do just this and have formulated a metric for assessing agreement of nonlinear simulations. We present comparisons, both linear and nonlinear, between the gyrokinetic codes GYRO [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] and GS2 [W. Dorland, F. Jenko, M. Kotschenreuther, and B. N. Rogers, Phys. Rev. Lett. 85, 5579 (2000)]. We do so at the mid-radius for the same discharge as in earlier work [C. Holland, A. E. White, G. R. McKee, M. W. Shafer, J. Candy, R. E. Waltz, L. Schmitz, and G. R. Tynan, Phys. Plasmas 16, 052301 (2009)]. The comparisons include electromagnetic fluctuations, passing and trapped electrons, plasma shaping, one kinetic impurity, and finite Debye-length effects. Results neglecting and including electron collisions (Lorentz model) are presented. We find that the linear frequencies with or without collisions agree well between codes, as do the time averages of
Estimating the Eutectic Composition of Simple Binary Alloy System Using Linear Geometry
Directory of Open Access Journals (Sweden)
Muhammed Olawale Hakeem AMUDA
2008-06-01
Full Text Available A simple linear equation was developed and applied to a hypothetical binary equilibrium diagram to evaluate the eutectic composition of a binary alloy system. Solution of the equations revealed that the eutectic compositions of the case-study Pb–Sn, Bi–Cd and Al–Si alloys are 39.89% Pb / 60.11% Sn, 58.01% Bi / 41.99% Cd, and 90.94% Al / 9.06% Si, respectively. These values are very close to experimental values. The percent deviation of analytical values from experimental values ranged between 2.87% and 5% for the three binary systems considered, except for the Al–Si alloy, in which the percent deviation for the silicon element was 22%. It is concluded that the equation of a straight line can be used to predict the eutectic composition of simple binary alloys within a tolerable experimental deviation range of 2.5%.
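The geometric idea in the abstract above is that each liquidus can be approximated by a straight line and the eutectic lies at their intersection. A minimal sketch with purely synthetic numbers (the parametrization and values are ours, not the paper's):

```python
def eutectic_point(T_A, m_A, T_B, m_B):
    # Intersect the two liquidus lines T = T_A + m_A*x and T = T_B + m_B*x,
    # where x is the fraction of the second component (0..1).
    x_e = (T_B - T_A) / (m_A - m_B)
    return x_e, T_A + m_A * x_e

# Hypothetical system: component A melts at 300, B-line has intercept 100.
x, T = eutectic_point(300.0, -200.0, 100.0, 100.0)
print(round(x, 3), round(T, 1))  # -> 0.667 166.7
```

Real liquidus curves are not straight, which is consistent with the few-percent deviations from experimental values reported above.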
Riemann-Roch Spaces and Linear Network Codes
DEFF Research Database (Denmark)
Hansen, Johan P.
We construct linear network codes utilizing algebraic curves over finite fields and certain associated Riemann-Roch spaces and present methods to obtain their parameters. In particular we treat the Hermitian curve and the curves associated with the Suzuki and Ree groups, all having the maximal number of points for curves of their respective genera. Linear network coding transmits information in terms of a basis of a vector space and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang introduced a metric on the set of vector spaces and showed that a minimal distance decoder for this metric achieves correct decoding if the dimension of the intersection of the transmitted and received vector spaces is sufficiently large. The vector spaces in our construction have minimal distance bounded from below...
Deep Learning Methods for Improved Decoding of Linear Codes
Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair
2018-02-01
The problem of low-complexity, close-to-optimal channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close-to-optimal decoder of short BCH codes.
Power Allocation Optimization: Linear Precoding Adapted to NB-LDPC Coded MIMO Transmission
Directory of Open Access Journals (Sweden)
Tarek Chehade
2015-01-01
Full Text Available In multiple-input multiple-output (MIMO) transmission systems, the channel state information (CSI) at the transmitter can be used to add linear precoding to the transmitted signals in order to improve the performance and the reliability of the transmission system. This paper investigates how to properly join precoded closed-loop MIMO systems and nonbinary low-density parity-check (NB-LDPC) codes. The q elements of the Galois field GF(q) are directly mapped to q transmit symbol vectors. This allows NB-LDPC codes to perfectly fit with a MIMO precoding scheme, unlike binary LDPC codes. The new transmission model is detailed and studied for several linear precoders and various designed LDPC codes. We show that NB-LDPC codes are particularly well suited to be jointly used with precoding schemes based on the maximization of the minimum Euclidean distance (max-dmin) criterion. These results are theoretically supported by extrinsic information transfer (EXIT) analysis and are confirmed by numerical simulations.
Further results on binary convolutional codes with an optimum distance profile
DEFF Research Database (Denmark)
Johannesson, Rolf; Paaske, Erik
1978-01-01
Fixed binary convolutional codes are considered which are simultaneously optimal or near-optimal according to three criteria: namely, the distance profile d, the free distance d_∞, and the minimum number of weight-d_∞ paths. It is shown how the optimum distance profile criterion can be used to limit...
A Tough Call : Mitigating Advanced Code-Reuse Attacks at the Binary Level
Veen, Victor Van Der; Goktas, Enes; Contag, Moritz; Pawoloski, Andre; Chen, Xi; Rawat, Sanjay; Bos, Herbert; Holz, Thorsten; Athanasopoulos, Ilias; Giuffrida, Cristiano
2016-01-01
Current binary-level Control-Flow Integrity (CFI) techniques are weak in determining the set of valid targets for indirect control flow transfers on the forward edge. In particular, the lack of source code forces existing techniques to resort to a conservative address-taken policy that
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
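The high-SNR rule of thumb quoted in the abstract, P_b ≈ (d_H/N)·P_s, is simple enough to sketch directly; the function name and example numbers below are ours, chosen only to illustrate the arithmetic:

```python
def approx_bit_error_prob(d_H, N, P_s):
    # High-SNR approximation for systematic encoding: each block error
    # corrupts roughly d_H of the N transmitted bits.
    return (d_H / N) * P_s

# Illustration with the (7,4) Hamming code (d_H = 3) at P_s = 1e-4.
print(approx_bit_error_prob(3, 7, 1e-4))
```

The abstract's point is precisely that this approximation is tied to systematic encoding; for other generator-matrix forms the bit error rate can differ.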
An efficient algorithm for encoding and decoding of raptor codes over the binary erasure channel
Zhang, Ya-Hang; Cheng, Bo-Wen; Zou, Guang-Nan; Wen, Wei-Ping; Qing, Si-Han
2009-12-01
As the most advanced rateless fountain codes, systematic Raptor codes have been adopted by the 3GPP standard as a forward error correction scheme in Multimedia Broadcast/Multicast Services (MBMS). They have been shown to be an efficient channel coding technique that guarantees high symbol diversity in overlay networks. The 3GPP standard outlined a time-efficient maximum-likelihood (ML) decoding scheme that can be implemented using Gaussian elimination. But when the number of encoding symbols K grows large, Gaussian elimination must deal with a large matrix at a cost of O(K^3) binary arithmetic operations, so the larger K becomes, the worse the ML decoding scheme performs. This paper presents a more time-efficient encoding and decoding scheme, named the Rapid Raptor code, that maintains the same symbol-recovery performance. It will be shown that the proposed Rapid Raptor code scheme significantly improves the efficiency of traditional Raptor codes while maintaining the same performance.
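The ML decoding step the abstract refers to reduces to solving a binary linear system. A minimal, dense Gauss-Jordan sketch over GF(2) (all names are ours; this is the textbook O(K^3) procedure whose cost motivates the paper, not the paper's improved scheme):

```python
def solve_gf2(A, b):
    # Solve A x = b over GF(2) by Gauss-Jordan elimination.
    # A: list of rows of 0/1 values; b: list of 0/1. Returns x or None.
    n = len(A[0])
    rows = [row[:] + [rhs] for row, rhs in zip(A, b)]  # augmented rows
    pivot_cols, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue  # no pivot in this column
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[r])]
        pivot_cols.append(c)
        r += 1
    if any(row[n] for row in rows[r:]):
        return None  # inconsistent system
    x = [0] * n
    for i, c in enumerate(pivot_cols):
        x[c] = rows[i][n]
    return x
```

The triply nested work (columns x rows x row length) is where the O(K^3) binary operations come from as the generation size K grows.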
Dang, Qianyu; Mazumdar, Sati; Houck, Patricia R
2008-08-01
The generalized linear mixed model (GLIMMIX) provides a powerful technique to model correlated outcomes with different types of distributions. The model can now be easily implemented with SAS PROC GLIMMIX in version 9.1. For binary outcomes, linearization methods of penalized quasi-likelihood (PQL) or marginal quasi-likelihood (MQL) provide relatively accurate variance estimates for fixed effects. Using GLIMMIX based on these linearization methods, we derived formulas for power and sample size calculations for longitudinal designs with attrition over time. We found that the power and sample size estimates depend on the within-subject correlation and the size of random effects. In this article, we present tables of minimum sample sizes commonly used to test hypotheses for longitudinal studies. A simulation study was used to compare the results. We also provide a Web link to the SAS macro that we developed to compute power and sample sizes for correlated binary outcomes.
High-affinity single-domain binding proteins with a binary-code interface.
Koide, Akiko; Gilbreth, Ryan N; Esaki, Kaori; Tereshko, Valentina; Koide, Shohei
2007-04-17
High degrees of sequence and conformation complexity found in natural protein interaction interfaces are generally considered essential for achieving tight and specific interactions. However, it has been demonstrated that specific antibodies can be built by using an interface with a binary code consisting of only Tyr and Ser. This surprising result might be attributed to yet undefined properties of the antibody scaffold that uniquely enhance its capacity for target binding. In this work we tested the generality of the binary-code interface by engineering binding proteins based on a single-domain scaffold. We show that Tyr/Ser binary-code interfaces consisting of only 15-20 positions within a fibronectin type III domain (FN3; 95 residues) are capable of producing specific binding proteins (termed "monobodies") with a low-nanomolar K_d. A 2.35-Å X-ray crystal structure of a monobody in complex with its target, maltose-binding protein, and mutation analysis revealed dominant contributions of Tyr residues to binding as well as striking molecular mimicry of a maltose-binding protein substrate, beta-cyclodextrin, by the Tyr/Ser binary interface. This work suggests that an interaction interface with low chemical diversity but with significant conformational diversity is generally sufficient for tight and specific molecular recognition, providing fundamental insights into factors governing protein-protein interactions.
Random linear network coding for streams with unequally sized packets
DEFF Research Database (Denmark)
Taghouti, Maroua; Roetter, Daniel Enrique Lucani; Pedersen, Morten Videbæk
2016-01-01
State of the art Random Linear Network Coding (RLNC) schemes assume that data streams generate packets with equal sizes. This is an assumption that results in the highest efficiency gains for RLNC. A typical solution for managing unequal packet sizes is to zero-pad the smallest packets. However...... and decoding designs, focused on processing macro-symbols (composed by a number of symbols in the appropriate finite field) instead of full zero-padded packets. Our proposed schemes provide on-the-fly decoding strategies to manage heterogeneous packet sizes without the need for fragmentation or bundling...
Latency Performance of Encoding with Random Linear Network Coding
DEFF Research Database (Denmark)
Nielsen, Lars; Hansen, René Rydhof; Lucani Rötter, Daniel Enrique
2018-01-01
In this paper, we present a performance study of the impact of generation and symbol sizes on latency for encoding with Random Linear Network Coding (RLNC). This analysis is important for low latency applications of RLNC as well as data storage applications that use large blocks of data, where...... the encoding process can be parallelized based on system requirements to reduce data access time within the system. Using a counting argument, we focus on predicting the effect of changes of generation (number of original packets) and symbol size (number of bytes per data packet) configurations on the encoding...
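The RLNC encoding operation discussed in the two entries above is a random linear combination of a generation's packets. A toy sketch over GF(2) with equal-size packets (function name is ours; practical RLNC typically uses larger fields such as GF(2^8), and unequal packet sizes are exactly the complication the first entry addresses):

```python
import random

def rlnc_encode(generation, rng):
    # Draw one random GF(2) coefficient (0 or 1) per source packet,
    # then XOR together the selected payloads byte by byte.
    coeffs = [rng.randrange(2) for _ in generation]
    coded = bytearray(len(generation[0]))
    for c, packet in zip(coeffs, generation):
        if c:
            for i, byte in enumerate(packet):
                coded[i] ^= byte
    return coeffs, bytes(coded)
```

A receiver that collects enough coded packets with linearly independent coefficient vectors can recover the generation by solving the resulting linear system, which is where generation and symbol size drive encoding and decoding latency.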
Fréchet Metric for Space of Binary Coded Software
Directory of Open Access Journals (Sweden)
Masárová Renáta
2014-12-01
Full Text Available As stated in [7], binary coded computer programs can be viewed as a metric space, and can therefore be measured by a metric in the sense of metric space theory. This paper presents the proof that the Fréchet metric is a metric on the space of all sequences of elements of M = {0, 1}. It is therefore usable for building a system of software metrics based on metric space theory.
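One common form of the Fréchet metric on sequence spaces weights the i-th coordinate difference by 2^-(i+1), so the series converges for sequences of any length; the exact weighting convention used in the paper may differ, and the function name below is ours:

```python
def frechet_metric(x, y):
    # d(x, y) = sum_i 2^-(i+1) * |x_i - y_i| / (1 + |x_i - y_i|);
    # for binary sequences each differing position contributes 2^-(i+1) * 1/2.
    return sum(
        0.5 ** (i + 1) * abs(a - b) / (1 + abs(a - b))
        for i, (a, b) in enumerate(zip(x, y))
    )

print(frechet_metric([1, 0], [0, 0]))  # -> 0.25
```

The bounded per-coordinate term |x_i - y_i| / (1 + |x_i - y_i|) is what makes this a metric on sequences, matching the paper's use of it on binary coded software.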
The COBAIN (COntact Binary Atmospheres with INterpolation) Code for Radiative Transfer
Kochoska, Angela; Prša, Andrej; Horvat, Martin
2018-01-01
Standard binary star modeling codes make use of pre-existing solutions of the radiative transfer equation in stellar atmospheres. The various model atmospheres available today are consistently computed for single stars, under different assumptions - plane-parallel or spherical atmosphere approximation, local thermodynamical equilibrium (LTE) or non-LTE (NLTE), etc. However, they are nonetheless being applied to contact binary atmospheres by populating the surface corresponding to each component separately and neglecting any mixing that would typically occur at the contact boundary. In addition, single stellar atmosphere models do not take into account irradiance from a companion star, which can pose a serious problem when modeling close binaries. 1D atmosphere models are also solved under the assumption of an atmosphere in hydrodynamical equilibrium, which is not necessarily the case for contact atmospheres, as the potentially different densities and temperatures can give rise to flows that play a key role in the heat and radiation transfer. To resolve the issue of erroneous modeling of contact binary atmospheres using single star atmosphere tables, we have developed a generalized radiative transfer code for computation of the normal emergent intensity of a stellar surface, given its geometry and internal structure. The code uses a regular mesh of equipotential surfaces in a discrete set of spherical coordinates, which are then used to interpolate the values of the structural quantities (density, temperature, opacity) at any given point inside the mesh. The radiative transfer equation is numerically integrated in a set of directions spanning the unit sphere around each point and iterated until the intensity values for all directions and all mesh points converge within a given tolerance. We have found that this approach, albeit computationally expensive, is the only one that can reproduce the intensity distribution of the non-symmetric contact binary atmosphere and
Construction of Fixed Rate Non-Binary WOM Codes Based on Integer Programming
Fujino, Yoju; Wadayama, Tadashi
In this paper, we propose a construction of non-binary WOM (Write-Once-Memory) codes for WOM storages such as flash memories. The WOM codes discussed in this paper are fixed rate WOM codes where messages in a fixed alphabet of size $M$ can be sequentially written in the WOM storage at least $t^*$ times. In this paper, a WOM storage is modeled by a state transition graph. The proposed construction has the following two features. First, it includes a systematic method to determine the encoding regions in the state transition graph. Second, the proposed construction includes a labeling method for states by using integer programming. Several novel WOM codes for $q$-level flash memories with 2 cells are constructed by the proposed construction. They achieve the worst numbers of writes $t^*$ that meet the known upper bound in many cases. In addition, we constructed fixed rate non-binary WOM codes with the capability to reduce ICI (inter-cell interference) of flash cells. One of the advantages of the proposed construction is its flexibility. It can be applied to various storage devices, to various dimensions (i.e., numbers of cells), and to various kinds of additional constraints.
Random Linear Network Coding for 5G Mobile Video Delivery
Directory of Open Access Journals (Sweden)
Dejan Vukobratovic
2018-03-01
Full Text Available An exponential increase in mobile video delivery will continue with the demand for higher resolution, multi-view and large-scale multicast video services. The novel fifth-generation (5G) 3GPP New Radio (NR) standard will bring a number of new opportunities for optimizing video delivery across both 5G core and radio access networks. One of the promising approaches for video quality adaptation, throughput enhancement and erasure protection is the use of packet-level random linear network coding (RLNC). In this review paper, we discuss the integration of RLNC into the 5G NR standard, building upon the ideas and opportunities identified in 4G LTE. We explicitly identify and discuss in detail novel 5G NR features that provide support for RLNC-based video delivery in 5G, thus pointing to promising avenues for future research.
Weighted locality-constrained linear coding for lesion classification in CT images.
Yuan, Yixuan; Hoogi, Assaf; Beaulieu, Christopher F; Meng, Max Q-H; Rubin, Daniel L
2015-01-01
Computed tomography is a popular imaging modality for detecting abnormalities associated with abdominal organs such as the liver, kidney and uterus. In this paper, we propose a novel weighted locality-constrained linear coding (LLC) method followed by a weighted max-pooling method to classify liver lesions into three classes: cysts, metastases, and hemangiomas. We first divide the lesions into same-size patches. Then, we extract the raw features of all patches, apply principal component analysis (PCA), and use k-means to obtain a single LLC dictionary. Since the interior lesion patches and the boundary patches contribute different information to the image, we assign different weights to these two types of patches to obtain the LLC codes. Moreover, a weighted max-pooling approach is also proposed to further evaluate the importance of these two types of patches in feature pooling. Experiments on 109 images of liver lesions were carried out to validate the proposed method. The proposed method achieves a lesion classification accuracy of 96.33%, which appears to be superior to traditional image coding methods, the LLC method and the bag-of-words (BoW) method, and to traditional features, local binary pattern (LBP) features, uniform LBP and complete LBP, demonstrating that the proposed method provides better classification.
On models of the genetic code generated by binary dichotomic algorithms.
Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz
2015-02-01
In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). Such a BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only a few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
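The core operation described above, splitting the 64 codons into two disjoint classes of 32, can be sketched with a single binary question on each codon. The predicate shown is an illustrative example of a balanced question, not one of the paper's actual BDAs:

```python
from itertools import product

def dichotomic_partition(question):
    # Apply one binary question (a predicate on a codon string) to all
    # 64 RNA codons and split them into two disjoint classes.
    codons = ["".join(bases) for bases in product("ACGU", repeat=3)]
    yes = [c for c in codons if question(c)]
    no = [c for c in codons if not question(c)]
    return yes, no

# A balanced question -- "is the second base a purine (A or G)?" --
# splits the code table 32/32, as each BDA step does.
purine2, pyrimidine2 = dichotomic_partition(lambda c: c[1] in "AG")
print(len(purine2), len(pyrimidine2))  # -> 32 32
```

Applying k such questions in sequence refines the table into up to 2^k classes, which is how the models above reach anywhere from 2 to 64 classes.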
Di G. Sigalotti, L.; Klapp, J.
1997-03-01
A new second-order Eulerian code is compared with a version of the TREESPH code formulated by Hernquist & Katz (1989ApJS...70..419H) for the standard isothermal collapse test. The results indicate that both codes produce a very similar evolution ending with the formation of a protostellar binary system. Contrary to previous first-order calculations, the binary forms by direct fragmentation, i.e., without the occurrence of an intermediate bar configuration. A similar trend was also found in recent second-order Eulerian calculations (Myhill & Boss 1993ApJS...89..345M), suggesting that it is a result of the decreased numerical diffusion associated with the new second-order schemes. The results also have implications for the differences between the finite difference methods and the particle method SPH, raised by Monaghan & Lattanzio (1986A&A...158..207M) for this problem. In particular, the Eulerian calculation does not result in a run-away collapse of the fragments, and as found in the TREESPH evolution, they also show a clear tendency to get closer together. In agreement with previous SPH calculations (Monaghan & Lattanzio 1986A&A...158..207M), the results of the long term evolution with code TREESPH show that the gravitational interaction between the two fragments may become important, and eventually induce the binary to coalesce. However, most recent SPH calculations (Bate, Bonnell & Price 1995MNRAS.277..362B) indicate that the two fragments, after having reached a minimum separation distance, do not merge but continue to orbit each other.
Binary Large Object-Based Approach for QR Code Detection in Uncontrolled Environments
Directory of Open Access Journals (Sweden)
Omar Lopez-Rincon
2017-01-01
Quick Response (QR) barcode detection in non-arbitrary environments is still a challenging task despite many existing applications for finding 2D symbols. The main disadvantage of recent applications for QR code detection is low performance for rotated and distorted single or multiple symbols in images with variable illumination and presence of noise. In this paper, a particular solution for QR code detection in uncontrolled environments is presented. The proposal consists in recognizing geometrical features of the QR code using a binary large object (BLOB)-based algorithm with subsequent iterative filtering of QR symbol position detection patterns, which does not require the complex processing and classifier training frequently used for these purposes. The high precision and speed are achieved by adaptive threshold binarization of integral images. In contrast to well-known scanners, which fail to detect QR codes with medium to strong blurring, significant nonuniform illumination, considerable symbol deformations, and noise, the proposed technique provides a high recognition rate of 80%–100% at a speed compatible with real-time applications. In particular, the speed varies from 200 ms to 800 ms per single or multiple QR codes detected simultaneously in images with resolutions from 640 × 480 to 4080 × 2720, respectively.
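The adaptive-threshold-on-integral-images step that the abstract credits for the detector's speed can be sketched as follows. The window size and bias factor here are illustrative values, not the published detector's parameters:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[:r, :c]."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def adaptive_binarize(img, window=15, bias=0.92):
    """Binarize each pixel against `bias` times its local mean.
    The local mean over a clipped window is computed in O(1) per pixel
    from the integral image; `window` and `bias` are illustrative."""
    h, w = img.shape
    half = window // 2
    ii = integral_image(img.astype(np.float64))
    out = np.empty((h, w), dtype=np.uint8)
    for r in range(h):
        r0, r1 = max(0, r - half), min(h, r + half + 1)
        for c in range(w):
            c0, c1 = max(0, c - half), min(w, c + half + 1)
            area = (r1 - r0) * (c1 - c0)
            s = ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
            out[r, c] = 0 if img[r, c] * area < bias * s else 1
    return out

img = np.full((20, 20), 100.0)
img[10, 10] = 0.0           # one dark pixel on a bright background
b = adaptive_binarize(img)  # the dark pixel binarizes to 0, the rest to 1
```

Because the window sum comes from four lookups in the summed-area table, the cost per pixel is independent of the window size, which is what makes this thresholding suitable for real-time use.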
Short binary convolutional codes with maximal free distance for rates 2/3 and 3/4
DEFF Research Database (Denmark)
Paaske, Erik
1974-01-01
A search procedure is developed to find good short binary (N, N−1) convolutional codes. It uses simple rules to discard from the complete ensemble of codes a large fraction whose free distance d_free either cannot achieve the maximum value or is equal to the d_free of some code in the remaining set. Further, the search among the remaining codes is started in a subset in which we expect the possibility of finding codes with large values of d_free to be good. A number of short, optimum (in the sense of maximizing d_free), rate-2/3 and 3/4 codes found by the search procedure are listed.
Fast Binary Coding for the Scene Classification of High-Resolution Remote Sensing Imagery
Directory of Open Access Journals (Sweden)
Fan Hu
2016-06-01
Scene classification of high-resolution remote sensing (HRRS) imagery is an important task in the intelligent processing of remote sensing images and has attracted much attention in recent years. Although the existing scene classification methods, e.g., the bag-of-words (BOW) model and its variants, can achieve acceptable performance, these approaches strongly rely on the extraction of local features and complicated coding strategies, which are usually time-consuming and demand much expert effort. In this paper, we propose a fast binary coding (FBC) method to effectively generate efficient discriminative scene representations of HRRS images. The main idea is inspired by unsupervised feature learning techniques and binary feature descriptions. More precisely, equipped with an unsupervised feature learning technique, we first learn a set of optimal "filters" from large quantities of randomly sampled image patches and then obtain feature maps by convolving the image scene with the learned filters. After binarizing the feature maps, we perform a simple hashing step to convert the binary-valued feature maps to an integer-valued feature map. Finally, statistical histograms computed on the integer-valued feature map are used as global feature representations of the scenes of HRRS images, similar to the conventional BOW model. The analysis of the algorithm complexity and experiments on HRRS image datasets demonstrate that, in contrast with existing scene classification approaches, the proposed FBC has much faster computational speed and achieves comparable classification performance. In addition, we also propose two extensions to FBC, i.e., the spatial co-occurrence matrix and different visual saliency maps, for further improving its final classification accuracy.
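A minimal sketch of the FBC pipeline described above (convolve, binarize, hash to an integer map, histogram) might look like this, with random filters standing in for the unsupervised-learned ones and all sizes chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def fbc_features(image, filters, n_bins=None):
    """FBC-style descriptor (illustrative sketch): convolve with
    `filters`, binarize each response map at 0, pack the per-pixel bit
    vector across maps into one integer, and histogram the integers."""
    n = len(filters)
    k = filters.shape[1]
    h, w = image.shape
    maps = np.empty((n, h - k + 1, w - k + 1))
    for i, f in enumerate(filters):
        for r in range(h - k + 1):          # 'valid' 2-D correlation
            for c in range(w - k + 1):
                maps[i, r, c] = np.sum(image[r:r+k, c:c+k] * f)
    bits = (maps > 0).astype(np.int64)       # binarize feature maps
    codes = np.zeros(bits.shape[1:], dtype=np.int64)
    for i in range(n):                       # hash bits -> integer map
        codes |= bits[i] << i
    hist = np.bincount(codes.ravel(), minlength=n_bins or 2 ** n)
    return hist / hist.sum()                 # normalized global histogram

filters = rng.standard_normal((4, 3, 3))     # 4 random 3x3 "filters"
image = rng.standard_normal((16, 16))
h = fbc_features(image, filters)             # 2^4 = 16-bin scene descriptor
```

With 4 filters the descriptor is a 16-bin histogram; the real method learns the filters from image patches rather than drawing them at random.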
Hariharan, M; Sindhu, R; Vijean, Vikneswaran; Yazid, Haniza; Nadarajaw, Thiyagar; Yaacob, Sazali; Polat, Kemal
2018-03-01
Infant cry signals carry several levels of information about the reason for crying (hunger, pain, sleepiness and discomfort) or the pathological status (asphyxia, deafness, jaundice, prematurity, autism, etc.) of an infant, and are therefore suited for early diagnosis. In this work, a combination of wavelet packet based features and an Improved Binary Dragonfly Optimization based feature selection method is proposed to classify the different types of infant cry signals. Cry signals from two different databases were utilized. The first database contains 507 cry samples of normal (N), 340 of asphyxia (A), 879 of deaf (D), 350 of hungry (H) and 192 of pain (P). The second database contains 513 cry samples of jaundice (J), 531 of premature (Prem) and 45 of normal (N). Wavelet packet transform based energy and non-linear entropies (496 features), Linear Predictive Coding (LPC) based cepstral features (56 features), and Mel-frequency Cepstral Coefficients (MFCCs, 16 features) were extracted. The combined feature set consists of 568 features. To overcome the curse of dimensionality, an improved binary dragonfly optimization algorithm (IBDFO) was proposed to select the most salient attributes or features. Finally, an Extreme Learning Machine (ELM) kernel classifier was used to classify the different types of infant cry signals using all the features as well as only the highly informative ones. Several two-class and multi-class classification experiments were conducted. In the binary (two-class) experiments, maximum accuracies of 90.18% for H vs. P, 100% for A vs. N, 100% for D vs. N and 97.61% for J vs. Prem were achieved using the features selected by IBDFO (only 204 of the 568). For the classification of multiple cry signals (multi-class problem), the selected features could differentiate between three classes (N, A and D) with an accuracy of 100% and seven classes with an accuracy of 97.62%. The experimental …
International Nuclear Information System (INIS)
Vallee, R.L.
1968-01-01
The study of binary groups under their mathematical aspects constitutes the subject of binary analysis, the purpose of which is to develop simple, rigorous and practical methods for technicians, engineers and all those concerned with digital processing. This fast-expanding, if not decisive, subject now tends to play a major part in nuclear electronics as well as in several other research areas. (authors) [fr]
Linear tree codes and the problem of explicit constructions
Czech Academy of Sciences Publication Activity Database
Pudlák, Pavel
2016-01-01
Roč. 490, February 1 (2016), s. 124-144 ISSN 0024-3795 R&D Projects: GA ČR GBP202/12/G061 Institutional support: RVO:67985840 Keywords : tree code * error correcting code * triangular totally nonsingular matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016 http://www.sciencedirect.com/science/article/pii/S002437951500645X
SPORTS - a simple non-linear thermalhydraulic stability code
International Nuclear Information System (INIS)
Chatoorgoon, V.
1986-01-01
A simple code, called SPORTS, has been developed for two-phase stability studies. A novel method of solution of the finite difference equations was devised and incorporated, and many of the approximations that are common in other stability codes are avoided. SPORTS is believed to be accurate and efficient, as small and large time-steps are permitted, and it is hence suitable for micro-computers. (orig.)
FFT Algorithm for Binary Extension Finite Fields and Its Application to Reed–Solomon Codes
Lin, Sian Jheng
2016-08-15
Recently, a new polynomial basis over binary extension fields was proposed, such that the fast Fourier transform (FFT) over such fields can be computed with complexity of order O(n lg(n)), where n is the number of points evaluated in the FFT. In this paper, we reformulate this FFT algorithm so that it can be more easily understood and extended to develop frequency-domain decoding algorithms for (n = 2^m, k) systematic Reed-Solomon (RS) codes over F_{2^m}, m ∈ Z^+, with n − k a power of two. First, the basis of syndrome polynomials is reformulated in the decoding procedure so that the new transforms can be applied to it. A fast extended Euclidean algorithm is developed to determine the error locator polynomial. The computational complexity of the proposed decoding algorithm is O(n lg(n−k) + (n−k) lg^2(n−k)), improving upon the best currently available decoding complexity O(n lg^2(n) lg lg(n)), and reaching the best known complexity bound, established by Justesen in 1976. However, Justesen's approach applies only to codes over some specific fields that admit Cooley-Tukey FFTs. As revealed by computer simulations, the proposed decoding algorithm is 50 times faster than the conventional one for the (2^16, 2^15) RS code over F_{2^16}.
Adaptation of Zerotrees Using Signed Binary Digit Representations for 3D Image Coding
Directory of Open Access Journals (Sweden)
Mailhes Corinne
2007-01-01
Zerotrees of wavelet coefficients have shown good adaptability for the compression of three-dimensional images. EZW, the original algorithm using zerotrees, shows good performance and was successfully adapted to 3D image compression. This paper focuses on the adaptation of EZW to the compression of hyperspectral images. The subordinate pass is suppressed to remove the need to keep the significant pixels in memory. To compensate for the loss due to this removal, signed binary digit representations are used to increase the efficiency of the zerotrees. Contextual arithmetic coding with very limited contexts is also used. Finally, we show that this simplified version of 3D-EZW performs almost as well as the original one.
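A standard example of a signed binary digit representation of the kind invoked here is the non-adjacent form (NAF), which uses digits {−1, 0, 1} and maximizes runs of zeros. This sketch is illustrative background, not the coder's actual digit recoding:

```python
def naf(n):
    """Non-adjacent form of a positive integer: signed digits in
    {-1, 0, 1}, least significant first, with no two adjacent nonzero
    digits. Among signed binary representations it has the fewest
    nonzero digits, i.e. the longest zero runs, which is what makes
    signed-digit representations attractive for zerotree-style coding."""
    digits = []
    while n:
        if n & 1:
            d = 2 - (n & 3)   # +1 if n % 4 == 1, -1 if n % 4 == 3
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

# 7 = 8 - 1: digits [-1, 0, 0, 1] instead of plain binary [1, 1, 1]
```

The long zero runs are exactly the property a zerotree coder exploits when the subordinate pass is dropped.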
Iterative solution of linear equations in ODE codes. [Krylov subspaces
Energy Technology Data Exchange (ETDEWEB)
Gear, C. W.; Saad, Y.
1981-01-01
Each integration step of a stiff equation involves the solution of a nonlinear equation, usually by a quasi-Newton method that leads to a set of linear problems. Iterative methods for these linear equations are studied. Of particular interest are methods that do not require an explicit Jacobian, but can work directly with differences of function values, using Jδ ≈ f(x + δ) − f(x). Some numerical experiments using a modification of LSODE are reported. 1 figure, 2 tables.
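The matrix-free idea can be sketched directly: a forward difference of function values approximates a Jacobian-vector product, which is all a Krylov-subspace solver needs. The function and step size below are illustrative:

```python
import numpy as np

def jv_product(f, x, v, eps=1e-7):
    """Approximate the Jacobian-vector product J(x) @ v by the forward
    difference J v ~ (f(x + eps*v) - f(x)) / eps, so an iterative
    (Krylov-type) linear solver inside a quasi-Newton step never needs
    an explicit Jacobian matrix."""
    return (f(x + eps * v) - f(x)) / eps

# Example: f(x) = (x0^2, x0*x1) has Jacobian [[2*x0, 0], [x1, x0]].
f = lambda x: np.array([x[0] ** 2, x[0] * x[1]])
x = np.array([2.0, 3.0])
v = np.array([1.0, 1.0])
approx = jv_product(f, x, v)   # ~ J @ v = [4.0, 5.0]
```

Each iteration of a Krylov method then costs one extra evaluation of f instead of a Jacobian factorization.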
Development of non-linear vibration analysis code for CANDU fuelling machine
International Nuclear Information System (INIS)
Murakami, Hajime; Hirai, Takeshi; Horikoshi, Kiyomi; Mizukoshi, Kaoru; Takenaka, Yasuo; Suzuki, Norio.
1988-01-01
This paper describes the development of a non-linear, dynamic analysis code for the CANDU 600 fuelling machine (F-M), which includes a number of non-linearities such as gap with or without Coulomb friction, special multi-linear spring connections, etc. The capabilities and features of the code and the mathematical treatment for the non-linearities are explained. The modeling and numerical methodology for the non-linearities employed in the code are verified experimentally. Finally, the simulation analyses for the full-scale F-M vibration testing are carried out, and the applicability of the code to such multi-degree of freedom systems as F-M is demonstrated. (author)
Displaced dynamics of binary mixtures in linear and nonlinear optical lattices
Sekh, Golam Ali; Salerno, Mario; Saha, Aparna; Talukdar, Benoy
2012-02-01
The dynamical behavior of matter-wave solitons of two-component Bose-Einstein condensates (BEC) in combined linear and nonlinear optical lattices (OLs) is investigated. In particular, the dependence of the frequency of the oscillating dynamics resulting from initially slightly displaced components is investigated both analytically, by means of a variational effective potential approach for the reduced collective coordinate dynamics of the soliton, and numerically, by direct integrations of the mean field equations of the BEC mixture. We show that for small initial displacements binary solitons can be viewed as point masses connected by elastic springs of strengths related to the amplitude of the OL and to the intra- and interspecies interactions. Analytical expressions of symmetric and antisymmetric mode frequencies are derived and occurrence of beatings phenomena in the displaced dynamics is predicted. These expressions are shown to give a very good estimation of the oscillation frequencies for different values of the intraspecies interatomic scattering length, as confirmed by direct numerical integrations of the mean field Gross-Pitaevskii equations (GPE) of the mixture. The possibility to use displaced dynamics for indirect measurements of BEC mixture characteristics such as number of atoms and interatomic interactions is also suggested.
International Nuclear Information System (INIS)
Nersisyan, H.B.; Zwicknagel, G.; Toepffer, C.
2003-01-01
The energy loss of a heavy ion moving in a magnetized electron plasma is considered within the linear response (LR) and binary collision (BC) treatments, with the purpose of establishing a connection between these two models. These two complementary approaches yield close results if no magnetic field is present, but discrepancies develop with growing magnetic field at ion velocities that are lower than, or comparable with, the thermal velocity of the electrons. We show that this is a peculiarity of the Coulomb interaction, which requires cutoff procedures to account for its singularity at the origin and its infinite range. The cutoff procedures in the LR and BC treatments are different, as the order of integrations in velocity and in ordinary (Fourier) space is reversed between the two treatments. While BC involves a velocity average of Coulomb logarithms, LR contains Coulomb logarithms of velocity-averaged cutoffs. The discrepancies between LR and BC vanish, except for small contributions of collective modes, for smoothed potentials that require no cutoffs. This is shown explicitly with the help of an improved BC treatment in which the velocity transfer is treated up to second order in the interaction in Fourier space.
Linear-Time Non-Malleable Codes in the Bit-Wise Independent Tampering Model
DEFF Research Database (Denmark)
Cramer, Ronald; Damgård, Ivan Bjerre; Döttling, Nico
Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m′ (eventually…) … non-malleable codes of Agrawal et al. (TCC 2015) and of Cheraghchi and Guruswami (TCC 2014), and improves the previous result in the bit-wise tampering model: it builds the first non-malleable codes with linear-time complexity and optimal rate (i.e., rate 1 − o(1)).
Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks
DEFF Research Database (Denmark)
Heide, J; Zhang, Qi; Fitzek, F H P
2013-01-01
This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant reduction in the number of transmitted packets can be achieved. However, NC introduces additional computations and potentially a non-negligible transmission overhead, both of which depend on the chosen coding parameters. Therefore it is necessary to consider the trade-off that these coding parameters …
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and
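The trellis idea for block codes can be made concrete with a syndrome trellis: states are syndromes, a stage-i edge for bit b moves state s to s XOR b·H[:, i], and the surviving path that ends at the zero syndrome is the minimum-distance codeword. This hard-decision Viterbi sketch for the (7, 4) Hamming code is an illustration of the principle, not the book's soft-decision machinery:

```python
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])  # parity checks of the (7, 4) Hamming code

def viterbi_block_decode(r, H):
    """Hard-decision Viterbi on the syndrome trellis of a binary linear
    block code: keep, for each reachable syndrome, the cheapest bit path
    that produces it; the survivor at the all-zero syndrome after n
    stages is the codeword closest to r in Hamming distance."""
    m, n = H.shape
    cols = [int("".join(map(str, H[:, i])), 2) for i in range(n)]
    metric = {0: (0, [])}                 # syndrome -> (distance, path)
    for i in range(n):
        nxt = {}
        for s, (d, path) in metric.items():
            for b in (0, 1):
                s2 = s ^ (cols[i] if b else 0)
                d2 = d + (b != r[i])
                if s2 not in nxt or d2 < nxt[s2][0]:
                    nxt[s2] = (d2, path + [b])
        metric = nxt
    return metric[0][1]                   # best path ending at syndrome 0

# Flip one bit of the all-zero codeword; the trellis decoder corrects it.
r = [0, 0, 0, 0, 1, 0, 0]
cw = viterbi_block_decode(r, H)           # -> the all-zero codeword
```

The trellis here has at most 2^(n−k) = 8 states per stage, which is exactly the kind of structural bound whose discovery revived trellis decoding of block codes.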
Learning binary code via PCA of angle projection for image retrieval
Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong
2018-01-01
With the benefits of low storage costs and high query speeds, binary code representation methods are widely researched for efficiently retrieving large-scale data. In image hashing methods, learning a hashing function that embeds high-dimensional features into Hamming space is the key step for accurate retrieval. Principal component analysis (PCA) is widely used in compact hashing methods: most of these methods adopt PCA projection functions to project the original data onto several dimensions of real values, and each of the projected dimensions is then quantized into one bit by thresholding. The variances of the different projected dimensions differ, and the real-valued projection introduces a large quantization error. To avoid this, in this paper we propose a cosine-similarity (angle) projection for each dimension; the angle projection preserves the original structure and is more compact than the cosine-valued projection alone. We combine our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
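The PCA-projection-plus-thresholding baseline that the angle projection and ITQ refine can be sketched as follows; the data and bit count are illustrative:

```python
import numpy as np

def pca_binary_codes(X, n_bits):
    """Baseline PCA hashing (the starting point that ITQ and the
    angle-projection variant improve on): project centered data onto
    the top principal directions, then quantize each projected
    dimension into one bit by thresholding at zero."""
    Xc = X - X.mean(axis=0)
    # principal directions = right singular vectors of the centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_bits].T                        # d x n_bits projection
    return (Xc @ W > 0).astype(np.uint8), W

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 16))           # 100 items, 16-dim features
codes, W = pca_binary_codes(X, n_bits=8)

# Retrieval then compares items by Hamming distance between codes:
d01 = int(np.count_nonzero(codes[0] ^ codes[1]))
```

The unequal variances of the projected dimensions are visible in this baseline: the first bit carries far more of the data's variance than the last, which is precisely the imbalance that rotation (ITQ) and angle-based projections are designed to reduce.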
Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos
Xu, Dawen; Wang, Rangding
2015-05-01
A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG) so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J^2) statistics can be applied directly. In a simulation study, TG, HL, and J^2 were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J^2 were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC; in this case, TG had more power than HL or J^2. © 2015 John Wiley & Sons Ltd/London School of Economics.
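For orientation, the grouping-based HL statistic mentioned above can be sketched as follows, with equal-size groups formed by sorting on the fitted probabilities; the simulated data are illustrative:

```python
import numpy as np

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow summary GOF statistic: partition observations
    into `groups` equal-size bins by fitted probability and compare
    observed vs. expected event counts. Under an adequate model it is
    approximately chi-square with groups - 2 degrees of freedom."""
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y)), groups):
        n_g = len(idx)
        obs = y[idx].sum()          # observed events in the bin
        exp = p[idx].sum()          # expected events in the bin
        pbar = exp / n_g
        stat += (obs - exp) ** 2 / (n_g * pbar * (1.0 - pbar))
    return stat

rng = np.random.default_rng(2)
p = rng.uniform(0.05, 0.95, size=500)
y = (rng.uniform(size=500) < p).astype(int)  # data generated from the model
stat = hosmer_lemeshow(y, p)                 # moderate value expected here
```

Since the data are generated from the fitted probabilities, the statistic should land in the bulk of a chi-square distribution with 8 degrees of freedom rather than far in its tail.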
Energy Technology Data Exchange (ETDEWEB)
Bosler, G.E.; O'Dell, R.D.; Resnik, W.M.
1976-03-01
The LASIP-III code was developed for processing Version III standard interface data files, which have been specified by the Committee on Computer Code Coordination. This processor performs two distinct tasks, namely, transforming free-field-format BCD data into well-defined binary files and providing for printing and punching data in the binary files. While LASIP-III is exported as a complete free-standing code package, techniques are described for easily separating the processor into two modules, viz., one for creating the binary files and one for printing the files. The two modules can be separated into free-standing codes or they can be incorporated into other codes. Also, the LASIP-III code can be easily expanded for processing additional files, and procedures are described for such an expansion. 2 figures, 8 tables.
Proceedings of the conference on computer codes and the linear accelerator community
International Nuclear Information System (INIS)
Cooper, R.K.
1990-07-01
The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned.
Directory of Open Access Journals (Sweden)
A. A. Kovylin
2013-01-01
The article describes the problem of searching for binary pseudo-random sequences with quasi-ideal autocorrelation functions, which are used in contemporary communication systems, including mobile and wireless data transfer interfaces. In the synthesis of binary sequence sets, the aim is to form them on the basis of a minimax criterion, by which a sequence is considered optimal for the intended application. In the course of the research, optimal sequences of order up to 52 were obtained, and an analysis of their run lengths was carried out. The analysis showed regularities in the distribution of the number of runs of different lengths in the codes that are optimal under the chosen criterion, which should make it possible to optimize the search for such codes in the future.
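A brute-force version of such a minimax search is easy to state for small orders. The sidelobe criterion below (peak aperiodic autocorrelation magnitude off the main lobe) is a standard choice assumed for illustration, not taken from the article:

```python
from itertools import product

def max_sidelobe(seq):
    """Peak off-peak aperiodic autocorrelation magnitude of a +/-1 sequence."""
    n = len(seq)
    return max(abs(sum(seq[i] * seq[i + k] for i in range(n - k)))
               for k in range(1, n))

def minimax_search(n):
    """Exhaustive minimax search over all 2^n binary (+/-1) sequences of
    order n: keep those whose worst autocorrelation sidelobe is smallest
    (a Barker-style criterion; feasible only for small n)."""
    best, best_seqs = None, []
    for bits in product((1, -1), repeat=n):
        m = max_sidelobe(bits)
        if best is None or m < best:
            best, best_seqs = m, [bits]
        elif m == best:
            best_seqs.append(bits)
    return best, best_seqs

best, seqs = minimax_search(7)   # the Barker-7 code achieves sidelobe level 1
```

For larger orders (such as 52) exhaustive enumeration is hopeless, which is why regularities in the run-length structure of small optimal codes are valuable for pruning the search.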
Abdullah, Ade Rani
2016-01-01
Technology plays an important role in transmitting information. Compression aims to reduce the size of data relative to the original data. The Even-Rodeh Code and the Variable Length Binary Encoding (VLBE) algorithm are the lossless compression methods used in this research; their performance is measured by Compression Ratio (CR), Ratio of Compression (RC), Redundancy (RD), Compression Time (milliseconds) and Decompression Time (milliseconds). ...
Nonlinear to Linear Elastic Code Coupling in 2-D Axisymmetric Media.
Energy Technology Data Exchange (ETDEWEB)
Preston, Leiph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-08-01
Explosions within the earth nonlinearly deform the local media, but at typical seismological observation distances, the seismic waves can be considered linear. Although nonlinear algorithms can simulate explosions in the very near field well, these codes are computationally expensive and inaccurate at propagating these signals to great distances. A linearized wave propagation code, coupled to a nonlinear code, provides an efficient mechanism to both accurately simulate the explosion itself and to propagate these signals to distant receivers. To this end we have coupled Sandia's nonlinear simulation algorithm CTH to a linearized elastic wave propagation code for 2-D axisymmetric media (axiElasti) by passing information from the nonlinear to the linear code via time-varying boundary conditions. In this report, we first develop the 2-D axisymmetric elastic wave equations in cylindrical coordinates. Next we show how we design the time-varying boundary conditions passing information from CTH to axiElasti, and finally we demonstrate the coupling code via a simple study of the elastic radius.
STACK DECODING OF LINEAR BLOCK CODES FOR DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM
Directory of Open Access Journals (Sweden)
H. Prashantha Kumar
2012-03-01
The boundaries between block and convolutional codes have become diffused after recent advances in the understanding of the trellis structure of block codes and the tail-biting structure of some convolutional codes. Therefore, decoding algorithms traditionally proposed for decoding convolutional codes have been applied to decoding certain classes of block codes. This paper presents the decoding of block codes using a tree structure. Many good block codes are presently known. Several of them have been used in applications ranging from deep space communication to error control in storage systems. But the primary difficulty with applying the Viterbi or BCJR algorithms to the decoding of block codes is that, even though they are optimum decoding methods, the promised bit error rates are not achieved in practice at data rates close to capacity. This is because the decoding effort is fixed and grows with block length, and thus only short block length codes can be used. Therefore, an important practical question is whether a suboptimal realizable soft decision decoding method can be found for block codes. A noteworthy result which provides a partial answer to this question is described in the following sections. This result of near-optimum decoding will be used as motivation for the investigation of different soft decision decoding methods for linear block codes, which can lead to the development of efficient decoding algorithms. The code tree can be treated as an expanded version of the trellis, where every path is totally distinct from every other path. We have derived the tree structure for the (8, 4) and (16, 11) extended Hamming codes and have succeeded in implementing the soft decision stack algorithm to decode them. For the discrete memoryless channel, gains in excess of 1.5 dB at a bit error rate of 10^-5 with respect to conventional hard decision decoding are demonstrated for these codes.
DEFF Research Database (Denmark)
Yu, Xianbin; Gibbon, Timothy Braidwood; Tafur Monroy, Idelfonso
2009-01-01
In this letter, an all-optical incoherent scheme for the generation of binary phase-coded ultra-wideband (UWB) impulse radio signals is proposed. The generated UWB pulses utilize relaxation oscillations of an optically injected distributed feedback laser that are binary phase encoded (0 and π) and meet …
Directory of Open Access Journals (Sweden)
Eric Z. Chen
2015-01-01
Error control codes have been widely used in data communications and storage systems. One central problem in coding theory is to optimize the parameters of a linear code and construct codes with the best possible parameters. There are tables of best-known linear codes over finite fields of sizes up to 9. Recently, there has been a growing interest in codes over $\mathbb{F}_{13}$ and other fields of size greater than 9. The main purpose of this work is to present a database of best-known linear codes over the field $\mathbb{F}_{13}$, together with upper bounds on the minimum distances. To find good linear codes that establish lower bounds on minimum distances, an iterative heuristic computer search algorithm is employed to construct quasi-twisted (QT) codes over the field $\mathbb{F}_{13}$ with high minimum distances. A large number of new linear codes have been found, improving previously best-known results. Tables of $[pm, m]$ QT codes over $\mathbb{F}_{13}$ with best-known minimum distances, as well as a table of lower and upper bounds on the minimum distances for linear codes of length up to 150 and dimension up to 6, are presented.
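Establishing a lower bound on the minimum distance ultimately requires computing the distance of a candidate code; for the small dimensions covered by the tables (k ≤ 6), this can be done by brute-force enumeration of messages. The generator matrix below is a made-up illustration, not a code from the database:

```python
import numpy as np
from itertools import product

def min_distance(G, q):
    """Minimum Hamming distance of the linear code over GF(q) generated
    by the k x n matrix G, by enumerating all q^k - 1 nonzero message
    vectors (feasible only for small k, as in the low-dimension tables)."""
    k, n = G.shape
    best = n
    for msg in product(range(q), repeat=k):
        if not any(msg):
            continue                         # skip the zero codeword
        cw = (np.array(msg) @ G) % q
        best = min(best, int(np.count_nonzero(cw)))
    return best

# Illustrative (not from the database): a [4, 2] code over GF(13)
G = np.array([[1, 0, 1, 1],
              [0, 1, 1, 2]])
d = min_distance(G, 13)   # = 3 for this toy code
```

Heuristic searches such as the QT construction use structure to avoid this exponential enumeration, but small cases like this one are handy for validating a search implementation.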
Binary sequence detector uses minimum number of decision elements
Perlman, M.
1966-01-01
A detector for an n-bit binary sequence code within a serial binary data system assigns states to the memory elements of the code-sequence detector by employing the same ordering of states in the sequence detector as in the sequence generator, when the linear recursion relationship employed by the sequence generator is given.
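The generator/detector relationship described above can be sketched with a maximal-length LFSR for the primitive polynomial x⁴ + x + 1 and a detector that simply checks the same linear recursion on the received stream. This is a hypothetical minimal version of the idea, not the patented circuit.

```python
def lfsr_stream(state, n):
    """Fibonacci LFSR for the primitive polynomial x^4 + x + 1:
    s[k+4] = s[k+1] XOR s[k].  `state` holds [s3, s2, s1, s0]."""
    out = []
    r = list(state)
    for _ in range(n):
        out.append(r[3])                 # oldest bit leaves the register
        fb = r[2] ^ r[3]                 # feedback: s[k+1] XOR s[k]
        r = [fb] + r[:3]
    return out

def detect(bits):
    """Accept iff every window satisfies the generator's linear recursion."""
    return all(bits[i + 4] == (bits[i + 1] ^ bits[i])
               for i in range(len(bits) - 4))

seq = lfsr_stream([0, 0, 0, 1], 30)      # m-sequence of period 2^4 - 1 = 15
```

Because the detector reuses the generator's recursion, it needs only the same n memory elements plus the comparison logic, which is the economy the patent is after.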
A new approach of binary addition and subtraction by non-linear ...
Indian Academy of Sciences (India)
All-optical parallel computation uses the parallelism of optics with all its possibilities to overcome the limitations and restrictions on arithmetic and logic operations in the optical domain. Here, the authors propose a new technique for a binary addition and subtraction scheme by a proper all-optical switching system. This technique …
Berdyugin, A.; Piirola, V.; Sakanoi, T.; Kagitani, M.; Yoneda, M.
2018-03-01
Aim. To study the binary geometry of the classic Algol-type triple system λ Tau, we have searched for polarization variations over the orbital cycle of the inner semi-detached binary, arising from light scattering in the circumstellar material formed from ongoing mass transfer. Phase-locked polarization curves provide an independent estimate for the inclination i, orientation Ω, and the direction of the rotation for the inner orbit. Methods: Linear polarization measurements of λ Tau in the B, V, and R passbands with the high-precision Dipol-2 polarimeter have been carried out. The data have been obtained on the 60 cm KVA (Observatory Roque de los Muchachos, La Palma, Spain) and Tohoku 60 cm (Haleakala, Hawaii, USA) remotely controlled telescopes over 69 observing nights. Analytic and numerical modelling codes are used to interpret the data. Results: Optical polarimetry revealed small intrinsic polarization in λ Tau with 0.05% peak-to-peak variation over the orbital period of 3.95 d. The variability pattern is typical for binary systems showing strong second harmonic of the orbital period. We apply a standard analytical method and our own light scattering models to derive parameters of the inner binary orbit from the fit to the observed variability of the normalized Stokes parameters. From the analytical method, the average for three passband values of orbit inclination i = 76° +1°/−2° and orientation Ω = 15°(195°) ± 2° are obtained. Scattering models give similar inclination values i = 72°-76° and orbit orientation ranging from Ω = 16°(196°) to Ω = 19°(199°), depending on the geometry of the scattering cloud. The rotation of the inner system, as seen on the plane of the sky, is clockwise. We have found that with the scattering model the best fit is obtained for the scattering cloud located between the primary and the secondary, near the inner Lagrangian point or along the Roche lobe surface of the secondary facing the primary. The inclination i…
I-Ching, dyadic groups of binary numbers and the geno-logic coding in living bodies.
Hu, Zhengbing; Petoukhov, Sergey V; Petukhova, Elena S
2017-12-01
The ancient Chinese book I-Ching was written a few thousand years ago. It introduces the system of symbols Yin and Yang (equivalents of 0 and 1). It had a powerful impact on the culture, medicine, and science of ancient China and several other countries. From the modern standpoint, the I-Ching declares the importance of dyadic groups of binary numbers for Nature. The system of the I-Ching is represented by tables with dyadic groups of 4 bigrams, 8 trigrams, and 64 hexagrams, which were declared fundamental archetypes of Nature. The ancient Chinese did not know about the genetic code of protein sequences of amino acids, but this code is organized in accordance with the I-Ching: in particular, the genetic code is constructed on DNA molecules using 4 nitrogenous bases, 16 doublets, and 64 triplets. The article also describes the usage of dyadic groups as a foundation of the bio-mathematical doctrine of the geno-logic code, which exists in parallel with the known genetic code of amino acids but serves a different goal: to code the inherited algorithmic processes using logical holography and the spectral logic of systems of genetic Boolean functions. Some relations of this doctrine to the I-Ching are discussed. In addition, the ratios of musical harmony that can be revealed in the parameters of DNA structure are also represented in the I-Ching. Copyright © 2017 Elsevier Ltd. All rights reserved.
Linear calculations of edge current driven kink modes with BOUT++ code
Li, G. Q.; Xu, X. Q.; Snyder, P. B.; Turnbull, A. D.; Xia, T. Y.; Ma, C. H.; Xi, P. W.
2014-10-01
This work extends previous BOUT++ work to systematically study the impact of edge current density on edge localized modes, and to benchmark with the GATO and ELITE codes. Using the CORSICA code, a set of equilibria was generated with different edge current densities by keeping total current and pressure profile fixed. Based on these equilibria, the effects of the edge current density on the MHD instabilities were studied with the 3-field BOUT++ code. For the linear calculations, with increasing edge current density, the dominant modes are changed from intermediate-n and high-n ballooning modes to low-n kink modes, and the linear growth rate becomes smaller. The edge current provides stabilizing effects on ballooning modes due to the increase of local shear at the outer mid-plane with the edge current. For edge kink modes, however, the edge current does not always provide a destabilizing effect; with increasing edge current, the linear growth rate first increases, and then decreases. In benchmark calculations for BOUT++ against the linear results with the GATO and ELITE codes, the vacuum model has important effects on the edge kink mode calculations. By setting a realistic density profile and Spitzer resistivity profile in the vacuum region, the resistivity was found to have a destabilizing effect on both the kink mode and on the ballooning mode. With diamagnetic effects included, the intermediate-n and high-n ballooning modes can be totally stabilized for finite edge current density.
Linear calculations of edge current driven kink modes with BOUT++ code
Energy Technology Data Exchange (ETDEWEB)
Li, G. Q., E-mail: ligq@ipp.ac.cn; Xia, T. Y. [Institute of Plasma Physics, CAS, Hefei, Anhui 230031 (China); Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Xu, X. Q. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Snyder, P. B.; Turnbull, A. D. [General Atomics, San Diego, California 92186 (United States); Ma, C. H.; Xi, P. W. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); FSC, School of Physics, Peking University, Beijing 100871 (China)
2014-10-15
This work extends previous BOUT++ work to systematically study the impact of edge current density on edge localized modes, and to benchmark with the GATO and ELITE codes. Using the CORSICA code, a set of equilibria was generated with different edge current densities by keeping total current and pressure profile fixed. Based on these equilibria, the effects of the edge current density on the MHD instabilities were studied with the 3-field BOUT++ code. For the linear calculations, with increasing edge current density, the dominant modes are changed from intermediate-n and high-n ballooning modes to low-n kink modes, and the linear growth rate becomes smaller. The edge current provides stabilizing effects on ballooning modes due to the increase of local shear at the outer mid-plane with the edge current. For edge kink modes, however, the edge current does not always provide a destabilizing effect; with increasing edge current, the linear growth rate first increases, and then decreases. In benchmark calculations for BOUT++ against the linear results with the GATO and ELITE codes, the vacuum model has important effects on the edge kink mode calculations. By setting a realistic density profile and Spitzer resistivity profile in the vacuum region, the resistivity was found to have a destabilizing effect on both the kink mode and on the ballooning mode. With diamagnetic effects included, the intermediate-n and high-n ballooning modes can be totally stabilized for finite edge current density.
Solving linear systems in FLICA-4, thermohydraulic code for 3-D transient computations
International Nuclear Information System (INIS)
Allaire, G.
1995-01-01
FLICA-4 is a computer code, developed at the CEA (France), devoted to steady-state and transient thermal-hydraulic analysis of nuclear reactor cores, for small problems (around 100 mesh cells) as well as large ones (more than 100,000), on either standard workstations or vector supercomputers. As in other time-implicit codes, the most time- and memory-consuming part of FLICA-4 is the routine dedicated to solving the linear system (whose size is of the order of the number of cells). The efficiency of the code is therefore crucially influenced by the optimization of the algorithms used in assembling and solving linear systems: direct methods such as Gauss (or LU) decomposition for moderate-size problems, and iterative methods such as the preconditioned conjugate gradient for large problems. 6 figs., 13 refs.
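As a minimal illustration of the iterative branch of this solver strategy, here is a Jacobi-preconditioned conjugate gradient applied to a small SPD system. This is a sketch; FLICA-4's actual solvers and preconditioners are more elaborate.

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=500):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner,
    the kind of iterative solver used for large sparse SPD systems."""
    M_inv = 1.0 / np.diag(A)             # preconditioner: inverse diagonal
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system (a 1-D Laplacian, the kind of banded operator that
# arises from implicit discretizations).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b)
```

For a dense 100-cell problem, LU factorization would be cheaper; the preconditioned iteration pays off when the matrix is large and sparse, exactly the trade-off the abstract describes.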
Non-linearity parameter of binary liquid mixtures at elevated pressures
Indian Academy of Sciences (India)
In the present investigation an attempt in this field has been made, and B/A is calculated for four binary liquid mixtures at T = 303.15 K over a wide range of pressures (0.1 to 80 MPa). In this context, the Tong and Dong equation has been used to calculate the B/A values. Scarcity of experimental data and its variation …
ON-SKY DEMONSTRATION OF A LINEAR BAND-LIMITED MASK WITH APPLICATION TO VISUAL BINARY STARS
International Nuclear Information System (INIS)
Crepp, J.; Ge, J.; Kravchenko, I.; Serabyn, E.; Carson, J.
2010-01-01
We have designed and built the first band-limited coronagraphic mask used for ground-based high-contrast imaging observations. The mask resides in the focal plane of the near-infrared camera PHARO at the Palomar Hale telescope and receives a well-corrected beam from an extreme adaptive optics system. Its performance on-sky with single stars is comparable to current state-of-the-art instruments: contrast levels of ∼10⁻⁵ or better at 0.″8 in the Ks band after post-processing, depending on how well non-common-path errors are calibrated. However, given the mask's linear geometry, we are able to conduct additional unique science observations. Since the mask does not suffer from pointing errors down its long axis, it can suppress the light from two different stars simultaneously, such as the individual components of a spatially resolved binary star system, and search for faint tertiary companions. In this paper, we present the design of the mask, the science motivation for targeting binary stars, and our preliminary results, including the detection of a candidate M-dwarf tertiary companion orbiting the visual binary star HIP 48337, which we are continuing to monitor with astrometry to determine its association.
Context based Coding of Binary Shapes by Object Boundary Straightness Analysis
DEFF Research Database (Denmark)
Aghito, Shankar Manuel; Forchhammer, Søren
2004-01-01
A new lossless compression scheme for bilevel images targeted at binary shapes of image and video objects is presented. The scheme is based on a local analysis of the digital straightness of the causal part of the object boundary, which is used in the context definition for arithmetic encoding...
DEFF Research Database (Denmark)
Christensen, M. G.; Jensen, Søren Holdt
2006-01-01
A method for amplitude modulated sinusoidal audio coding is presented that has low complexity and low delay. It is based on a subband processing system, where, in each subband, the signal is modeled as an amplitude modulated sum of sinusoids. The envelopes are estimated using frequency-domain linear prediction and the prediction coefficients are quantized. As a proof of concept, we evaluate different configurations in a subjective listening test, and this shows that the proposed method offers significant improvements in sinusoidal coding. Furthermore, the properties of the frequency…
Non-linearity parameter of binary liquid mixtures at elevated pressures
Indian Academy of Sciences (India)
Abstract. When sound waves of high amplitude propagate, several non-linear effects occur. Ultrasonic studies in liquid mixtures provide valuable information about structure and interaction in such systems. The present investigation comprises the theoretical evaluation of the acoustic non-linearity parameter B/A of four …
The Generalized Logit-Linear Item Response Model for Binary-Designed Items
Revuelta, Javier
2008-01-01
This paper introduces the generalized logit-linear item response model (GLLIRM), which represents the item-solving process as a series of dichotomous operations or steps. The GLLIRM assumes that the probability function of the item response is a logistic function of a linear composite of basic parameters which describe the operations, and the…
Directory of Open Access Journals (Sweden)
J. Mutwil
2009-07-01
Shrinkage phenomena during solidification and cooling of hypereutectic aluminium-silicon alloys (AlSi18, AlSi21) have been examined. A vertical shrinkage rod casting with a circular cross-section (constant or tapered) was used as the test sample. Two types of experiments were conducted: (1) on the development of linear dimension changes (linear expansion/contraction) of the test sample, and (2) on the development of shrinkage stresses in the test sample. In the linear contraction experiments, the linear dimension changes of the test sample and of the metal test mould, as well as the temperature at six points of the test sample, were registered. In the shrinkage stress experiments, the shrinkage tension force and the linear dimension changes of the test sample, as well as the temperature at three points of the test sample, were registered. The registered time dependences of the linear dimension changes of the test bar and the test mould showed that the so-called pre-shrinkage extension was caused mainly by thermal extension of the mould. The results showed that both the linear contraction and the development of shrinkage stresses clearly depend on the metal temperature in the warmest region of the sample (the thermal centre).
Large deformation image classification using generalized locality-constrained linear coding.
Zhang, Pei; Wee, Chong-Yaw; Niethammer, Marc; Shen, Dinggang; Yap, Pew-Thian
2013-01-01
Magnetic resonance (MR) imaging has been demonstrated to be very useful for clinical diagnosis of Alzheimer's disease (AD). A common approach to using MR images for AD detection is to spatially normalize the images by non-rigid image registration, and then perform statistical analysis on the resulting deformation fields. Due to the high nonlinearity of the deformation field, recent studies suggest using the initial momentum instead, as it lies in a linear space and fully encodes the deformation field. In this paper we explore the use of the initial momentum for image classification by focusing on the problem of AD detection. Experiments on the public ADNI dataset show that the initial momentum, together with a simple sparse coding technique, locality-constrained linear coding (LLC), can achieve a classification accuracy that is comparable to or even better than the state of the art. We also show that the performance of LLC can be greatly improved by introducing proper weights to the codebook.
Development of a 3D non-linear implicit MHD code
International Nuclear Information System (INIS)
Nicolas, T.; Ichiguchi, K.
2016-06-01
This paper details the ongoing development of a 3D non-linear implicit MHD code, which aims at making possible large-scale simulations of the non-linear phase of the interchange mode. The goal of the paper is to explain the rationale behind the choices made during development, and the technical difficulties encountered. At the present stage, the development of the code has not yet been completed. Most of the discussion concerns the first approach, which uses Cartesian coordinates in the poloidal plane. This approach shows serious difficulties in writing the preconditioner, closely related to the choice of coordinates. A second approach, based on curvilinear coordinates, also faced significant difficulties, which are detailed. The third and last approach explored involves unstructured tetrahedral grids, and indicates the possibility of solving the problem. The issue of domain meshing is addressed. (author)
New approach to derive linear power/burnup history input for CANDU fuel codes
International Nuclear Information System (INIS)
Lac Tang, T.; Richards, M.; Parent, G.
2003-01-01
The fuel element linear power/burnup history is a required input for the ELESTRES code in order to simulate CANDU fuel behavior during normal operating conditions, and also provides input for the accident analysis codes ELOCA and SOURCE. The purpose of this paper is to present a new approach for deriving 'true', or at least more realistic, linear power/burnup histories. Such an approach can be used to recreate any typical bundle power history if only a single pair of instantaneous values of bundle power and burnup, together with the position in the channel, is known. The histories obtained could be useful for performing more realistic simulations in safety analyses for cases where the reference (overpower) history is not appropriate. (author)
A new approach of binary addition and subtraction by non-linear ...
Indian Academy of Sciences (India)
optical domain by exploitation of a proper non-linear material-based switching technique. In this communication, the authors extend this technique to both an adder and a subtractor accommodating the spatial input encoding system.
FEAST: a two-dimensional non-linear finite element code for calculating stresses
International Nuclear Information System (INIS)
Tayal, M.
1986-06-01
The computer code FEAST calculates stresses, strains, and displacements. The code is two-dimensional: either plane or axisymmetric calculations can be done. The code models elastic, plastic, creep, and thermal strains and stresses. Cracking can also be simulated. The finite element method is used to solve equations describing the following fundamental laws of mechanics: equilibrium; compatibility; constitutive relations; yield criterion; and flow rule. FEAST combines several unique features that permit large time-steps in even severely non-linear situations. The features include a special formulation permitting many finite elements to cross the boundary from elastic to plastic behaviour simultaneously; accommodation of large drops in yield strength due to changes in local temperature; and a three-step predictor-corrector method for plastic analyses. These features reduce computing costs. Comparisons against twenty analytical solutions and against experimental measurements show that the predictions of FEAST are generally accurate to ±5%.
Real time implementation of a linear predictive coding algorithm on digital signal processor DSP32C
International Nuclear Information System (INIS)
Sheikh, N.M.; Usman, S.R.; Fatima, S.
2002-01-01
Pulse Code Modulation (PCM) has been widely used in speech coding. However, due to its high bit rate, PCM has severe limitations in applications where high spectral efficiency is desired, for example in mobile communication, CD-quality broadcasting systems, etc. These limitations have motivated research in bit-rate reduction techniques. Linear predictive coding (LPC) is one of the most powerful complex techniques for bit-rate reduction. With the introduction of powerful digital signal processors (DSPs) it is possible to implement the complex LPC algorithm in real time. In this paper we present a real-time implementation of the LPC algorithm on AT&T's DSP32C at a sampling frequency of 8192 Hz. Application of the LPC algorithm to two speech signals is discussed. Using this implementation, a bit-rate reduction of 1:3 is achieved for better-than-toll-quality speech, while a reduction of 1:16 is possible for the speech quality required in military applications. (author)
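The core of an LPC analyzer, the autocorrelation method followed by the Levinson-Durbin recursion, can be sketched as follows (in Python rather than DSP32C assembly; the AR(2) test signal is synthetic, not one of the paper's speech signals):

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the normal equations of linear prediction from autocorrelations
    r[0..order] by the Levinson-Durbin recursion.  Returns the A(z)
    coefficients (a[0] = 1) and the final prediction error power."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                       # reflection coefficient
        a[1:i + 1] += k * a[i - 1::-1]       # update predictor in place
        err *= 1.0 - k * k
    return a, err

# Synthetic AR(2) "speech" frame: x[n] = 0.5 x[n-1] - 0.25 x[n-2] + e[n],
# so the ideal predictor polynomial is A(z) = 1 - 0.5 z^-1 + 0.25 z^-2.
np.random.seed(0)
N = 20000
e = np.random.randn(N)
x = np.zeros(N)
for n in range(2, N):
    x[n] = 0.5 * x[n - 1] - 0.25 * x[n - 2] + e[n]

# Autocorrelation estimates, then the recursion.
r = np.array([x[: N - k] @ x[k:] / N for k in range(3)])
a, err = levinson_durbin(r, 2)
```

Only the quantized coefficients and a residual description need to be transmitted, which is where the bit-rate reduction over PCM comes from.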
Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.
Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro
2011-09-26
The performance of a rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. A net effective coding gain of 10 dB is obtained at a BER of 10⁻⁶. With the aid of time-interleaving polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion-managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10⁻². Potential issues such as phase ambiguity and coding length are also discussed with respect to implementing LDPC in current coherent optical systems. © 2011 Optical Society of America
A Linear Algebra Framework for Static High Performance Fortran Code Distribution
Directory of Open Access Journals (Sweden)
Corinne Ancourt
1997-01-01
High Performance Fortran (HPF) was developed to support data-parallel programming for single-instruction multiple-data (SIMD) and multiple-instruction multiple-data (MIMD) machines with distributed memory. The programmer is provided a familiar uniform logical address space and specifies the data distribution by directives. The compiler then exploits these directives to allocate arrays in the local memories, to assign computations to elementary processors, and to migrate data between processors when required. We show here that linear algebra is a powerful framework to encode HPF directives and to synthesize distributed code with space-efficient array allocation, tight loop bounds, and vectorized communications for INDEPENDENT loops. The generated code includes traditional optimizations such as guard elimination, message vectorization and aggregation, and overlap analysis. The systematic use of an affine framework makes it possible to prove the compilation scheme correct.
Analysis and Optimization of Sparse Random Linear Network Coding for Reliable Multicast Services
DEFF Research Database (Denmark)
Tassi, Andrea; Chatzigeorgiou, Ioannis; Roetter, Daniel Enrique Lucani
2016-01-01
Point-to-multipoint communications are expected to play a pivotal role in next-generation networks. This paper refers to a cellular system transmitting layered multicast services to a multicast group of users. Reliability of communications is ensured via different random linear network coding (RLNC) techniques. We deal with a fundamental problem: the computational complexity of the RLNC decoder. The higher the number of decoding operations is, the more the user's computational overhead grows and, consequently, the faster the battery of mobile devices drains. By referring to several sparse RLNC techniques, and without any assumption on the implementation of the RLNC decoder in use, we provide an efficient way to characterize the performance of users targeted by ultra-reliable layered multicast services. The proposed modeling allows us to efficiently derive the average number of coded packet…
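The basic RLNC encode/decode cycle whose decoder cost is at issue can be sketched over GF(2). This is a toy sketch with assumed packet sizes; sparse-coefficient variants change only how the coefficient vectors are drawn, while the Gaussian-elimination decoder below is exactly the cost driver the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(1)
k, length = 4, 8                       # source packets and bits per packet
source = rng.integers(0, 2, size=(k, length))

def rref_gf2(M, pivot_cols):
    """Row-reduce a 0/1 matrix over GF(2); returns (reduced matrix, rank),
    searching for pivots only in the first `pivot_cols` columns."""
    M = M.copy() % 2
    rank = 0
    for c in range(pivot_cols):
        piv = next((i for i in range(rank, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[rank, piv]] = M[[piv, rank]]
        for i in range(M.shape[0]):
            if i != rank and M[i, c]:
                M[i] ^= M[rank]
        rank += 1
    return M, rank

# Encoder: each coded packet is a random GF(2) combination of the sources.
coeffs, payloads = [], []
while True:
    c = rng.integers(0, 2, size=k)
    coeffs.append(c)
    payloads.append(c @ source % 2)
    if rref_gf2(np.array(coeffs), k)[1] == k:   # receiver has full rank
        break

# Decoder: Gaussian elimination on the augmented matrix [C | Y].
aug = np.hstack([np.array(coeffs), np.array(payloads)])
reduced, rank = rref_gf2(aug, k)
decoded = reduced[:k, k:]
```

Sparser coefficient vectors make the elimination cheaper (fewer row XORs) at the price of a higher chance of linearly dependent packets, which is the trade-off the paper quantifies.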
Method for linearizing deflection of a MEMS device using binary electrodes and voltage modulation
Horenstein, Mark N [West Roxbury, MA
2008-06-10
A micromechanical device comprising one or more electronically movable structure sets, each set comprising a first electrode supported on a substrate and a second electrode supported substantially parallel to said first electrode. Said second electrode is movable with respect to said first electrode, whereby an electric potential applied between said first and second electrodes causes said second electrode to move relative to said first electrode by a distance X, where X is a nonlinear function of said potential V. Means are provided for linearizing the relationship between V and X.
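The V-to-X nonlinearity and its open-loop linearization by voltage pre-distortion can be sketched numerically. All parameter values here are illustrative assumptions, and the simple parallel-plate force balance stands in for the patent's binary-electrode scheme.

```python
import math

# Illustrative parallel-plate actuator parameters (all values assumed).
k_spring = 1.0                 # N/m, mechanical restoring stiffness
gap = 1.0e-6                   # m, zero-voltage electrode gap
eps_A = 8.85e-12 * 1.0e-8      # permittivity * plate area

def volts_for_deflection(x):
    """Invert the force balance k*x = eps_A*V^2 / (2*(gap-x)^2):
    the voltage that holds the plate at deflection x (stable for x < gap/3)."""
    return math.sqrt(2.0 * k_spring * x * (gap - x) ** 2 / eps_A)

def deflection_for_volts(v):
    """Solve the same balance for x by bisection on the stable branch."""
    f = lambda x: k_spring * x - eps_A * v * v / (2.0 * (gap - x) ** 2)
    lo, hi = 0.0, gap / 3.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Commanding through the inverse map makes deflection linear in the command.
x_target = 0.2 * gap
v = volts_for_deflection(x_target)
x_back = deflection_for_volts(v)
```

Since electrostatic force scales as V², driving the device through an inverse map like `volts_for_deflection` (or, as in the patent, through binary electrodes and voltage modulation) is what restores a linear command-to-deflection response.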
Particle-in-Cell Code BEAMPATH for Beam Dynamics Simulations in Linear Accelerators and Beamlines
Energy Technology Data Exchange (ETDEWEB)
Batygin, Y.
2004-10-28
A code library, BEAMPATH, for 2-dimensional and 3-dimensional space-charge-dominated beam dynamics studies in linear particle accelerators and beam transport lines has been developed. The program is used for particle-in-cell simulation of axially symmetric, quadrupole-symmetric, and z-uniform beams in a channel containing RF gaps, radio-frequency quadrupoles, multipole lenses, solenoids, and bending magnets. The programming method includes hierarchical program design using program-independent modules and a flexible combination of modules to provide the most effective version of the structure for every specific case of simulation. Numerical techniques as well as the results of beam dynamics studies are presented.
DEFF Research Database (Denmark)
Fitzek, Frank; Toth, Tamas; Szabados, Áron
2014-01-01
This paper advocates the use of random linear network coding for storage in distributed clouds in order to reduce storage and traffic costs in dynamic settings, i.e. when adding and removing numerous storage devices/clouds on-the-fly and when the number of reachable clouds is limited. We introduce … Our techniques do not require us to retrieve the full original information in order to store meaningful information. Our numerical results show a high resilience over a large number of regeneration cycles compared to other approaches.
Random Linear Network Coding is Key to Data Survival in Highly Dynamic Distributed Storage
DEFF Research Database (Denmark)
Sipos, Marton A.; Fitzek, Frank; Roetter, Daniel Enrique Lucani
2015-01-01
as the number of available nodes varies greatly over time and keeping track of the system's state becomes unfeasible. As a consequence, conventional erasure correction approaches are ill-suited for maintaining data integrity. In this highly dynamic context, random linear network coding (RLNC) provides an interesting solution. Our goal is to characterize RLNC's guaranteed data-integrity region in terms of the total number of storage devices that need to be available and the stored data per device. We compare our fully distributed RLNC approach to centralized (genie-aided) and fully decentralized replication…
Throughput vs. Delay in Lossy Wireless Mesh Networks with Random Linear Network Coding
DEFF Research Database (Denmark)
Hundebøll, Martin; Pahlevani, Peyman; Roetter, Daniel Enrique Lucani
2014-01-01
This work proposes a new protocol applying on-the-fly random linear network coding in wireless mesh networks. The protocol provides increased reliability, low delay, and high throughput to the upper layers, while being oblivious to their specific requirements. These seemingly conflicting goals … and evaluated in a real test bed with Raspberry Pi devices. We show that order-of-magnitude gains in throughput over plain TCP are possible with moderate losses, and up to a two-fold improvement in per-packet delay in our results.
Further development of the V-code for recirculating linear accelerator simulations
Energy Technology Data Exchange (ETDEWEB)
Franke, Sylvain; Ackermann, Wolfgang; Weiland, Thomas [Institut fuer Theorie Elektromagnetischer Felder, Technische Universitaet Darmstadt (Germany); Eichhorn, Ralf; Hug, Florian; Kleinmann, Michaela; Platz, Markus [Institut fuer Kernphysik, Technische Universitaet Darmstadt (Germany)
2011-07-01
The Superconducting Darmstaedter LINear Accelerator (S-DALINAC) installed at the institute of nuclear physics (IKP) at TU Darmstadt is designed as a recirculating linear accelerator. The beam is first accelerated up to 10 MeV in the injector beam line. Then it is deflected by 180 degrees into the main linac. The linac section with eight superconducting cavities is passed up to three times, providing a maximal energy gain of 40 MeV on each passage. Due to this recirculating layout it is complicated to find an accurate setup for the various beam line elements. Fast online beam dynamics simulations can advantageously assist the operators because they provide a more detailed insight into the actual machine status. In this contribution, further developments of the moment-based simulation tool V-code, which enable the simulation of recirculating machines, are presented together with simulation results.
Directory of Open Access Journals (Sweden)
BALTA, H.
2013-05-01
This paper presents a study of the influence of the extrinsic information scaling coefficient value (eic) on the bit and frame error rate (BER/FER) for single and double binary turbo codes (S/DBTC) decoded with maximum a posteriori (MAP) and maximum logarithmic MAP (MaxLogMAP) component algorithms. Firstly, we estimate the distance spectrum of the code with the so-called error impulse method (EIM), and we analyze its dependence, as well as the dependence of the asymptotic FER, on eic. Secondly, we estimate the actual FER using Monte Carlo simulations with eic as a parameter. The comparison of the FER(eic) curves obtained by the two methods allows us, on the one hand, to assess the quality of the decoding algorithms and, on the other hand, to estimate the very low BER/FER performance of TCs, where the Monte Carlo method is practically unusable. The results presented also provide a practical guide for choosing the optimal value of the scaling factor eic. We note that the MAP algorithm performance could also be improved using eic < 1.
Burke, B. J.; Kruger, S. E.; Hegna, C. C.; Zhu, P.; Snyder, P. B.; Sovinec, C. R.; Howell, E. C.
2010-03-01
A linear benchmark between the linear ideal MHD stability codes ELITE [H. R. Wilson et al., Phys. Plasmas 9, 1277 (2002)], GATO [L. Bernard et al., Comput. Phys. Commun. 24, 377 (1981)], and the extended nonlinear magnetohydrodynamic (MHD) code NIMROD [C. R. Sovinec et al., J. Comput. Phys. 195, 355 (2004)] is undertaken for edge-localized MHD instabilities. Two ballooning-unstable, shifted-circle tokamak equilibria are compared where the stability characteristics are varied by changing the equilibrium plasma profiles. The equilibria model an H-mode plasma with a pedestal pressure profile and parallel edge currents. For both equilibria, NIMROD accurately reproduces the transition to instability (the marginally unstable mode), as well as the ideal growth spectrum for a large range of toroidal modes (n = 1-20). The results use the compressible MHD model and depend on a precise representation of "ideal-like" and "vacuumlike" or "halo" regions within the code. The halo region is modeled by the introduction of a Lundquist-value profile that transitions from a large to a small value at a flux surface location outside of the pedestal region. To model an ideal-like MHD response in the core and a vacuumlike response outside the transition, separate criteria on the plasma and halo Lundquist values are required. For the benchmarked equilibria the critical Lundquist values are 10⁸ and 10³ for the ideal-like and halo regions, respectively. Notably, this gives a ratio on the order of 10⁵, which is much larger than experimentally measured values using Te values associated with the top of the pedestal and separatrix. Excellent agreement with the ELITE and GATO calculations is obtained when sharp boundary transitions in the resistivity are used and a small amount of physical dissipation is added for conditions very near and below marginal ideal stability.
Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.
Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo
2015-08-01
Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
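One of the SC algorithms compared above, orthogonal matching pursuit, is short enough to sketch for linear regression. This is a noiseless demo with orthonormalized columns, where exact recovery of the sparse coefficients is guaranteed; the brief's experiments use real benchmark data instead.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then refit by least squares on the selected support."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Sparse-recovery demo: orthonormal columns via QR of a random matrix.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 20)))
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [3.0, -2.0, 1.5]
y = Q @ x_true
x_hat = omp(Q, y, 3)
```

Each iteration costs one matrix-vector product and one small least-squares solve, which is why greedy SC algorithms like this can be orders of magnitude faster than solving the full l1-penalized program.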
Why a New Code for Novae Evolution and Mass Transfer in Binaries?
Directory of Open Access Journals (Sweden)
G. Shaviv
2015-02-01
Full Text Available One of the most interesting problems in cataclysmic variables is their evolution on long time scales, which is also very important in the search for the progenitor of Type Ia supernovae. The classical approach to overcome this problem in simulations of nova evolution is to assume (1) a mass transfer rate that is constant in time and (2) a mass transfer rate that does not vary throughout the lifetime of the nova, even when many eruptions are considered. Here we show that these assumptions are valid only for a single thermonuclear flash, and such a calculation cannot be the basis for extrapolation of the behavior over many flashes. In particular, such a calculation cannot be used to predict under what conditions an accreting WD may reach the Chandrasekhar mass and collapse. We report on a new code to attack this problem. The basic idea is to create two parallel processes, one calculating the mass-losing star and the other the accreting white dwarf. The two processes communicate continuously with each other and follow the time-dependent mass loss.
Xie, Xianhong; Xue, Xiaonan; Strickler, Howard D
2018-01-15
Longitudinal measurement of biomarkers is important in determining risk factors for binary endpoints such as infection or disease. However, biomarkers are subject to measurement error, and some are also subject to left-censoring due to a lower limit of detection. Statistical methods to address these issues are few. We herein propose a generalized linear mixed model and estimate the model parameters using the Monte Carlo Newton-Raphson (MCNR) method. Inferences regarding the parameters are made by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCNR method with existing methods, including the maximum likelihood (ML) method and the ad hoc approach of replacing the left-censored values with half of the detection limit (HDL). The results showed that the performance of the MCNR method is superior to ML and HDL with respect to the empirical standard error, as well as the coverage probability for the 95% confidence interval. The HDL method uses an incorrect imputation method, and the computation is constrained by the number of quadrature points; while the ML method also suffers from this constraint on the number of quadrature points, the MCNR method does not have this limitation and approximates the likelihood function better than the other methods. The improvement of the MCNR method is further illustrated with real-world data from a longitudinal study of local cervicovaginal HIV viral load and its effects on oncogenic HPV detection in HIV-positive women. Copyright © 2017 John Wiley & Sons, Ltd.
Koleva, Bojidarka B; Kolev, Tsonko M; Tsalev, Dimiter L; Spiteller, Michael
2008-01-22
A quantitative infrared (IR) and Raman spectroscopic approach for the determination of phenacetin (Phen) and salophen (Salo) in binary solid mixtures with caffeine, phenacetin/caffeine (System 1) and salophen/caffeine (System 2), is presented. Absorbance ratios of the 746 cm⁻¹ or 721 cm⁻¹ peaks (characteristic for each of the determined compounds in Systems 1 and 2) to the 1509 cm⁻¹ and 1616 cm⁻¹ peaks (attributed to Phen and Salo, respectively) were used. IR spectroscopy gives a confidence of 98.9% (System 1) and 98.3% (System 2), while the Raman spectroscopic data show a slightly higher confidence of 99.1% for both systems. The limits of detection for the compounds studied were 0.013 and 0.012 mole fraction for the IR and Raman methods, respectively. Solid-state linear dichroic infrared (IR-LD) spectral analysis of the solid mixtures was carried out with a view to obtaining an experimental IR spectroscopic assignment of the characteristic IR bands of both determined compounds. The orientation technique of a nematic liquid crystal suspension was used, combined with the so-called reducing-difference procedure for the interpretation of polarized spectra. The possibility of obtaining supramolecular stereo-structural information for Phen and Salo by comparing spectroscopic and crystallographic data has also been shown. An independent high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) analysis was performed for comparison and validation of the vibrational spectroscopy data. Applications to 10 tablets of the commercial products APC and Sedalgin are given.
Space-Time Block Coding with Beamforming for Triple-Polarized Uniform Linear Array Systems
Directory of Open Access Journals (Sweden)
Xin Su
2015-01-01
Full Text Available Generally, space-time block coding (STBC) and beamforming (BF) gains cannot be obtained simultaneously, because the former performs well under a low-correlation MIMO channel while the latter works efficiently in an environment with high correlation. However, array systems with antenna polarization have the potential to achieve both gains simultaneously, because the cross-branch links in such a system are usually uncorrelated. The cross-array links, on the other hand, can be made highly correlated by setting the array element spacing equal to, or less than, a half-wavelength. This paper proposes a scheme to exploit STBC and BF simultaneously via a triple-polarized uniform linear array (TPULA) system. The proposed scheme was verified based on the Long Term Evolution-Advanced (LTE-A) specification under a polarized MIMO (PMIMO) channel model, and the simulation results confirm its validity.
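As background to the STBC component of the scheme above, the classic Alamouti 2x1 space-time block code (a textbook example, not the paper's TPULA-specific design) can be sketched as follows; channel gains and symbols are arbitrary illustrative values.

```python
def alamouti_encode(s1, s2):
    """Alamouti STBC: two complex symbols sent over two antennas in two
    time slots. Returns [(antenna1, antenna2) per slot]."""
    # slot 1: antenna1 sends s1, antenna2 sends s2
    # slot 2: antenna1 sends -conj(s2), antenna2 sends conj(s1)
    return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining of the two received samples, given (perfectly
    known) channel gains h1, h2; recovers both symbols exactly without noise."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g
    return s1_hat, s2_hat
```

The orthogonal structure is what lets the decoder separate the two symbols with simple linear processing, which is the diversity gain STBC contributes in the proposed hybrid scheme.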
International Nuclear Information System (INIS)
Lundsager, P.; Krenk, S.
1975-08-01
The static and dynamic response of a cylindrical/spherical containment to a Boeing 720 impact is computed using three different linear elastic computer codes: FINEL, SAP and STARDYNE. Stress and displacement fields are shown together with time histories for a point in the impact zone. The main conclusions from this study are: - In this case the maximum dynamic load factors for stress and displacements were close to 1, but a static analysis alone is not fully sufficient. - More realistic load time histories should be considered. - The main effects seem to be local; the present study does not indicate general collapse from elastic stresses alone. - Further study of material properties at high rates is needed. (author)
Modified linear predictive coding approach for moving target tracking by Doppler radar
Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao
2016-07-01
Doppler radar is a cost-effective tool for moving target tracking that can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of Doppler radar. Based on time-frequency analysis of the received echo, the proposed approach first estimates the noise statistical parameters in real time and constructs an adaptive filter to suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which helps improve the resolution of the target localization result. Compared with the traditional LPC method, which decides the extension data length empirically, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjust the extension data length adaptively. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments is conducted to illustrate the validity and performance of the proposed techniques.
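The data-extension idea at the heart of LPC can be illustrated with a minimal sketch, assuming a plain least-squares linear predictor rather than the authors' adaptive error-array machinery; the function name and interface are hypothetical.

```python
import numpy as np

def lpc_extend(x, order, n_extend):
    """Fit a linear predictor x[n] ~ sum_i a[i] * x[n-i] by least squares,
    then extrapolate the series by n_extend samples."""
    # regression matrix of lagged samples: row i = [x[i+order-1], ..., x[i]]
    rows = [x[i:i + order][::-1] for i in range(len(x) - order)]
    A = np.array(rows)
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    out = list(x)
    for _ in range(n_extend):
        # predict the next sample from the most recent `order` samples
        out.append(float(np.dot(a, out[-1:-order - 1:-1])))
    return np.array(out)
```

A pure sinusoid satisfies an exact order-2 recursion, so the extension reproduces it; real radar echoes are noisy, which is why the paper adds noise filtering and a prediction-error correction on top of this basic scheme.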
International Nuclear Information System (INIS)
Mota, F.; Ortiz, C. J.; Vila, R.
2012-01-01
The Irradiation Experimental Area of TechnoFusion will emulate the extreme irradiation conditions of fusion in materials by means of three ion accelerators: one used for self-implanting heavy ions (Fe, Si, C, ...) to emulate the displacement damage induced by fusion neutrons, and the other two for light ions (H and He) to emulate the transmutation induced by fusion neutrons. This laboratory will play an essential role in the selection of functional materials for the DEMO reactor, since it will allow reproducing the effects of neutron radiation on fusion materials. Ion irradiation produces little or no residual radioactivity, allowing handling of samples without the need for special precautions. Currently, two different methods are used to calculate the primary displacement damage by neutron irradiation or by ion irradiation. On one hand, the displacement damage doses induced by neutrons are calculated with the NRT model, based on the electronic screening theory of Lindhard; this methodology has been in common use since 1975. On the other hand, the experimental research community commonly uses the SRIM code to calculate the primary displacement damage dose induced by ion irradiation. Therefore, the two methodologies of primary displacement damage calculation have nothing in common. However, if we want to design ion irradiation experiments capable of emulating the effect of fusion neutrons in materials, it is necessary to develop comparable methodologies of damage calculation for both kinds of radiation. This would allow us to better define the ion irradiation parameters (ion species, current, ion energy, dose, etc.) required to emulate a specific neutron irradiation environment. Therefore, our main objective was to find a way to calculate the primary displacement damage induced by neutron irradiation and by ion irradiation starting from the same point, that is, the PKA spectrum.
In order to emulate the neutron irradiation that would prevail under fusion conditions, two approaches are contemplated: a) on
Directory of Open Access Journals (Sweden)
Hazlehurst Benny
2014-03-01
Full Text Available This paper offers a critique of the ‘binary’ nature of much biblical interpretation and ethical belief in the Church, rejecting simplistic ‘either-or’ approaches to both. Instead there is offered an interpretation of key biblical texts through the lenses of circumstances, needs and motivation. It is argued that, when these factors are taken into account, even for Evangelicals, there is no longer a substantive biblical case against the acceptance of faithful, loving same-sex partnerships and the development of a positive Christian ethic for lesbian, gay, bisexual and transgender people. At the very least, the complexity of the interpretive task must lead to greater openness to and acceptance of those from whom we differ.
LINFLUX-AE: A Turbomachinery Aeroelastic Code Based on a 3-D Linearized Euler Solver
Reddy, T. S. R.; Bakhle, M. A.; Trudell, J. J.; Mehmed, O.; Stefko, G. L.
2004-01-01
This report describes the development and validation of LINFLUX-AE, a turbomachinery aeroelastic code based on the linearized unsteady 3-D Euler solver, LINFLUX. A helical fan with flat plate geometry is selected as the test case for numerical validation. The steady solution required by LINFLUX is obtained from the nonlinear Euler/Navier Stokes solver TURBO-AE. The report briefly describes the salient features of LINFLUX and the details of the aeroelastic extension. The aeroelastic formulation is based on a modal approach. An eigenvalue formulation is used for flutter analysis. The unsteady aerodynamic forces required for flutter are obtained by running LINFLUX for each mode, interblade phase angle and frequency of interest. The unsteady aerodynamic forces for forced response analysis are obtained from LINFLUX for the prescribed excitation, interblade phase angle, and frequency. The forced response amplitude is calculated from the modal summation of the generalized displacements. The unsteady pressures, work done per cycle, eigenvalues and forced response amplitudes obtained from LINFLUX are compared with those obtained from LINSUB, TURBO-AE, ASTROP2, and ANSYS.
Detection optimization using linear systems analysis of a coded aperture laser sensor system
Energy Technology Data Exchange (ETDEWEB)
Gentry, S.M. [Sandia National Labs., Albuquerque, NM (United States). Optoelectronic Design Dept.
1994-09-01
Minimum detectable irradiance levels for a diffraction-grating-based laser sensor were calculated to be governed by clutter noise resulting from reflected earth albedo. Features on the earth's surface caused pseudo-imaging effects on the sensor's detector arrays that resulted in the limiting noise in the detection domain. It was theorized that a custom aperture transmission function existed that would optimize the detection of laser sources against this clutter background. Amplitude and phase aperture functions were investigated. Compared to the diffraction grating technique, a classical Young's double-slit aperture technique was investigated as a possible optimized solution but was not shown to produce a system with better clutter-noise-limited minimum detectable irradiance. Even though the double-slit concept was not found to have a detection advantage over the slit-grating concept, one interesting concept grew out of the double-slit design that deserves mention in this report, namely the Barker-coded double-slit. This diffractive aperture design possessed properties that significantly improved the wavelength accuracy of the double-slit design. While a concept was not found to beat the slit-grating concept, the methodology used for the analysis and optimization is an example of the application of optoelectronic system-level linear analysis. The techniques outlined here can be used as a template for analysis of a wide range of optoelectronic systems where the entire system, both optical and electronic, contributes to the detection of complex spatial and temporal signals.
Mendez, Rene A.; Claveria, Ruben M.; Orchard, Marcos E.; Silva, Jorge F.
2017-11-01
We present orbital elements and mass sums for 18 visual binary stars of spectral types B to K (five of which are new orbits) with periods ranging from 20 to more than 500 yr. For two double-line spectroscopic binaries with no previous orbits, the individual component masses, using combined astrometric and radial velocity data, have a formal uncertainty of ~0.1 M⊙. Adopting published photometry and trigonometric parallaxes, plus our own measurements, we place these objects on an H-R diagram and discuss their evolutionary status. These objects are part of a survey to characterize the binary population of stars in the Southern Hemisphere using the SOAR 4 m telescope + HRCAM at CTIO. Orbital elements are computed using a newly developed Markov chain Monte Carlo (MCMC) algorithm that delivers maximum-likelihood estimates of the parameters, as well as posterior probability density functions that allow us to evaluate the uncertainty of our derived parameters in a robust way. For spectroscopic binaries, using our approach, it is possible to derive a self-consistent parallax for the system from the combined astrometric and radial velocity data (an "orbital parallax"), which compares well with the trigonometric parallaxes. We also present a mathematical formalism that allows a dimensionality reduction of the feature space from seven to three search parameters (or from ten to seven dimensions, including parallax, in the case of spectroscopic binaries with astrometric data), which makes it possible to explore a smaller number of parameters in each case, improving the computational efficiency of our MCMC code. Based on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação (MCTI) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).
Biancalani, A.; Bottino, A.; Ehrlacher, C.; Grandgirard, V.; Merlo, G.; Novikau, I.; Qiu, Z.; Sonnendrücker, E.; Garbet, X.; Görler, T.; Leerink, S.; Palermo, F.; Zarzoso, D.
2017-06-01
The linear properties of the geodesic acoustic modes (GAMs) in tokamaks are investigated by comparing analytical theory and gyrokinetic numerical simulations. The dependence on the value of the safety factor, the finite orbit width of the ions in relation to the radial mode width, magnetic-flux-surface shaping, and the electron/ion mass ratio are considered. Nonuniformities in the plasma profiles (such as density, temperature, and safety factor), electromagnetic effects, collisions, and the presence of minority species are neglected. Also, only linear simulations are considered, focusing on the local dynamics. We use three different gyrokinetic codes: the Lagrangian (particle-in-cell) code ORB5, the Eulerian code GENE, and the semi-Lagrangian code GYSELA. One of the main aims of this paper is to provide a detailed comparison of the numerical results and analytical theory in the regimes where this is possible. This helps to better understand the behavior of the linear GAM dynamics in these different regimes and the behavior of the codes, which is crucial in view of future work where more physics is present, as well as the regimes of validity of each specific analytical dispersion relation.
Ishitani, Terry T.
2010-01-01
This study applied hierarchical linear modeling to investigate the effect of congruence on intrinsic and extrinsic aspects of job satisfaction. Particular focus was given to differences in job satisfaction by gender and by Holland's first-letter codes. The study sample included nationally represented 1462 female and 1280 male college graduates who…
Dattoli, Giuseppe
2005-01-01
The coherent synchrotron radiation (CSR) instability is one of the main problems limiting the performance of high-intensity electron accelerators. A code devoted to the analysis of this type of problem should be fast and reliable: conditions that are usually hard to achieve at the same time. In the past, codes based on Lie algebraic techniques have been very efficient in treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of non-linear contributions due to wake field effects. The proposed solution method exploits an algebraic technique, using exponential operators implemented numerically in C++. We show that the integration procedure is capable of reproducing the onset of an instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, parametric studies a...
Efficient Dual Domain Decoding of Linear Block Codes Using Genetic Algorithms
Directory of Open Access Journals (Sweden)
Ahmed Azouaoui
2012-01-01
Full Text Available A computationally efficient algorithm for decoding block codes is developed using a genetic algorithm (GA). The proposed algorithm uses the dual code, in contrast to the existing genetic decoders in the literature that use the code itself. Hence, this new approach reduces the complexity of decoding high-rate codes. We simulated our algorithm over various transmission channels. The performance of this algorithm is investigated and compared with competing decoding algorithms, including those of Maini and Shakeel. The results show that the proposed algorithm gives large gains over the Chase-2 decoding algorithm and reaches the performance of OSD-3 for some quadratic residue (QR) codes. Further, we define a new crossover operator that exploits domain-specific information and compare it with uniform and two-point crossover. The complexity of this algorithm is also discussed and compared to other algorithms.
Development of a relativistic Particle In Cell code PARTDYN for linear accelerator beam transport
Phadte, D.; Patidar, C. B.; Pal, M. K.
2017-04-01
A relativistic Particle In Cell (PIC) code, PARTDYN, is developed for beam dynamics simulation of z-continuous and bunched beams. The code is implemented in MATLAB using its MEX functionality, which allows both ease of development and performance similar to that of a compiled language like C. The beam dynamics calculations carried out by the code are compared with analytical results and with other well-developed codes like PARMELA and BEAMPATH. The effect of a finite number of simulation particles on the emittance growth of intense beams has been studied. Corrections to the RF cavity field expressions were incorporated in the code so that the fields could be calculated correctly. The deviations of the beam dynamics results between PARTDYN and BEAMPATH for a cavity driven in zero-mode are discussed. Beam dynamics studies of the Low Energy Beam Transport (LEBT) using PARTDYN are presented.
Investigation of Non-linear Chirp Coding for Improved Second Harmonic Pulse Compression.
Arif, Muhammad; Ali, Muhammad Asim; Shaikh, Muhammad Mujtaba; Freear, Steven
2017-08-01
Non-linear frequency-modulated (NLFM) chirp coding was investigated to improve the pulse compression of the second harmonic chirp signal by reducing the range side lobe level. The problem of spectral overlap between the fundamental component and second harmonic component (SHC) was also investigated. Therefore, two methods were proposed: method I for the non-overlap condition and method II with the pulse inversion technique for the overlap harmonic condition. In both methods, the performance of the NLFM chirp was compared with that of the reference LFM chirp signals. Experiments were performed using a 2.25 MHz transducer mounted coaxially at a distance of 5 cm with a 1 mm hydrophone in a water tank, and the peak negative pressure of 300 kPa was set at the receiver. Both simulations and experimental results revealed that the peak side lobe level (PSL) of the compressed SHC of the NLFM chirp was improved by at least 13 dB in method I and 5 dB in method II when compared with the PSL of LFM chirps. Similarly, the integrated side lobe level (ISL) of the compressed SHC of the NLFM chirp was improved by at least 8 dB when compared with the ISL of LFM chirps. In both methods, the axial main lobe width of the compressed NLFM chirp was comparable to that of the LFM signals. The signal-to-noise ratio of the SHC of NLFM was improved by as much as 0.8 dB, when compared with the SHC of the LFM signal having the same energy level. The results also revealed the robustness of the NLFM chirp under a frequency-dependent attenuation of 0.5 dB/cm·MHz up to a penetration depth of 5 cm and a Doppler shift up to 12 kHz. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
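For orientation, matched-filter pulse compression of a plain LFM chirp (the reference waveform the NLFM design above is compared against) can be illustrated as follows; all waveform parameters here are illustrative, not those of the experiment.

```python
import numpy as np

fs = 20e6            # sampling rate (Hz), illustrative value
T = 10e-6            # chirp duration (s)
f0, f1 = 1e6, 3e6    # start/stop frequencies of the LFM sweep (Hz)

t = np.arange(int(T * fs)) / fs
k = (f1 - f0) / T                                   # linear sweep rate
chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))

# Simulated echo: the chirp delayed by 100 samples, zero elsewhere
echo = np.concatenate([np.zeros(100), chirp, np.zeros(100)])

# Matched filter = correlation with the transmitted chirp; the long pulse
# compresses to a sharp peak at the true delay
compressed = np.correlate(echo, chirp, mode="valid")
peak = int(np.argmax(np.abs(compressed)))           # expected at lag 100
```

The side lobes around that compressed peak are exactly what the paper's NLFM coding and pulse-inversion methods are designed to suppress in the second harmonic component.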
Cho, Sun-Joo; Brown-Schmidt, Sarah; Lee, Woo-Yeol
2018-02-07
As a method to ascertain person and item effects in psycholinguistics, a generalized linear mixed-effects model (GLMM) with crossed random effects has met limitations in handling serial dependence across persons and items. This paper presents an autoregressive GLMM with crossed random effects that accounts for variability in lag effects across persons and items. The model is shown to be applicable to intensive binary time-series eye-tracking data when researchers are interested in detecting experimental condition effects while controlling for previous responses. In addition, a simulation study shows that ignoring lag effects can lead to biased estimates and underestimated standard errors for the experimental condition effects.
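A minimal simulation of the kind of serially dependent binary series such a model targets, using an autoregressive logistic data-generating process; the coefficients and function name are illustrative, not taken from the paper.

```python
import math
import random

def simulate_lagged_binary(n, beta0=-0.5, beta_lag=1.2, seed=1):
    """Simulate a binary series where the log-odds of y[t] depend on the
    previous response y[t-1] (an AR(1) logistic model, illustrative only)."""
    rng = random.Random(seed)
    y = [0]
    for _ in range(n - 1):
        eta = beta0 + beta_lag * y[-1]          # lag effect enters the linear predictor
        p = 1.0 / (1.0 + math.exp(-eta))        # inverse-logit link
        y.append(1 if rng.random() < p else 0)
    return y
```

With a positive lag coefficient, the empirical probability of a 1 following a 1 exceeds that following a 0; fitting an ordinary GLMM to such data while omitting the lag term is exactly the misspecification the paper's simulation study examines.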
National Research Council Canada - National Science Library
Yablonovitch, Eli
2000-01-01
.... The equipment purchased under this grant has permitted UCLA to purchase a number of broad-band optical components, including especially some unique code division multiplexing filters that permitted...
Experimental evaluation of a non-linear coded excitation method for contrast imaging
Borsboom, Jerome; Chin, Chien Ting; de Jong, N.
2004-01-01
Previously, we have shown that for a single bubble, using chirps as the excitation signal improves both the linear and the non-linear response. Computer simulations of randomly distributed contrast agent bubbles show an increase of 10–13 dB in response when comparing pulse excitations with chirp
Kondo, Yoshihisa; Yomo, Hiroyuki; Yamaguchi, Shinji; Davis, Peter; Miura, Ryu; Obana, Sadao; Sampei, Seiichi
This paper proposes multipoint-to-multipoint (MPtoMP) real-time broadcast transmission using network coding for ad-hoc networks like video game networks. We aim to achieve highly reliable MPtoMP broadcasting using IEEE 802.11 media access control (MAC), which does not include a retransmission mechanism. When each node detects packets from the other nodes in a sequence, the correctly detected packets are network-encoded, and the encoded packet is broadcast in the next sequence as a piggy-back for its native packet. To prevent an increase of overhead in each packet due to piggy-back packet transmission, the network coding vector for each node is exchanged among all nodes in the negotiation phase. Each user keeps using the same coding vector generated in the negotiation phase, and only coding information that represents which user signals are included in the network coding process is transmitted along with the piggy-back packet. Our simulation results show that the proposed method can provide higher reliability than other schemes using multipoint relay (MPR) or redundant transmissions such as forward error correction (FEC). We also implement the proposed method in a wireless testbed and show that it achieves high reliability in a real-world environment with a practical degree of complexity when installed on current wireless devices.
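The piggy-back recovery idea rests on simple XOR network coding: a node that heard two packets rebroadcasts their XOR, and any receiver missing exactly one of them can recover it. A minimal sketch (packet contents are arbitrary placeholders):

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings, the basic network-coding operation."""
    return bytes(x ^ y for x, y in zip(a, b))

# A relay heard p1 and p2 and broadcasts their XOR as a piggy-back packet
p1 = b"\x10\x20\x30"
p2 = b"\xff\x00\x0f"
coded = xor_bytes(p1, p2)

# A receiver that got p1 but lost p2 recovers p2 from the coded packet
recovered = xor_bytes(coded, p1)
```

This is why the scheme needs no MAC-layer retransmissions: one coded broadcast simultaneously repairs different losses at different receivers.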
Rasouli, Zolaikha; Ghavami, Raouf
2016-08-01
Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives as flavor enhancers. The current study is, for the first time, devoted to the application of partial least squares (PLS-1), partial robust M-regression (PRM) and feed-forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous detection of binary and ternary mixtures of VA, VAI and SIA, using data extracted directly from UV spectra with overlapping peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration range of 0.61-20.99 [LOD = 0.12], 0.67-23.19 [LOD = 0.13] and 0.73-25.12 [LOD = 0.15] μg mL⁻¹ for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs with seven and three levels for each factor for the binary and ternary mixtures, respectively. The results of this study reveal that the PLS-1 and PRM methods are similar in terms of predictive ability for each binary mixture. The resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied for the description of spectra from the acid-base titration systems of each individual compound, i.e. the resolution of the complex overlapping spectra, as well as to interpret the extracted spectral and concentration profiles of any pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species. Subsequently, their corresponding dissociation constants were derived. Finally, FFNNs were used to detect the active compounds in real and spiked water samples.
International Nuclear Information System (INIS)
Vanderhaegen, D.; Deutsch, C.
1988-01-01
Scattering effects are considered for radiative transfer within randomly distributed binary mixtures in one dimension. The most general formalism is developed within the framework of the invariant imbedding method. The length L of the random sample thus appears as a new variable. One transmission coefficient T(L) suffices to specify the intensities locally. By analogy with the homogeneous situation, one introduces an effective opacity σ_eff through ⟨T(L)⟩ = (1 + σ_eff L)⁻¹, fulfilling σ_eff ≤ ⟨σ⟩ = p₀σ₀ + p₁σ₁ (0 and 1 respectively refer to the components involved in the mixture). Equality is reached when L → 0, ∞. Otherwise, σ_eff experiences a deep transmission window.
International Nuclear Information System (INIS)
Toprak, A. Emre; Guelay, F. Guelten; Ruge, Peter
2008-01-01
Determination of the seismic performance of existing buildings has become one of the key concepts in structural analysis after recent earthquakes (the Izmit and Duzce earthquakes in 1999, the Kobe earthquake in 1995 and the Northridge earthquake in 1994). Considering the need for precise assessment tools to determine the seismic performance level, most earthquake-prone countries try to include performance-based assessment in their seismic codes. Recently, the Turkish Earthquake Code 2007 (TEC'07), which was put into effect in March 2007, also introduced linear and non-linear assessment procedures to be applied prior to building retrofitting. In this paper, a comparative study is performed on the code-based seismic assessment of RC buildings with linear static methods of analysis, selecting an existing RC building. The basic principles of the seismic performance evaluation procedures for existing RC buildings according to Eurocode 8 and TEC'07 are outlined and compared. The procedure is then applied to a real case-study building exposed to the 1998 Adana-Ceyhan earthquake in Turkey, a seismic action of Ms = 6.3 with a maximum ground acceleration of 0.28 g. It is a six-storey RC residential building with a total height of 14.65 m, composed of orthogonal frames, symmetrical in the y direction, without any significant structural irregularities. The rectangular plan dimensions are 16.40 m × 7.80 m = 127.90 m², with five spans in the x and two spans in the y direction. It was reported that the building had been moderately damaged during the 1998 earthquake, and a retrofitting process with shear walls added to the system was suggested by the authorities. The computations show that linear analysis approaches using either Eurocode 8 or TEC'07 independently produce similar performance levels of collapse for the critical storey of the structure. The computed base shear value according to Eurocode is much higher
Directory of Open Access Journals (Sweden)
Sandeep J. Joseph
2010-03-01
Full Text Available Multi-class cancer classification based on microarray data is described. A generalized output-coding scheme based on One Versus One (OVO) combined with a Latent Variable Model (LVM) is used. Results from the proposed One Versus One (OVO) output-coding strategy are compared with the results obtained from the generalized One Versus All (OVA) method, and their efficiencies for multi-class tumor classification have been studied. This comparative study was done using two microarray gene expression data sets: the Global Cancer Map (GCM) dataset and the brain cancer (BC) dataset. Primary feature selection was based on fold change and penalized t-statistics. Evaluation was conducted with varying feature numbers. The OVO coding strategy worked quite well with the BC data, while both OVO and OVA results seemed to be similar for the GCM data. The selection of output-coding methods for combining binary classifiers for multi-class tumor classification depends on the number of tumor types considered, the discrepancies between the tumor samples used for training, as well as the heterogeneity of expression within the cancer subtypes used as training data.
Validation of favor code linear elastic fracture solutions for finite-length flaw geometries
International Nuclear Information System (INIS)
Dickson, T.L.; Keeney, J.A.; Bryson, J.W.
1995-01-01
One of the current tasks within the US Nuclear Regulatory Commission (NRC)-funded Heavy Section Steel Technology (HSST) Program at Oak Ridge National Laboratory (ORNL) is the continuing development of the FAVOR (Fracture Analysis of Vessels: Oak Ridge) computer code. FAVOR performs structural integrity analyses of embrittled nuclear reactor pressure vessels (RPVs) with stainless steel cladding to evaluate compliance with the applicable regulatory criteria. Since the initial release of FAVOR, the HSST program has continued to enhance the capabilities of the FAVOR code. ABAQUS, a nuclear quality assurance certified (NQA-1) general multidimensional finite element code with fracture mechanics capabilities, was used to generate a database of stress-intensity-factor influence coefficients (SIFICs) for a range of axially and circumferentially oriented semielliptical inner-surface flaw geometries applicable to RPVs with an internal radius (Ri) to wall thickness (w) ratio of 10. This database of SIFICs has been incorporated into a development version of FAVOR, providing it with the capability to perform deterministic and probabilistic fracture analyses of RPVs subjected to transients, such as pressurized thermal shock (PTS), for various flaw geometries. This paper discusses the SIFIC database, comparisons with other investigators, and some of the benchmark verification problem specifications and solutions.
International Nuclear Information System (INIS)
Cummins, J.D.
1965-02-01
With several white noise sources, the various transmission paths of a linear multivariable system may be determined simultaneously. This memorandum considers the restrictions on pseudo-random two-state sequences needed to effect simultaneous identification of several transmission paths and the consequential rejection of cross-coupled signals in linear multivariable systems. The conditions for simultaneous identification are established by an example, which shows that the required integration time is large, i.e. it tends to infinity, as it does when white noise sources are used. (author)
Directory of Open Access Journals (Sweden)
Ririn Kusumawati
2016-05-01
In the classification stage, using a Hidden Markov Model, the voice signal is analyzed to find the model with the maximum likelihood of being recognized. The parameters obtained from this modeling are compared with the speech of Arabic speakers. In the classification tests, Hidden Markov Models with Linear Predictive Coding feature extraction achieved an average accuracy of 78.6% for test data with a sampling frequency of 8,000 Hz, 80.2% for test data at 22,050 Hz, and 79% for test data at 44,100 Hz.
Joseph, Sandeep J; Robbins, Kelly R; Zhang, Wensheng; Rekaya, Romdhane
2010-03-10
Multi-class cancer classification based on microarray data is described. A generalized output-coding scheme based on One Versus One (OVO) combined with a Latent Variable Model (LVM) is used. Results from the proposed One Versus One (OVO) output-coding strategy are compared with the results obtained from the generalized One Versus All (OVA) method, and the efficiency of using them for multi-class tumor classification has been studied. This comparative study was done using two microarray gene expression datasets: the Global Cancer Map (GCM) dataset and a brain cancer (BC) dataset. Primary feature selection was based on fold change and penalized t-statistics. Evaluation was conducted with varying feature numbers. The OVO coding strategy worked quite well with the BC data, while OVO and OVA results were similar for the GCM data. The selection of output-coding methods for combining binary classifiers for multi-class tumor classification depends on the number of tumor types considered, the discrepancies between the tumor samples used for training, and the heterogeneity of expression within the cancer subtypes used as training data.
Czech Academy of Sciences Publication Activity Database
Dragoescu, D.; Teodorescu, M.; Barhala, A.; Wichterle, Ivan
2003-01-01
Roč. 68, č. 7 (2003), s. 1175-1192 ISSN 0010-0765 R&D Projects: GA ČR GA104/03/1555 Institutional research plan: CEZ:AV0Z4072921 Keywords : group contribution model * thermodynamics * chloroalkanes-linear ketones Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.041, year: 2003
Byzantine Broadcast in Point-to-Point Networks using Local Linear Coding
2011-11-03
Perfectly-secure MPC with linear communication complexity. In TCC, 2008. [3] P. Berman, J. A. Garay, and K. J. Perry. Bit optimal distributed... problem: Alice and Bob are each given inputs x and y, respectively, and need to check whether x = y by communicating with each other. It has been proved that at least L bits must be communicated between Alice and Bob, in the worst case, to solve 2-party equality for L-bit values. It then follows that even if the
Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video
Li, Honggui
2017-09-01
This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which depends on a deep learning neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, a deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of video into the inputs of the patch clustering algorithm. Finally, simulation experiments show that the proposed methods can simultaneously achieve a higher compression ratio and peak signal-to-noise ratio than state-of-the-art methods in low-bitrate transmission.
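The zero-reconstruction-error property of a linear autoencoder can be illustrated with a minimal sketch (pure Python, an invented 2-D "patch"; the encoder/decoder pair here is simply an invertible matrix and its inverse, not the paper's learned DLA weights):

```python
# A linear autoencoder whose decoder exactly inverts its encoder reconstructs
# patches with zero error -- something a saturating nonlinear autoencoder
# cannot guarantee. Matrix and patch values below are invented.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

W_enc = [[2.0, 1.0],            # invertible 2x2 "encoder"
         [1.0, 1.0]]
W_dec = [[1.0, -1.0],           # its inverse (det = 1) acts as the decoder
         [-1.0, 2.0]]

patch = [3.5, -0.25]
code = matvec(W_enc, patch)     # latent representation
recon = matvec(W_dec, code)     # exact reconstruction

error = sum((p - r) ** 2 for p, r in zip(patch, recon))
print(error)  # 0.0
```

The same argument extends to any full-rank linear map: the decoder can be chosen as its (pseudo)inverse, so reconstruction is exact whenever the latent dimension is not smaller than the effective rank of the data.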
Lee, Dongyul; Lee, Chaewoo
2014-01-01
The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances overall system throughput compared to an existing algorithm.
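A toy brute-force version of the layer-to-MCS assignment conveys the shape of the optimization (all rates, user shares, utilities, and the frame budget below are invented for illustration; the paper formulates the real problem as an ILP for 802.16m):

```python
from itertools import product

# Assign one MCS per SVC layer so total airtime fits a frame budget while
# the delivered utility across users is maximized. A faster MCS uses fewer
# symbols but fewer users can decode it -- the core tradeoff of the paper.
mcs_rate = {"QPSK": 1.0, "16QAM": 2.0, "64QAM": 3.0}    # bits/symbol (toy)
user_share = {"QPSK": 1.0, "16QAM": 0.6, "64QAM": 0.3}  # fraction decoding each MCS
layer_bits = [100, 60]        # base layer, enhancement layer (toy sizes)
layer_value = [10.0, 4.0]     # utility of delivering each layer
frame_budget = 120            # symbols available in one frame

best_choice, best_utility = None, -1.0
for choice in product(mcs_rate, repeat=len(layer_bits)):
    airtime = sum(b / mcs_rate[m] for b, m in zip(layer_bits, choice))
    if airtime > frame_budget:
        continue                       # violates the time-resource constraint
    utility = sum(v * user_share[m] for v, m in zip(layer_value, choice))
    if utility > best_utility:
        best_choice, best_utility = choice, utility

print(best_choice, best_utility)   # ('QPSK', '64QAM') 11.2
```

Here the robust MCS protects the base layer for everyone while the enhancement layer rides a fast MCS to fit the budget; a real instance would hand the same structure to an ILP solver.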
Collett, David
2002-01-01
INTRODUCTION Some Examples The Scope of this Book Use of Statistical Software STATISTICAL INFERENCE FOR BINARY DATA The Binomial Distribution Inference about the Success Probability Comparison of Two Proportions Comparison of Two or More Proportions MODELS FOR BINARY AND BINOMIAL DATA Statistical Modelling Linear Models Methods of Estimation Fitting Linear Models to Binomial Data Models for Binomial Response Data The Linear Logistic Model Fitting the Linear Logistic Model to Binomial Data Goodness of Fit of a Linear Logistic Model Comparing Linear Logistic Models Linear Trend in Proportions Comparing Stimulus-Response Relationships Non-Convergence and Overfitting Some other Goodness of Fit Statistics Strategy for Model Selection Predicting a Binary Response Probability BIOASSAY AND SOME OTHER APPLICATIONS The Tolerance Distribution Estimating an Effective Dose Relative Potency Natural Response Non-Linear Logistic Regression Models Applications of the Complementary Log-Log Model MODEL CHECKING Definition of Re...
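The book's central model, the linear logistic fit to binomial data, can be sketched with plain maximum-likelihood gradient ascent (the dose-response counts below are invented for illustration):

```python
import math

# Fit logit(p) = a + b*x to binomial data (x, n trials, r successes) by
# maximizing the binomial log-likelihood with plain gradient ascent.
data = [(1.0, 10, 1), (2.0, 10, 3), (3.0, 10, 6), (4.0, 10, 9)]

a, b = 0.0, 0.0
for _ in range(5000):
    ga = gb = 0.0
    for x, n, r in data:
        p = 1.0 / (1.0 + math.exp(-(a + b * x)))
        ga += r - n * p          # d logL / da
        gb += (r - n * p) * x    # d logL / db
    a += 0.01 * ga
    b += 0.01 * gb

def p_at(x):
    return 1.0 / (1.0 + math.exp(-(a + b * x)))

print(round(b, 2), round(p_at(2.5), 2))   # positive slope; p(2.5) near 0.5
```

Statistical packages fit the same model by iteratively reweighted least squares, but the score equations being solved are exactly the gradients above.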
Kodo: An Open and Research Oriented Network Coding Library
DEFF Research Database (Denmark)
Pedersen, Morten Videbæk; Heide, Janus; Fitzek, Frank
2011-01-01
We consider the problem of efficient decoding of a random linear code over a finite field. In particular we are interested in the case where the code is random, relatively sparse, and use the binary finite field as an example. The goal is to decode the data using fewer operations to potentially a...
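The core decoding step such a library accelerates can be sketched as Gauss-Jordan elimination over the binary field (a toy sketch, not Kodo's actual API; single-bit symbols and hand-picked independent coding vectors):

```python
# Decoding a random linear code over GF(2): each received row pairs a coding
# vector with the coded bit it produced; once k independent rows arrive,
# elimination recovers the k source bits.

def gf2_solve(rows, k):
    """Gauss-Jordan elimination over GF(2); each row is [v_0..v_{k-1}, b]."""
    rows = [r[:] for r in rows]
    for col in range(k):
        pivot = next(i for i in range(col, len(rows)) if rows[i][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[col])]
    return [rows[i][k] for i in range(k)]

source = [1, 0, 1, 1]                      # data to recover
vectors = [[1, 1, 0, 0], [0, 1, 1, 0],     # four independent coding vectors
           [0, 0, 1, 1], [0, 0, 0, 1]]
rows = [v + [sum(a * s for a, s in zip(v, source)) % 2] for v in vectors]

print(gf2_solve(rows, 4))   # [1, 0, 1, 1] -- the source bits
```

Sparse codes reduce the number of XOR operations per row, which is precisely the cost the abstract targets.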
International Nuclear Information System (INIS)
Tayal, M.
1987-01-01
Structures often operate at elevated temperatures. Temperature calculations are needed so that the design can accommodate thermally induced stresses and material changes. A finite element computer code called FEAT has been developed to calculate temperatures in solids of arbitrary shapes. FEAT solves the classical equation for steady-state conduction of heat. The solution is obtained for two-dimensional (plane or axisymmetric) or for three-dimensional problems. Gap elements are used to simulate interfaces between neighbouring surfaces. The code can model: conduction; internal generation of heat; prescribed convection to a heat sink; prescribed temperatures at boundaries; prescribed heat fluxes on some surfaces; and temperature dependence of material properties like thermal conductivity. The user has the option of specifying the detailed variation of thermal conductivity with temperature. For convenience to the nuclear fuel industry, the user can also opt for pre-coded values of thermal conductivity, which are obtained from the MATPRO data base (sponsored by the U.S. Nuclear Regulatory Commission). The finite element method makes FEAT versatile and enables it to accurately accommodate complex geometries. The optional link to MATPRO makes it convenient for the nuclear fuel industry to use FEAT, without loss of generality. Special numerical techniques make the code inexpensive to run for the type of material non-linearities often encountered in the analysis of nuclear fuel. The code, however, is general, and can be used for other components of the reactor, or even for non-nuclear systems. The predictions of FEAT have been compared against several analytical solutions. The agreement is usually better than 5%. Thermocouple measurements show that the FEAT predictions are consistent with measured changes in temperatures in simulated pressure tubes. FEAT was also found to predict well the axial variations in temperatures in the end-pellets (UO2) of two fuel elements irradiated
Li, Peng; Redden, David T
2015-04-23
A small number of clusters and large variation in cluster sizes commonly exist in cluster-randomized trials (CRTs) and are often the critical factors affecting the validity and efficiency of statistical analyses. F tests are commonly used in the generalized linear mixed model (GLMM) to test intervention effects in CRTs. The most challenging issue for the approximate Wald F test is the estimation of the denominator degrees of freedom (DDF). Some DDF approximation methods have been proposed, but their small-sample performance in analysing binary outcomes in CRTs with few heterogeneous clusters is not well studied. The small-sample performances of five DDF approximations for the F test are compared and contrasted under CRT frameworks with simulations. Specifically, we illustrate how the intraclass correlation (ICC), sample size, and the variation of cluster sizes affect the type I error and statistical power when different DDF approximation methods in GLMM are used to test intervention effects in CRTs with binary outcomes. The results are also illustrated using a real CRT dataset. Our simulation results suggest that the Between-Within method maintains the nominal type I error rates even when the total number of clusters is as low as 10 and is robust to the variation of the cluster sizes. The Residual and Containment methods have inflated type I error rates when the cluster number is small (<30), and the inflation becomes more severe with increased variation in cluster sizes. In contrast, the Satterthwaite and Kenward-Roger methods can provide tests with very conservative type I error rates when the total cluster number is small (<30), and the conservativeness becomes more severe as variation in cluster sizes increases. Our simulations also suggest that the Between-Within method is statistically more powerful than the Satterthwaite or Kenward-Roger method in analysing CRTs with heterogeneous cluster sizes, especially when the cluster number is small. We conclude that the
Non-linear Fokker-Planck code study of high ion temperature plasma in JT-60U
International Nuclear Information System (INIS)
Yamagiwa, M.; Ishida, S.; Koga, J.
1997-01-01
A non-linear Fokker-Planck code is applied to the study of a JT-60U hot ion plasma in which the experimentally measured carbon impurity temperature reached up to 45 keV with 90 keV deuterium beam injection. A non-Maxwellian deuteron distribution function is obtained numerically, and the deuteron bulk temperature, which has not been determined experimentally, is evaluated from the slope of the energy spectrum. It is found that the deuteron bulk temperature can exceed the carbon temperature, indicating that the impurity temperature measurement does not lead to overestimation of the ion temperature. The deuteron effective temperature based on the average energy is, however, found to be almost the same as the carbon temperature. The DD fusion reactivity is also around the value given by a Maxwellian distribution with its temperature equal to the carbon temperature. Consequently, the carbon temperature may be regarded as an equivalent ion temperature. (author)
International Nuclear Information System (INIS)
Vadlamani, Srinath; Kruger, Scott; Austin, Travis
2008-01-01
Extended magnetohydrodynamic (MHD) codes are used to model the large, slow-growing instabilities that are projected to limit the performance of the International Thermonuclear Experimental Reactor (ITER). The multiscale nature of the extended MHD equations requires an implicit approach. The current linear solvers needed for the implicit algorithm scale poorly because the resultant matrices are so ill-conditioned. A new solver is needed, especially one that scales to the petascale. The most successful scalable parallel processor solvers to date are multigrid solvers. Applying multigrid techniques to a set of equations whose fundamental modes are dispersive waves is a promising solution to CEMM problems. For Phase 1, we implemented multigrid preconditioners from the HYPRE project of the Center for Applied Scientific Computing at LLNL, via PETSc of the DOE SciDAC TOPS, for the real matrix systems of the extended MHD code NIMROD, which is one of the primary modeling codes of the OFES-funded Center for Extended Magnetohydrodynamic Modeling (CEMM) SciDAC. We successfully implemented the multigrid solvers on a fusion test problem that allows for real matrix systems, and in the process learned about the details of NIMROD data structures and the difficulties of inverting NIMROD operators. The further success of this project will allow for efficient usage of future petascale computers at the National Leadership Facilities: Oak Ridge National Laboratory, Argonne National Laboratory, and the National Energy Research Scientific Computing Center. The project will be a collaborative effort between computational plasma physicists and applied mathematicians at Tech-X Corporation, applied mathematicians at Front Range Scientific Computations, Inc. (who are collaborators on the HYPRE project), and other computational plasma physicists involved with the CEMM project.
Binary Stochastic Representations for Large Multi-class Classification
Gerald, Thomas
2017-10-23
Classification with a large number of classes is a key problem in machine learning and corresponds to many real-world applications like tagging of images or textual documents in social networks. While one-vs-all methods usually reach top performance in this context, these approaches suffer from a high inference complexity, linear w.r.t. the number of categories. Different models based on the notion of binary codes have been proposed to overcome this limitation, achieving sublinear inference complexity. But they need to decide a priori which binary code to associate with which category before learning, using more or less complex heuristics. We propose a new end-to-end model which aims at simultaneously learning to associate binary codes with categories and learning to map inputs to binary codes. This approach, called Deep Stochastic Neural Codes (DSNC), keeps the sublinear inference complexity but does not need any a priori tuning. Experimental results on different datasets show the effectiveness of the approach w.r.t. baseline methods.
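The binary-code idea the abstract builds on can be sketched as follows (a hypothetical four-class example with hand-assigned two-bit codes, not the learned codes of DSNC): each category receives a codeword, one binary predictor is trained per bit, and inference decodes the predicted bits to the nearest codeword.

```python
# Four classes encoded in ceil(log2(4)) = 2 bits: inference runs one
# predictor per bit rather than one scorer per class (one-vs-all style).
codes = {"cat": (0, 0), "dog": (0, 1), "car": (1, 0), "bus": (1, 1)}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(predicted_bits):
    # Naive nearest-codeword search for clarity; practical systems use
    # faster decoding so the whole pipeline stays sublinear in #classes.
    return min(codes, key=lambda c: hamming(codes[c], predicted_bits))

print(decode((0, 1)))   # "dog" -- exact codeword match
print(decode((1, 1)))   # "bus"
```

With error-correcting (longer-than-minimal) codes, a predictor may flip a bit and the nearest-codeword decode still recovers the right class; choosing those codes well is exactly the a priori step DSNC learns away.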
Polynomial weights and code constructions
DEFF Research Database (Denmark)
Massey, J; Costello, D; Justesen, Jørn
1973-01-01
polynomial included. This fundamental property is then used as the key to a variety of code constructions including 1) a simplified derivation of the binary Reed-Muller codes and, for any primepgreater than 2, a new extensive class ofp-ary "Reed-Muller codes," 2) a new class of "repeated-root" cyclic codes...... that are subcodes of the binary Reed-Muller codes and can be very simply instrumented, 3) a new class of constacyclic codes that are subcodes of thep-ary "Reed-Muller codes," 4) two new classes of binary convolutional codes with large "free distance" derived from known binary cyclic codes, 5) two new classes...... of long constraint length binary convolutional codes derived from2^r-ary Reed-Solomon codes, and 6) a new class ofq-ary "repeated-root" constacyclic codes with an algebraic decoding algorithm....
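As a concrete instance of the binary Reed-Muller codes the abstract mentions, the first-order code RM(1, m) consists of the evaluation vectors of affine Boolean functions; a short sketch enumerates RM(1, 3) and its weight distribution:

```python
from itertools import product

# RM(1, m): codewords are evaluations of a0 + a1*x1 + ... + am*xm (mod 2)
# over all 2^m points. For m = 3 this gives a [8, 4, 4] binary code.
m = 3
points = list(product([0, 1], repeat=m))            # the 2^m evaluation points

codewords = set()
for coeffs in product([0, 1], repeat=m + 1):        # coefficients a0..am
    a0, rest = coeffs[0], coeffs[1:]
    word = tuple((a0 + sum(a * x for a, x in zip(rest, p))) % 2
                 for p in points)
    codewords.add(word)

weights = sorted({sum(w) for w in codewords})
print(len(codewords), weights)   # 16 [0, 4, 8]
```

The nonzero codewords all have weight 2^(m-1) or 2^m, so the minimum distance 4 matches the classical RM(1, 3) parameters.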
Introduction to generalized linear models
Dobson, Annette J
2008-01-01
Introduction Background Scope Notation Distributions Related to the Normal Distribution Quadratic Forms Estimation Model Fitting Introduction Examples Some Principles of Statistical Modeling Notation and Coding for Explanatory Variables Exponential Family and Generalized Linear Models Introduction Exponential Family of Distributions Properties of Distributions in the Exponential Family Generalized Linear Models Examples Estimation Introduction Example: Failure Times for Pressure Vessels Maximum Likelihood Estimation Poisson Regression Example Inference Introduction Sampling Distribution for Score Statistics Taylor Series Approximations Sampling Distribution for MLEs Log-Likelihood Ratio Statistic Sampling Distribution for the Deviance Hypothesis Testing Normal Linear Models Introduction Basic Results Multiple Linear Regression Analysis of Variance Analysis of Covariance General Linear Models Binary Variables and Logistic Regression Probability Distributions ...
Trzaskuś-Żak, Beata; Żak, Andrzej
2013-09-01
This paper presents a method of binary linear programming for the selection of customers to whom a rebate will be offered. In return for the rebate, the customer undertakes payment of its debt to the mine by the deadline specified. In this way, the company is expected to achieve the required rate of collection of receivables. This, of course, will be at the expense of reduced revenue, which can be made up for by increased sales. Customer selection was done in order to keep the overall cost to the mine of the offered rebates as low as possible: where: KcR - total cost of rebates granted by the mine; kj - cost of granting the rebate to the jth customer; xj - decision variables; j = 1, …, n - particular customers. The calculations were performed with the Solver tool (Excel). The cost of rebates was calculated from the formula: kj = ΔPj - Kk(j), where: ΔPj - difference in revenues from customer j; Kk(j) - cost of the so-called trade credit with regard to customer j. The cost of the trade credit was calculated from the formula: where: r - interest rate on the bank loan, %; ts - collection time for the receivable in days (e.g. t1 = 30, t2 = 45, …, t12 = 360); Ns - value of the receivable at collection date ts. This paper presents the general model of binary linear programming for managing receivables by granting rebates. The model, in its general form, aims at: minimising the objective function: with the restrictions: and: xj ∈ {0, 1}, where: Ntji - value of the timely payments of customer j in the ith month of the period analysed; Nnji - value of the overdue receivables of customer j in the ith month of the period analysed; q - the assumed minimum percentage of timely payments collected; Ni - summarised value of all receivables in month i; m - the number of months in the period analysed. The general model was applied to the example of the operating Mine X. Furthermore, the study has been extended through the presentation of a binary
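For a toy instance, the binary program can be solved by enumeration (all customer figures below are invented; real instances would use a solver, as the paper does with Excel's Solver):

```python
from itertools import product

# Choose which customers get a rebate (x_j = 1) so the timely-collection
# share reaches the threshold q at minimum total rebate cost sum(k_j * x_j).
k = [5.0, 9.0, 2.0]             # cost of granting customer j the rebate
overdue = [100.0, 200.0, 50.0]  # receivables that become timely if granted
timely = 400.0                  # payments already collected on time
total = 800.0                   # value of all receivables
q = 0.60                        # required timely-collection share

best_x, best_cost = None, float("inf")
for x in product([0, 1], repeat=len(k)):
    collected = timely + sum(o for o, xi in zip(overdue, x) if xi)
    cost = sum(kj for kj, xi in zip(k, x) if xi)
    if collected / total >= q and cost < best_cost:  # feasible and cheaper
        best_x, best_cost = x, cost

print(best_x, best_cost)   # (1, 0, 0) 5.0
```

Here granting only customer 1 lifts collections to 500/800 = 62.5% at the lowest cost; enumeration is exponential in the number of customers, which is why the paper formulates the problem for a linear-programming solver instead.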
DEFF Research Database (Denmark)
Tömösközi, Máté; Fitzek, Frank; Roetter, Daniel Enrique Lucani
2014-01-01
. This metric captures the elapsed time between (network) encoding RTP packets and completely decoding the packets in-order on the receiver side. Our solutions are implemented and evaluated on a point-to-point link between a Raspberry Pi device and a network (de)coding enabled software running on a regular PC...
1984-04-01
...improvement in the efficiency with which these linear systems are solved will directly improve the performance of the integrator. Fortunately, the ... used in stiff-ODE codes; we believe that this may improve the efficiency of these codes as well. The outline of the remainder of this paper is as ... Hölder continuous with exponent p at ... and (c) ... has weak order at least 1+p ... if F is Hölder continuous with exponent p at ...
International Nuclear Information System (INIS)
Huandong, Chen; Xiaoying, Zhang
2015-01-01
Highlights: • Combining equations to obtain a more stable and faster-converging solution. • Taking account of non-linear conduction in fuel rods. • Validating the code against the COBRA code. • Applying the code to the small reactor “MUTSU” and comparing results with its design conditions. - Abstract: For the purpose of thermal-hydraulic analysis of small nuclear reactors, a sub-channel code with improved convergence has been developed based on the homogeneous flow model. A combined lateral momentum equation, coupling the continuity and axial momentum equations, has been used to substitute for the original lateral momentum equation. The Gauss iteration method has been adopted to solve the Kirchhoff-transformed equation of non-linear heat conduction in the fuel rod, with temperature-dependent conductivity taken into account. The code has been validated using experimental data from the NUPEC PWR Sub-channel and Bundle Tests (PSBT) and then applied to the “MUTSU” reactor. Results show that the code can predict the experimental data with acceptable accuracy and has the ability to analyze small PWR reactors.
National Research Council Canada - National Science Library
Lynch, Christopher S; Landis, Chad
2006-01-01
.... The code has been used to conduct simulations of geometries in which the field distribution is inhomogeneous and results in local concentrations, such as for interdigitated electrodes, for cofired...
Division Unit for Binary Integer Decimals
DEFF Research Database (Denmark)
Lang, Tomas; Nannarelli, Alberto
2009-01-01
In this work, we present a radix-10 division unit that is based on the digit-recurrence algorithm and implements binary encodings (binary integer decimal or BID) for significands. Recent decimal division designs are all based on the binary coded decimal (BCD) encoding. We adapt the radix-10 digit...
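The BID-versus-BCD distinction can be sketched in a few lines (a schematic round trip using Python's decimal module, not the hardware unit's data path):

```python
from decimal import Decimal

# BID stores a decimal number as (sign, decimal exponent, significand held
# as a plain binary integer), whereas BCD packs one decimal digit per 4 bits.

def to_bid(d: Decimal):
    sign, digits, exp = d.as_tuple()
    coeff = int("".join(map(str, digits)))   # significand as a binary integer
    return sign, coeff, exp

def from_bid(sign, coeff, exp):
    return Decimal(coeff).scaleb(exp) * (-1 if sign else 1)

s = to_bid(Decimal("-12.34"))
print(s)              # (1, 1234, -2); 1234 is held in binary as 0b10011010010
print(from_bid(*s))   # -12.34
```

Because the significand is a single binary integer, decimal division on BID operands reduces to binary integer arithmetic plus decimal exponent handling, which is the setting the digit-recurrence unit above targets.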
Energy Technology Data Exchange (ETDEWEB)
Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario ' Virgen de las Nieves' , Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain)], E-mail: mvilches@ugr.es; Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario ' Carlos Haya' , Avda. Carlos Haya, s/n, E-29010 Malaga (Spain); Guerrero, R. [Servicio de Radiofisica, Hospital Universitario ' San Cecilio' , Avda. Dr. Oloriz, 16, E-18012 Granada (Spain); Anguiano, M.; Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)
2007-09-21
When a therapeutic electron linear accelerator is simulated using a Monte Carlo (MC) code, the tuning of the initial spectra and the renormalization of dose (e.g., to maximum axial dose) constitute a common practice. As a result, very similar depth dose curves are obtained for different MC codes. However, if renormalization is turned off, the results obtained with the various codes disagree noticeably. The aim of this work is to investigate in detail the reasons of this disagreement. We have found that the observed differences are due to non-negligible differences in the angular scattering of the electron beam in very thin slabs of dense material (primary foil) and thick slabs of very low density material (air). To gain insight, the effects of the angular scattering models considered in various MC codes on the dose distribution in a water phantom are discussed using very simple geometrical configurations for the LINAC. The MC codes PENELOPE 2003, PENELOPE 2005, GEANT4, GEANT3, EGSnrc and MCNPX have been used.
Chang, Chau-Lyan
2003-01-01
During the past two decades, our understanding of laminar-turbulent transition flow physics has advanced significantly owing, in large part, to NASA program support such as the National Aerospace Plane (NASP), High-Speed Civil Transport (HSCT), and Advanced Subsonic Technology (AST) programs. Experimental, theoretical, and computational efforts on various issues such as receptivity and the linear and nonlinear evolution of instability waves have all broadened our knowledge base for this intricate flow phenomenon. Despite all these advances, transition prediction remains a nontrivial task for engineers due to the lack of a widely available, robust, and efficient prediction tool. The design and development of the LASTRAC code is aimed at providing one such engineering tool that is easy to use and yet capable of dealing with a broad range of transition-related issues. LASTRAC was written from scratch based on state-of-the-art numerical methods for stability analysis and modern software technologies. At low fidelity, it allows users to perform linear stability analysis and N-factor transition correlation for a broad range of flow regimes and configurations by using either the linear stability theory (LST) or the linear parabolized stability equations (LPSE) method. At high fidelity, users may use nonlinear PSE to track finite-amplitude disturbances until the skin-friction rise. Coupled with the built-in receptivity model that is currently under development, the nonlinear PSE method offers a synergistic approach to predict transition onset for a given disturbance environment based on first principles. This paper describes the governing equations, numerical methods, code development, and case studies for the current release of LASTRAC. Practical applications of LASTRAC are demonstrated for linear stability calculations, N-factor transition correlation, non-linear breakdown simulations, and control of stationary crossflow instability in supersonic swept wing boundary
DEFF Research Database (Denmark)
Tömösközi, Máté; Fitzek, Frank; Roetter, Daniel Enrique Lucani
2015-01-01
decoding of the packets on the receiver side while playing out the video recording contained in the payload. Our solutions are implemented and evaluated on serially connected Raspberry Pi devices and a network (de)coding enabled software running on a regular PC. We find that the recoding relays work...
Full-Diversity Space-Time Error Correcting Codes with Low-Complexity Receivers
Directory of Open Access Journals (Sweden)
Hassan MohamadSayed
2011-01-01
We propose an explicit construction of full-diversity space-time block codes under the constraint of an error-correction capability. Furthermore, these codes are constructed so as to be suitable for serial concatenation with an outer linear forward error correcting (FEC) code. We apply the binary rank criterion, and we use the threaded layering technique and an inner linear FEC code to define a space-time error-correcting code. When serially concatenated with an outer linear FEC code, a product code can be built at the receiver, and adapted iterative receiver structures can be applied. An optimized hybrid structure mixing MMSE turbo equalization and turbo product code decoding is proposed. It yields reduced complexity and enhanced performance compared to previously existing structures.
Energy Technology Data Exchange (ETDEWEB)
Watts, H.A.
1977-06-01
Numerical methods (which are based on orthogonal Householder transformations) and computer codes for solving linear least-squares problems are described. Over-determined, under-determined, and equality-constrained least-squares problems are examined. Brief instructions for using the codes are provided in the form of computer listings of the introductory comments from the codes. Furthermore, sample programs illustrate usage of the codes and demonstrate their performance on various test problems.
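The Householder approach such codes are built on can be sketched for the over-determined case (a minimal pure-Python QR least-squares solver; no pivoting or rank handling, unlike production codes):

```python
import math

# Solve min ||Ax - b|| by triangularizing the augmented matrix [A | b] with
# Householder reflections, then back-substituting on the triangular part.

def householder_lstsq(A, b):
    m, n = len(A), len(A[0])
    R = [row[:] + [bi] for row, bi in zip(A, b)]       # augment with b
    for k in range(n):
        # Reflector zeroing column k below the diagonal.
        x = [R[i][k] for i in range(k, m)]
        alpha = -math.copysign(math.sqrt(sum(v * v for v in x)), x[0])
        v = x[:]
        v[0] -= alpha
        vnorm2 = sum(t * t for t in v)
        if vnorm2 == 0.0:
            continue                                    # column already zero
        for j in range(k, n + 1):                       # apply I - 2vv^T/||v||^2
            s = sum(v[i - k] * R[i][j] for i in range(k, m))
            for i in range(k, m):
                R[i][j] -= 2.0 * s * v[i - k] / vnorm2
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                      # back-substitution
        x[i] = (R[i][n] - sum(R[i][j] * x[j]
                              for j in range(i + 1, n))) / R[i][i]
    return x

# Fit y = c0 + c1*t through three points lying exactly on y = 1 + 2t.
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [1.0, 3.0, 5.0]
print(householder_lstsq(A, b))   # approximately [1.0, 2.0]
```

Orthogonal transformations leave the residual norm unchanged, which is why this route is numerically preferable to forming the normal equations.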
Generalized concatenated quantum codes
International Nuclear Information System (INIS)
Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei
2009-01-01
We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematical way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.
Studies of Gas Disks in Binary Systems
de Val-Borro, Miguel
There are over 300 exoplanets detected through radial velocity surveys and photometric studies showing a tremendous variety of masses, compositions and orbital parameters. Understanding the way these planets formed and evolved within the circumstellar disks they were initially embedded in is a crucial issue. In the first part of this thesis we study the physical interaction between a gaseous protoplanetary disk and an embedded planet using numerical simulations. In order to trust the results from simulations it is important to compare different methods. However, the standard test problems for hydrodynamic codes differ considerably from the case of a protoplanetary disk interacting with an embedded planet. We have carried out a code comparison in which the problem of a massive planet in a protoplanetary disk was studied with various numerical schemes. We compare the surface density, potential vorticity and azimuthally averaged density profiles at several times. There is overall good agreement between our codes for Neptune and Jupiter-sized planets. We performed simulations for each planet in an inviscid disk and including physical viscosity. The surface density profiles agree within about 5% for the grid-based schemes while the particle codes have less resolution in the low density regions and weaker spiral wakes. In Paper II, we study hydrodynamical instabilities in disks with planets. Vortices are generated close to the gap in our numerical models in agreement with the linear modal analysis. The vortices exert strong perturbations on the planet as they move along the gap and can change its migration rate. In addition, disk viscosity can be modified by the presence of vortices. The last part of this thesis studies the mass transfer in symbiotic binaries and close T Tauri binary systems. Our simulations of gravitationally focused wind accretion in binary systems show the formation of stream flows and enhanced accretion rates onto the compact component.
Shore, S N; van den Heuvel, EPJ
1994-01-01
This volume contains lecture notes presented at the 22nd Advanced Course of the Swiss Society for Astrophysics and Astronomy. The contributors deal with symbiotic stars, cataclysmic variables, massive binaries and X-ray binaries, in an attempt to provide a better understanding of stellar evolution.
Directory of Open Access Journals (Sweden)
M.A. Cremasco
2003-06-01
The simulated moving bed (SMB) is potentially an economical method for the separation and purification of natural products because it is a continuous process and can achieve higher productivity, higher product recovery, and higher purity than batch chromatographic processes. Despite the advantages of SMB, one of the challenges is to specify its zone flow rates and switching time. In this case it is possible to use standing wave analysis. In this method, in a binary system, when certain concentration waves are confined to specific zones, high product purity and yield can be assured. Appropriate zone flow rates, zone lengths and step time are chosen to achieve standing waves. In this study the effects of selectivity on yield, throughput, solvent consumption, port switching time, and product purity for a binary system are analyzed. The results show that for a given selectivity the maximum throughput decreases with increasing yield, while solvent consumption and port switching time increase with increasing yield. To achieve the same purity and yield, a system with higher selectivity has a higher throughput and lower solvent consumption.
International Nuclear Information System (INIS)
Aspinall, J.
1982-01-01
A computational method was developed which alleviates the need for lengthy parametric scans as part of a design process. The method makes use of a least-squares algorithm to find the optimal value of a parameter vector, where optimality is defined in terms of a utility function prescribed by the user. The placement of the vertical field coils of a torsatron is such a non-linear problem.
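The abstract gives no implementation details; as a minimal illustration of the least-squares idea (fitting a hypothetical two-parameter linear model via the normal equations, not the coil-placement problem itself):

```python
# Minimal sketch (not the report's actual code): ordinary least squares
# via the normal equations for a two-parameter model y = p0 + p1*x,
# illustrating how a direct fit replaces a lengthy parametric scan.

def fit_line(xs, ys):
    """Return (p0, p1) minimizing sum((p0 + p1*x - y)^2)."""
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx          # determinant of the 2x2 normal matrix
    p1 = (n * sxy - sx * sy) / det   # slope
    p0 = (sy - p1 * sx) / n          # intercept
    return p0, p1
```

For a real parameter vector of higher dimension one would solve the full normal equations or use an iterative scheme such as Gauss-Newton.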
International Nuclear Information System (INIS)
Mar'yanov, B.M.; Zarubin, A.G.; Shumar, S.V.
2003-01-01
A method is proposed for the computer processing of the differential potentiometric titration curve of a binary mixture of heterovalent ions using precipitation reactions. The method is based on the transformation of the titration curve into segment-line characteristics, whose parameters (within the accuracy of the least-squares method) determine the sequence of the equivalence points and the solubility products of the resulting precipitates. The method is applied to the titration of Ag(I)-Cd(II), Hg(II)-Te(IV), and Cd(II)-Te(IV) mixtures by a sodium diethyldithiocarbamate solution with membrane sulfide and glassy carbon indicator electrodes. For 4 to 11 mg of the analyte in 50 ml of the solution, RSD varies from 1 to 9%.
Error Correcting Codes ...
Indian Academy of Sciences (India)
the reading of data from memory is the receiving process. Protecting data in computer memories was one of the earliest applications of Hamming codes. We now describe the clever scheme invented by Hamming in 1948. To keep things simple, we describe the binary length-7 Hamming code. Encoding in the Hamming Code.
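The binary [7,4] Hamming code mentioned above can be sketched directly. This is the standard construction with parity bits at (1-based) positions 1, 2 and 4, each covering the positions whose binary index contains that power of two; the syndrome then equals the position of a single flipped bit:

```python
def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p4, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(c):
    """Correct at most one flipped bit and return the 4 decoded data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4   # 1-based position of the error, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Flipping any single bit of a codeword and re-decoding recovers the original data, which is exactly the memory-protection use case the article describes.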
Compressing Binary Decision Diagrams
DEFF Research Database (Denmark)
Hansen, Esben Rune; Satti, Srinivasa Rao; Tiedemann, Peter
2008-01-01
The paper introduces a new technique for compressing Binary Decision Diagrams in those cases where random access is not required. Using this technique, compression and decompression can be done in linear time in the size of the BDD, and compression will in many cases reduce the size of the BDD to 1-2 bits per node. Empirical results for our compression technique are presented, including comparisons with previously introduced techniques, showing that the new technique dominates on all tested instances.
Methodology for bus layout for topological quantum error correcting codes
Energy Technology Data Exchange (ETDEWEB)
Wosnitzka, Martin; Pedrocchi, Fabio L.; DiVincenzo, David P. [RWTH Aachen University, JARA Institute for Quantum Information, Aachen (Germany)
2016-12-15
Most quantum computing architectures can be realized as two-dimensional lattices of qubits that interact with each other. We take transmon qubits and transmission line resonators as promising candidates for qubits and couplers; we use them as basic building elements of a quantum code. We then propose a simple framework to determine the optimal experimental layout to realize quantum codes. We show that this engineering optimization problem can be reduced to the solution of standard binary linear programs. While solving such programs is an NP-hard problem, we propose a way to find scalable optimal architectures that require solving the linear program for a restricted number of qubits and couplers. We apply our methods to two celebrated quantum codes, namely the surface code and the Fibonacci code. (orig.)
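The paper's actual layout programs are not given in the abstract; as an illustration of the problem class it reduces to, here is a toy 0-1 linear program (hypothetical costs and constraints) solved by exhaustive search, which is feasible only for the small restricted instances the authors exploit:

```python
from itertools import product

def solve_binary_lp(costs, constraints):
    """Exhaustively solve: minimize sum(c_i * x_i) over x in {0,1}^n,
    subject to each constraint (a, op, b) meaning sum(a_i * x_i) op b,
    with op in {'<=', '>='}.  Brute force, so only for small n."""
    n = len(costs)
    best, best_x = None, None
    for x in product((0, 1), repeat=n):
        ok = all(
            (sum(ai * xi for ai, xi in zip(a, x)) <= b) if op == '<='
            else (sum(ai * xi for ai, xi in zip(a, x)) >= b)
            for a, op, b in constraints)
        if ok:
            cost = sum(c * xi for c, xi in zip(costs, x))
            if best is None or cost < best:
                best, best_x = cost, x
    return best, best_x
```

A production solver would hand the same model to an ILP package; the exhaustive version only shows what "binary linear program" means here.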
Carlet, Claude; Mesnager, Sihem; Tang, Chunming; Qi, Yanfeng
2017-01-01
Linear complementary pairs (LCP) of codes play an important role in armoring implementations against side-channel attacks and fault injection attacks. One of the most common ways to construct LCP of codes is to use Euclidean linear complementary dual (LCD) codes. In this paper, we first introduce the concept of linear codes with $\sigma$ complementary dual ($\sigma$-LCD), which includes known Euclidean LCD codes, Hermitian LCD codes, and Galois LCD codes. Like Euclidean LCD codes, $\sigma$-LCD ...
Cho, Sun-Joo; Goodwin, Amanda P
2016-04-01
When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected from complex data structures, such as multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal designs, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data having such complexity. Results from this application provide deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.
Energy Technology Data Exchange (ETDEWEB)
Delbecq, J.M
1999-07-01
The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R&D division of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)
DEFF Research Database (Denmark)
Høholdt, Tom; Beelen, Peter; Ghorpade, Sudhir Ramakant
2010-01-01
We consider a new class of linear codes, called affine Grassmann codes. These can be viewed as a variant of generalized Reed-Muller codes and are closely related to Grassmann codes. We determine the length, dimension, and the minimum distance of any affine Grassmann code. Moreover, we show that af...
International Nuclear Information System (INIS)
Corral B, J. R.
2015-01-01
Humans should avoid exposure to radiation because the consequences are harmful to health. Although there are different sources of radiation, those generated by medical devices are usually of great interest, since people who attend hospitals are exposed in one way or another to ionizing radiation. It is therefore important to conduct studies on the radiation levels generated in hospitals as a result of the use of medical equipment. Different methods exist to determine the exposure rate of a radioactive facility, including radiation detectors and computational methods. This thesis uses the computational method. The program MCNP5 was used to determine the radiation exposure rate in the radiotherapy room of the Cancer Center of the ABC Hospital in Mexico City. In applying the computational method, the thicknesses of the shields were first calculated using variables such as: 1) distance from the shield to the source; 2) desired weekly equivalent dose; 3) weekly total dose equivalent emitted by the equipment; 4) occupation and use factors. Once the thicknesses were obtained, the bunker was modeled using the mentioned program. The program uses the Monte Carlo code to probabilistically determine the phenomena of interaction of radiation with the shield that occur during X-ray emission from the linear accelerator. The results of the computational analysis were compared with those obtained experimentally with the detection method, which required the use of a Geiger-Muller counter; the linear accelerator was programmed with an energy of 19 MV and 500 monitor units, with the detector positioned at the corresponding boundary. (Author)
Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.
Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang
2017-11-01
Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features into binary codes, the original Euclidean distance is approximated via the Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to directly preserve the manifold structure by hashing. In particular, one first needs to build the local linear embedding in the original feature space, and then quantize such an embedding to binary codes. Such two-step coding is problematic and suboptimal. Besides, the off-line learning is extremely time and memory consuming, since it needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locality linear embedding hashing (DLLH), which well addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationship of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, show the superior performance of the proposed DLLH over state-of-the-art approaches.
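The retrieval step shared by all such hashing methods, approximating similarity by Hamming distance between binary codes, is simple to sketch (a generic illustration, not the DLLH algorithm itself):

```python
# Binary codes are stored as integers; Hamming distance is a popcount
# of the XOR, which is why hashing makes large-scale search fast.

def hamming(a, b):
    """Hamming distance between two binary codes stored as ints."""
    return bin(a ^ b).count("1")

def search(query, database):
    """Return database indices ranked by Hamming distance to the query."""
    return sorted(range(len(database)), key=lambda i: hamming(query, database[i]))
```

Learning the codes so that this ranking respects manifold rather than Euclidean structure is precisely the problem the paper addresses.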
Sommariva, C.; Nardon, E.; Beyer, P.; Hoelzl, M.; Huijsmans, G. T. A.; van Vugt, D.; Contributors, JET
2018-01-01
In order to contribute to the understanding of runaway electron generation mechanisms during tokamak disruptions, a test particle tracker is introduced in the JOREK 3D non-linear MHD code, able to compute both full and guiding center relativistic orbits. Tests of the module show good conservation of the invariants of motion and consistency between full orbit and guiding center solutions. A first application is presented where test electron confinement properties are investigated in a massive gas injection-triggered disruption simulation in JET-like geometry. It is found that electron populations initialised before the thermal quench (TQ) are typically not fully deconfined in spite of the global stochasticity of the magnetic field during the TQ. The fraction of ‘survivors’ decreases from a few tens down to a few tenths of percent as the electron energy varies from 1 keV to 10 MeV. The underlying mechanism for electron ‘survival’ is the prompt reformation of closed magnetic surfaces at the plasma core and, to a smaller extent, the subsequent reappearance of a magnetic surface at the edge. It is also found that electrons are less deconfined at 10 MeV than at 1 MeV, which appears consistent with a phase averaging effect due to orbit shifts at high energy.
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
International Nuclear Information System (INIS)
Smith, L.M.; Hochstedler, R.D.
1997-01-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code)
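The first acceleration technique named above, replacing linear searches with binary versions, can be illustrated in Python rather than the original FORTRAN (the `bisect` module provides the binary search; the table must be sorted):

```python
import bisect

def linear_search(table, key):
    """O(n) scan for the first entry >= key, as in the original code."""
    for i, v in enumerate(table):
        if v >= key:
            return i
    return len(table)

def binary_search(table, key):
    """O(log n) replacement; requires `table` sorted ascending."""
    return bisect.bisect_left(table, key)
```

Both return the same index on sorted data; in an inner loop executed billions of times by a Monte Carlo transport code, the logarithmic version is where speed-up factors like those quoted come from.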
Evolution of Supermassive Black-Hole Binaries
Milosavljevic, M.; Merritt, D.
2000-10-01
Binary supermassive black holes are expected to form in galactic nuclei following galaxy mergers. We report large-scale N-body simulations using the Aarseth/Spurzem parallel code NBODY6++ of the formation and evolution of such binaries. Initial conditions are drawn from a tree-code simulation of the merger of two spherical galaxies with ρ ~ r^{-2} density cusps (Cruz & Merritt, AAS Poster). Once the two black holes form a bound pair at the center of the merged galaxies, the evolution is continued using NBODY6++ at much higher resolution. Its exact force calculations generate faithful binary dynamics until the onset of gravity wave-dominated dissipation. We discuss the binary hardening rate, the amplitude of the binary's wandering, and the evolution of the structure of the galactic stellar nucleus.
EVOLUTION OF THE BINARY FRACTION IN DENSE STELLAR SYSTEMS
International Nuclear Information System (INIS)
Fregeau, John M.; Ivanova, Natalia; Rasio, Frederic A.
2009-01-01
Using our recently improved Monte Carlo evolution code, we study the evolution of the binary fraction in globular clusters. In agreement with previous N-body simulations, we find generally that the hard binary fraction in the core tends to increase with time over a range of initial cluster central densities for initial binary fractions ≲90%. The dominant processes driving the evolution of the core binary fraction are mass segregation of binaries into the cluster core and preferential destruction of binaries there. On a global scale, these effects and the preferential tidal stripping of single stars tend to roughly balance, leading to overall cluster binary fractions that are roughly constant with time. Our findings suggest that the current hard binary fraction near the half-mass radius is a good indicator of the hard primordial binary fraction. However, the relationship between the true binary fraction and the fraction of main-sequence stars in binaries (which is typically what observers measure) is nonlinear and rather complicated. We also consider the importance of soft binaries, which not only modify the evolution of the binary fraction, but can also drastically change the evolution of the cluster as a whole. Finally, we briefly describe the recent addition of single and binary stellar evolution to our cluster evolution code.
DEFF Research Database (Denmark)
Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip
2012-01-01
This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback into LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
Rate-adaptive BCH codes for distributed source coding
DEFF Research Database (Denmark)
Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren
2013-01-01
This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking...
Binary translation using peephole translation rules
Bansal, Sorav; Aiken, Alex
2010-05-04
An efficient binary translator uses peephole translation rules to directly translate executable code from one instruction set to another. In a preferred embodiment, the translation rules are generated using superoptimization techniques that enable the translator to automatically learn translation rules for translating code from the source to target instruction set architecture.
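The rule-driven core of such a translator can be sketched with a hypothetical toy rule set (the instruction mnemonics and the dictionary format below are illustrative, not the patent's actual representation): each rule maps a short source instruction sequence to an equivalent target sequence, and unmatched instructions fall through unchanged.

```python
# Toy peephole translation: hypothetical source/target ISAs.
# Each rule maps a tuple of source instructions to a target tuple.
RULES = {
    ("LOAD r1, x", "ADD r1, 1", "STORE r1, x"): ("INC [x]",),
    ("MOV r1, 0",): ("XOR r1, r1",),
}

def translate(code):
    out, i = [], 0
    while i < len(code):
        for pattern, target in RULES.items():   # a real translator would prefer longest match
            if tuple(code[i:i + len(pattern)]) == pattern:
                out.extend(target)
                i += len(pattern)
                break
        else:
            out.append(code[i])                 # no rule matched: copy through
            i += 1
    return out
```

In the patent's scheme the rule table itself is not hand-written but learned by superoptimization, which is what makes the approach scale.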
Cellular Automata Rules and Linear Numbers
Nayak, Birendra Kumar; Sahoo, Sudhakar; Biswal, Sagarika
2012-01-01
In this paper, linear Cellular Automata (CA) rules are recursively generated using a binary tree rooted at "0". Some mathematical results on linear as well as non-linear CA rules are derived. Integers associated with linear CA rules are defined as linear numbers, and the properties of these linear numbers are studied.
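Linearity of an elementary CA rule means additivity over GF(2), i.e. the rule is an XOR combination of its three neighborhood inputs. This can be checked directly from the Wolfram rule number (a standard characterization; the paper's tree-based generation is a different construction):

```python
def rule_table(n):
    """Wolfram rule n as a function on (left, center, right) bits."""
    return lambda l, c, r: (n >> (l * 4 + c * 2 + r)) & 1

def is_linear(n):
    """A rule is linear (additive over GF(2)) iff
    f(u XOR v) = f(u) XOR f(v) for all neighborhoods u, v
    (taking u = v = 0 also forces f(0,0,0) = 0)."""
    f = rule_table(n)
    triples = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    return all(
        f(u[0] ^ v[0], u[1] ^ v[1], u[2] ^ v[2]) == f(*u) ^ f(*v)
        for u in triples for v in triples)

# The eight additive elementary rules: all XOR combinations of the neighbors.
linear_rules = [n for n in range(256) if is_linear(n)]
```

The enumeration recovers the familiar additive rules such as rule 90 (left XOR right) and rule 150 (XOR of all three neighbors).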
Optimized reversible binary-coded decimal adders
DEFF Research Database (Denmark)
Thomsen, Michael Kirkedal; Glück, Robert
2008-01-01
... their design. The optimized 1-decimal BCD full-adder, a 13 × 13 reversible logic circuit, is faster and has lower circuit cost and fewer garbage bits. It can be used to build a fast reversible m-decimal BCD full-adder that has a delay of only m + 17 low-power reversible CMOS gates. For a 32-decimal (128-bit) ... in reversible logic design by drastically reducing the number of garbage bits. Specialized designs benefit from support by reversible logic synthesis. All circuit components required for optimizing the original design could also be synthesized successfully by an implementation of an existing synthesis algorithm...
Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation
Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie
2009-01-01
In this work, we study the performance of structured Low-Density Parity-Check (LDPC) codes together with bandwidth-efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher-order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We will compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
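The demapping step described above can be sketched with a generic max-log LLR demapper, one common low-complexity choice (this is an illustration for an arbitrary labelled constellation, not necessarily the demappers compared in the paper; noise scaling is omitted):

```python
def maxlog_llrs(y, constellation, bits_per_symbol):
    """Max-log demapper: for a received complex sample y and a labelled
    constellation {bit_label: complex_symbol}, return one LLR per bit.
    Positive LLR favours bit value 0.  Noise-variance scaling omitted."""
    llrs = []
    for b in range(bits_per_symbol):
        # nearest symbol whose b-th label bit is 0, and nearest with bit 1
        d0 = min(abs(y - s) ** 2 for lab, s in constellation.items()
                 if not (lab >> b) & 1)
        d1 = min(abs(y - s) ** 2 for lab, s in constellation.items()
                 if (lab >> b) & 1)
        llrs.append(d1 - d0)
    return llrs
```

These per-bit reliabilities are exactly what a binary LDPC decoder consumes as channel input.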
Energy Technology Data Exchange (ETDEWEB)
Ravishankar, C., Hughes Network Systems, Germantown, MD
1998-05-08
Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term, often used interchangeably with speech coding, is voice coding. This term is more generic in the sense that the ...
Combining binary classifiers to improve tree species discrimination at leaf level
CSIR Research Space (South Africa)
Dastile, X
2012-11-01
Direct 7-class prediction results in high misclassification rates. We therefore construct binary classifiers for all possible binary classification problems and combine them using Error Correcting Output Codes (ECOC) to form a 7-class predictor. ECOC...
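The ECOC combination step can be sketched as follows: each class is assigned a binary codeword, each binary classifier predicts one bit, and the multiclass decision is the class whose codeword is nearest in Hamming distance to the predicted bit vector (a generic illustration with a hypothetical codebook, not the paper's 7-class setup):

```python
def ecoc_predict(binary_preds, codebook):
    """Combine binary classifier outputs via ECOC decoding.
    binary_preds: list of 0/1 outputs, one per binary classifier.
    codebook: {class_label: codeword (list of 0/1, same length)}.
    Returns the class whose codeword is nearest in Hamming distance."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(codebook, key=lambda cls: hamming(binary_preds, codebook[cls]))
```

The error-correcting property comes from codeword separation: a few wrong binary classifiers still leave the true class nearest.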
Optimizing the ATLAS code with different profilers
Kama, S; The ATLAS collaboration
2013-01-01
After the current maintenance period, the LHC will provide higher energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, ATLAS code is composed of approximately 4M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort different profiling tools and techniques are being used. These include well known tools, such as the Valgrind suite and Intel Amplifier; less common tools like PIN, PAPI, and GOODA; as well as techniques such as library interposing. In this talk we will mainly focus on PIN tools and GOODA. PIN is a dynamic binary instrumentation tool which can obtain statistics such as call counts, instruction counts and interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations which has provided the insight necessary to achieve significant performance...
Logistic chaotic maps for binary numbers generations
International Nuclear Information System (INIS)
Kanso, Ali; Smaoui, Nejib
2009-01-01
Two pseudorandom binary sequence generators, based on logistic chaotic maps and intended for stream cipher applications, are proposed. The first is based on a single one-dimensional logistic map, which exhibits random, noise-like properties at certain parameter values, and the second is based on a combination of two logistic maps. The encryption step proposed in both algorithms consists of a simple bitwise XOR operation of the plaintext binary sequence with the keystream binary sequence to produce the ciphertext binary sequence. A threshold function is applied to convert the floating-point iterates into binary form. Experimental results show that the produced sequences possess high linear complexity and very good statistical properties. The systems are put forward for security evaluation by the cryptographic committees.
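The single-map generator and XOR encryption step described above can be sketched as follows (a minimal illustration of the idea; the parameter values and the transient-discard length are assumptions, not the paper's exact choices):

```python
def logistic_bits(x0, r, n, skip=100):
    """Generate n keystream bits by iterating the logistic map
    x <- r*x*(1-x) and thresholding each iterate at 0.5.
    `skip` discards transient iterates before emitting bits."""
    x, bits = x0, []
    for i in range(skip + n):
        x = r * x * (1.0 - x)
        if i >= skip:
            bits.append(1 if x > 0.5 else 0)
    return bits

def xor_cipher(data_bits, key_bits):
    """Bitwise XOR keystream cipher: the same call encrypts and decrypts."""
    return [d ^ k for d, k in zip(data_bits, key_bits)]
```

Note that such thresholded chaotic generators are proposed for evaluation, not proven secure; floating-point reproducibility across platforms is also a practical concern.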
Bondi-Hoyle-Lyttleton Accretion onto Binaries
Antoni, Andrea; MacLeod, Morgan; Ramírez-Ruiz, Enrico
2018-01-01
Binary stars are not rare. While only close binary stars will eventually interact with one another, even the widest binary systems interact with their gaseous surroundings. The rates of accretion and the gaseous drag forces arising in these interactions are the key to understanding how these systems evolve. This poster examines accretion flows around a binary system moving supersonically through a background gas. We perform three-dimensional hydrodynamic simulations of Bondi-Hoyle-Lyttleton accretion using the adaptive mesh refinement code FLASH. We simulate a range of values of semi-major axis of the orbit relative to the gravitational focusing impact parameter of the pair. On large scales, gas is gravitationally focused by the center-of-mass of the binary, leading to dynamical friction drag and to the accretion of mass and momentum. On smaller scales, the orbital motion imprints itself on the gas. Notably, the magnitude and direction of the forces acting on the binary inherit this orbital dependence. The long-term evolution of the binary is determined by the timescales for accretion, slow down of the center-of-mass, and decay of the orbit. We use our simulations to measure these timescales and to establish a hierarchy between them. In general, our simulations indicate that binaries moving through gaseous media will slow down before the orbit decays.
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
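Two of the best-known Level-1 operations in that collection, AXPY (y ← αx + y) and DOT (inner product), can be illustrated in pure Python (a sketch of what the FORTRAN routines SAXPY/DAXPY and SDOT/DDOT compute, not the library itself):

```python
def axpy(alpha, x, y):
    """BLAS AXPY semantics: return alpha*x + y, elementwise."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def dot(x, y):
    """BLAS DOT semantics: inner product of two vectors."""
    return sum(xi * yi for xi, yi in zip(x, y))
```

The point of BLAS is that these tiny kernels, once tuned per machine in Assembler, make every program built on them portable and fast.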
Eliciting Subjective Probabilities with Binary Lotteries
DEFF Research Database (Denmark)
Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd
2014-01-01
We evaluate a binary lottery procedure for inducing risk neutral behavior in a subjective belief elicitation task. Prior research has shown this procedure to robustly induce risk neutrality when subjects are given a single risk task defined over objective probabilities. Drawing a sample from...... the same subject population, we find evidence that the binary lottery procedure also induces linear utility in a subjective probability elicitation task using the Quadratic Scoring Rule. We also show that the binary lottery procedure can induce direct revelation of subjective probabilities in subjects...
Optimizing ATLAS code with different profilers
Kama, S.; Seuster, R.; Stewart, G. A.; Vitillo, R. A.
2014-06-01
After the current maintenance period, the LHC will provide higher energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, ATLAS code is composed of approximately 6M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort different profiling tools and techniques are being used. These include well known tools, such as the Valgrind suite and Intel Amplifier; less common tools like Pin, PAPI, and GOoDA; as well as techniques such as library interposing. In this paper we will mainly focus on Pin tools and GOoDA. Pin is a dynamic binary instrumentation tool which can obtain statistics such as call counts and instruction counts, and interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations, which has provided the insight necessary to achieve significant performance improvements. Complementing this, GOoDA, an in-house performance tool built in collaboration with Google and based on hardware performance monitoring unit events, is used to identify hot-spots in the code for different types of hardware limitations, such as CPU resources, caches, or memory bandwidth. GOoDA has been used in improvement of the performance of new magnetic field code and identification of potential vectorization targets in several places, such as Runge-Kutta propagation code.
CNN-aware Binary Map for General Semantic Segmentation
Ravanbakhsh, Mahdyar; Mousavi, Hossein; Nabi, Moin; Rastegari, Mohammad; Regazzoni, Carlo
2016-01-01
In this paper we introduce a novel method for general semantic segmentation that can benefit from the general semantics of a Convolutional Neural Network (CNN). Our segmentation produces visually and semantically coherent image segments. We use binary encoding of CNN features to overcome the difficulty of clustering in the high-dimensional CNN feature space. These binary codes are very robust against noise and non-semantic changes in the image. This binary encoding can be embedded into the CNN...
Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao
2018-02-01
A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length, and a corresponding decoding scheme, are proposed. The RA-MLC scheme combines multilevel coded modulation with binary linear block codes at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the result is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information between the different layers, which enhances performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized adaptation over 22 rates without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER = 1E-3.
The Tap code - a code similar to Morse code for communication by tapping
Rafler, Stephan
2013-01-01
A code is presented for fast, easy and efficient communication over channels that allow only two signal types: a single sound (e.g. a knock) or no sound (i.e. silence). This is a true binary code, whereas Morse code is a ternary code and does not work in such situations. Thus the presented code is more universal than Morse and can be used in many more situations. Additionally, it is very tolerant to variations in signal strength or duration. The paper contains various ways in which the code can ...
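For comparison, the classic prison "tap code" (a 5×5 Polybius square with C and K sharing a cell) is one well-known tapping scheme; the sketch below shows that traditional variant, which may differ from the code the paper actually proposes:

```python
# Classic 5x5 Polybius-square tap code; C and K share a cell. Each letter is
# tapped as (row + 1) knocks, a pause, then (column + 1) knocks.
GRID = "ABCDE FGHIJ LMNOP QRSTU VWXYZ".split()

def tap_encode(word: str) -> str:
    """Encode a word as groups of knocks ('.'); ' / ' separates letters."""
    out = []
    for ch in word.upper().replace("K", "C"):
        for r, row in enumerate(GRID):
            if ch in row:
                out.append("." * (r + 1) + " " + "." * (row.index(ch) + 1))
    return " / ".join(out)
```

Note that the pauses between knock groups carry information, which is why such schemes are sensitive to timing, a weakness the paper's more tolerant code is designed to address.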
P.H. Utomo (Putranto); R.H. Makarim (Rusydi)
2017-01-01
A binary puzzle is a Sudoku-like puzzle with values in each cell taken from the set {0, 1}. Let n be an even integer; a solved binary puzzle is an n × n binary array that satisfies the following conditions: (1) no three consecutive ones and
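Since the abstract is truncated, the full rule set is not shown; a checker under the commonly used binary-puzzle rules (no three consecutive equal symbols; equally many zeros and ones in every row and column) might look like this sketch:

```python
def is_solved(grid):
    """Check a candidate n x n binary array against typical binary-puzzle
    rules (assumed here): balanced rows/columns, no three equal in a row."""
    n = len(grid)
    lines = [list(row) for row in grid] + [list(col) for col in zip(*grid)]
    for line in lines:
        if sum(line) != n // 2:          # equally many zeros and ones
            return False
        for i in range(n - 2):           # no three consecutive equal symbols
            if line[i] == line[i + 1] == line[i + 2]:
                return False
    return True
```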
Binary Biometric Representation through Pairwise Adaptive Phase Quantization
Chen, C.; Veldhuis, Raymond N.J.
Extracting binary strings from real-valued biometric templates is a fundamental step in template compression and protection systems, such as fuzzy commitment, fuzzy extractor, secure sketch, and helper data systems. Quantization and coding is the straightforward way to extract binary representations
Eclipsing binaries in open clusters
DEFF Research Database (Denmark)
Southworth, John; Clausen, J.V.
2006-01-01
Stars: fundamental parameters - Stars: binaries: eclipsing - Stars: binaries: spectroscopic - Open clusters and associations: general. Publication date: 5 August
BPASS predictions for binary black hole mergers
Eldridge, J. J.; Stanway, E. R.
2016-11-01
Using the Binary Population and Spectral Synthesis code, BPASS, we have calculated the rates, time-scales and mass distributions for binary black hole (BH) mergers as a function of metallicity. We consider these in the context of the recently reported first Laser Interferometer Gravitational-Wave Observatory (LIGO) event detection. We find that the event has a very low probability of arising from a stellar population with initial metallicity mass fraction above Z = 0.010 (Z ≳ 0.5 Z⊙). Binary BH merger events with the reported masses are most likely in populations below 0.008 (Z ≲ 0.4 Z⊙). Events of this kind can occur at all stellar population ages from 3 Myr up to the age of the Universe, but constitute only 0.1-0.4 per cent of binary BH mergers between metallicities of Z = 0.001 and 0.008. However, at metallicity Z = 10⁻⁴, 26 per cent of binary BH mergers would be expected to have the reported masses. At this metallicity, the progenitor merger times can be close to ≈10 Gyr, and rotationally mixed stars evolving through quasi-homogeneous evolution, due to mass transfer in a binary, dominate the rate. The masses inferred for the BHs in the binary progenitor of GW150914 are amongst the most massive expected at anything but the lowest metallicities in our models. We discuss the implications of our analysis for the electromagnetic follow-up of future LIGO event detections.
Compact binary hashing for music retrieval
Seo, Jin S.
2014-03-01
With the huge volume of music clips available for protection, browsing, and indexing, there is increasing attention to retrieving the information content of music archives. Music-similarity computation is an essential building block for browsing, retrieval, and indexing of digital music archives. In practice, as the number of songs available for searching and indexing increases, the storage cost in retrieval systems becomes a serious problem. This paper deals with the storage problem by extending the supervector concept with binary hashing. We utilize similarity-preserving binary embedding in generating a hash code from the supervector of each music clip. In particular, we compare the performance of various binary hashing methods for music retrieval tasks on the widely used genre dataset and an in-house singer dataset. Through the evaluation, we find an effective way of generating hash codes for music similarity estimation which improves retrieval performance.
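One common similarity-preserving binary embedding is a random (LSH-style) projection followed by sign thresholding; the paper compares several such hashing methods, and the sketch below (dimensions and seed are illustrative) shows the basic idea together with Hamming-distance matching:

```python
import numpy as np

rng = np.random.default_rng(0)
PROJ = rng.standard_normal((64, 32))  # one fixed random projection, shared by all items

def binary_hash(feature: np.ndarray) -> np.ndarray:
    """Map a 64-dim feature vector (e.g. a supervector) to a 32-bit binary code."""
    return (feature @ PROJ > 0).astype(np.uint8)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two binary codes: the retrieval metric."""
    return int(np.count_nonzero(a != b))
```

Nearby feature vectors tend to fall on the same side of most random hyperplanes, so their codes have small Hamming distance, which is what makes such codes usable as compact retrieval indices.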
Directory of Open Access Journals (Sweden)
Christopher M. Bentz
2014-03-01
We compare optical time domain reflectometry (OTDR) techniques based on conventional single impulses, coding, and linear frequency chirps, concerning their signal-to-noise ratio (SNR) enhancements, by measurements in a passive optical network (PON) with a maximum one-way attenuation of 36.6 dB. A total of six subscribers, each represented by a unique mirror pair with narrow reflection bandwidths, are installed within a distance of 14 m. The spatial resolution of the OTDR set-up is 3.0 m.
International Nuclear Information System (INIS)
Espinosa P, G.; Estrada P, C.E.; Nunez C, A.; Amador G, R.
2001-01-01
The computer code ANESLI-1, developed by the CNSNS and UAM-I, has the main goal of performing stability analysis of nuclear reactors of the BWR type, more specifically the reactors of units U1 and U2 of the CNLV; however, it can also be used for other kinds of applications. Its real-time simulation capability allows the prediction of operational transients and of dynamic steady-state conditions. ANESLI-1 was developed under a modular scheme, which allows its scope to be extended and/or improved. The linear stability analysis predicts the instabilities produced by the density wave phenomenon. (Author)
Binary recursive partitioning: background, methods, and application to psychology.
Merkle, Edgar C; Shaffer, Victoria A
2011-02-01
Binary recursive partitioning (BRP) is a computationally intensive statistical method that can be used in situations where linear models are often used. Instead of imposing many assumptions to arrive at a tractable statistical model, BRP simply seeks to accurately predict a response variable based on values of predictor variables. The method outputs a decision tree depicting the predictor variables that were related to the response variable, along with the nature of the variables' relationships. No significance tests are involved, and the tree's 'goodness' is judged based on its predictive accuracy. In this paper, we describe BRP methods in a detailed manner and illustrate their use in psychological research. We also provide R code for carrying out the methods.
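The core of BRP is a greedy search for the single best binary split; the paper provides R code, but the idea can be sketched in a few lines (a toy single-predictor version for illustration, not the authors' implementation):

```python
def best_split(x, y):
    """Find the threshold t on predictor x that minimizes misclassification
    when each side of the split predicts its majority response class --
    the step that recursive partitioning applies repeatedly."""
    best_t, best_err = None, len(y)
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        if not left or not right:
            continue
        err = sum(1 for side in (left, right) for yi in side
                  if yi != max(set(side), key=side.count))
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err
```

A full BRP tree applies this search recursively to each resulting subset, stopping when further splits no longer improve predictive accuracy.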
Binary palmprint representation for feature template protection
Mu, Meiru; Ruan, Qiuqi; Shao, X.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.
2012-01-01
The major challenge of biometric template protection comes from the intraclass variations of biometric data. The helper data scheme aims to solve this problem by employing the Error Correction Codes (ECC). However, many reported biometric binary features from the same user reach bit error rate (BER)
Directory of Open Access Journals (Sweden)
A.K. Bhunia
2013-04-01
This paper deals with a deterministic inventory model developed for deteriorating items having two separate storage facilities (owned and rented warehouses) due to the limited capacity of the existing storage (owned warehouse), with linear time-dependent demand (increasing) over a fixed finite time horizon. The model is formulated with infinite replenishment, and the successive replenishment cycle lengths are in arithmetic progression. Partially backlogged shortages are allowed. The stocks of the rented warehouse (RW) are transported to the owned warehouse (OW) in a continuous release pattern. For this purpose, the model is formulated as a constrained non-linear mixed integer programming problem. For solving the problem, an advanced genetic algorithm (GA) has been developed. This advanced GA is based on ranking selection, elitism, whole arithmetic crossover, and non-uniform mutation dependent on the age of the population. Our objective is to determine the optimal replenishment number and lot-sizes of the two warehouses (OW and RW) by maximizing the profit function. The model is illustrated with four numerical examples, and sensitivity analyses of the optimal solution are performed with respect to different parameters.
Number of minimum-weight code words in a product code
Miller, R. L.
1978-01-01
Consideration is given to the number of minimum-weight code words in a product code. The code is considered as a tensor product of linear codes over a finite field. Complete theorems and proofs are presented.
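For a small binary linear code, the minimum weight and the number of minimum-weight code words can be found by direct enumeration; the sketch below uses the [7,4] Hamming code as an illustrative generator matrix (the paper treats the general tensor-product case analytically):

```python
from itertools import product

def min_weight_count(G):
    """Enumerate all 2^k codewords of the binary linear code generated by G
    and return (minimum nonzero weight, number of words of that weight)."""
    k = len(G)
    weights = []
    for msg in product([0, 1], repeat=k):
        codeword = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        if any(codeword):
            weights.append(sum(codeword))
    d = min(weights)
    return d, weights.count(d)

# [7,4] Hamming code: minimum distance 3, with seven minimum-weight words.
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
```

Enumeration is only feasible for small k, which is why closed-form results like those in the paper matter for product codes, whose dimension is the product of the component dimensions.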
Binary Arithmetic From Hariot (ca. 1600 A.D.) to the Computer Age.
Glaser, Anton
This history of binary arithmetic begins with details of Thomas Hariot's contribution and includes specific references to Hariot's manuscripts kept at the British Museum. A binary code developed by Sir Francis Bacon is discussed. Briefly mentioned are contributions to binary arithmetic made by Leibniz, Fontenelle, Gauss, Euler, Benzout, Barlow,…
Galois LCD Codes over Finite Fields
Liu, Xiusheng; Fan, Yun; Liu, Hualu
2017-01-01
In this paper, we study complementary dual codes in a more general setting (called Galois LCD codes) by a uniform method. A necessary and sufficient condition for linear codes to be Galois LCD codes is determined, and constacyclic codes that are Galois LCD codes are characterized. Some illustrative examples of constacyclic codes that are Galois LCD MDS codes are provided as well. In particular, we study Hermitian LCD constacyclic codes. Finally, we present a construction of a class of ...
International Nuclear Information System (INIS)
Cardena R, A. R.; Vega R, J. L.; Apaza V, D. G.
2015-10-01
Progress in cancer treatment systems involving heterogeneities of the human body has been hindered by the lack of a suitable experimental test model. The only option is to develop simulated theoretical models that have the same properties at interfaces as human tissues, in order to know the behavior of radiation in its interaction with these materials. In this paper we used the Monte Carlo method, via the PENELOPE code, based on studies for cancer treatment as well as for the calibration of beams and their various interactions in phantoms. This paper also covers the construction, simulation and characterization of an object equivalent to the tissues of the human body with various heterogeneities, which we will later use to control and plan experimentally the doses delivered in treating tumors in radiotherapy. To fulfill this objective we study ionizing radiation and the various processes occurring in its interaction with matter, understanding that to calculate the dose deposited at tissue interfaces (percentage depth dose), aspects such as the deposited energy, irradiation fields, density, thickness, tissue sensitivity and other items must be taken into consideration. (Author)
Eclipsing Binary B-Star Mass Determinations
Townsend, Amanda; Eikenberry, Stephen S.
2016-01-01
B-stars in binary pairs provide a laboratory for key astrophysical measurements of massive stars, including key insights into the formation of compact objects (neutron stars and black holes). Martayan et al. (2004) find 23 Be binary star pairs in NGC 2004 in the Large Magellanic Cloud, five of which are both eclipsing and spectroscopic binaries with archival data from VLT-Giraffe and photometric data from MACHO. By using the Wilson eclipsing binary code (e.g., Wilson, 1971), we can determine preliminary stellar masses of the binary components. We present the first results from this analysis. This study also serves as proof of concept for future observations with the Photonic Synthesis Telescope Array (Eikenberry et al., in prep.) that we are currently building for low-cost, precision spectroscopic observations. With higher resolution and dedicated observing time, we can follow up these Be stars as well as Be/X-ray binaries, for improved mass measurements of neutron stars and black holes and better constraints on their origin and formation.
Sahade, Jorge; Ter Haar, D
1978-01-01
Interacting Binary Stars deals with the development, ideas, and problems in the study of interacting binary stars. The book consolidates the information that is scattered over many publications and papers and gives an account of important discoveries with relevant historical background. Chapters are devoted to the presentation and discussion of the different facets of the field, such as historical account of the development in the field of study of binary stars; the Roche equipotential surfaces; methods and techniques in space astronomy; and enumeration of binary star systems that are studied
Binary Masking & Speech Intelligibility
DEFF Research Database (Denmark)
Boldt, Jesper
The purpose of this thesis is to examine how binary masking can be used to increase intelligibility in situations where hearing impaired listeners have difficulties understanding what is being said. The major part of the experiments carried out in this thesis can be categorized as either experiments under ideal conditions or as experiments under more realistic conditions useful for real-life applications such as hearing aids. In the experiments under ideal conditions, the previously defined ideal binary mask is evaluated using hearing impaired listeners, and a novel binary mask -- the target binary mask -- is introduced. The target binary mask shows the same substantial increase in intelligibility as the ideal binary mask and is proposed as a new reference for binary masking. In the category of real-life applications, two new methods are proposed: a method for estimation of the ideal binary...
Algebraic and stochastic coding theory
Kythe, Dave K
2012-01-01
Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.
DEFF Research Database (Denmark)
Keiding, Hans; Peleg, Bezalel
2006-01-01
effectivity rule is regular if it is the effectivity rule of some regular binary SCR. We characterize completely the family of regular binary effectivity rules. Quite surprisingly, intrinsically defined von Neumann-Morgenstern solutions play an important role in this characterization...
Christova-Zdravkova, C.G.
2005-01-01
Binary crystals are crystals composed of two types of particles having different properties, such as size, mass density, charge, etc. In this thesis several new approaches to making binary crystals of colloidal particles that differ in size, material and charge are reported. We found a variety of crystal
Discriminative Elastic-Net Regularized Linear Regression.
Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen
2017-03-01
In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to its weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations to make the final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods are available at http://www.yongxu.org/lunwen.html.
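The closed-form per-iteration update is the key computational point; stripped of the elastic-net term on singular values, the underlying regularized least-squares projection has the familiar ridge solution, sketched below (variable names are illustrative):

```python
import numpy as np

def ridge_projection(X, Y, lam=0.1):
    """Closed-form regularized least-squares projection matrix
    W = (X^T X + lam * I)^{-1} X^T Y. ENLR's iterations rely on updates of
    this closed-form kind; the elastic-net refinement is omitted here."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
```

Because each iteration reduces to a linear solve of this shape, no inner iterative optimizer is needed, which is what makes the overall algorithm efficient.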
New constructions of MDS codes with complementary duals
Chen, Bocong; Liu, Hongwei
2017-01-01
Linear complementary-dual (LCD for short) codes are linear codes that intersect with their duals trivially. LCD codes have been used in certain communication systems. It is recently found that LCD codes can be applied in cryptography. This application of LCD codes renewed the interest in the construction of LCD codes having a large minimum distance. MDS codes are optimal in the sense that the minimum distance cannot be improved for given length and code size. Constructing LCD MDS codes is thu...
Formation and Evolution of Binary Systems Containing Collapsed Stars
Rappaport, Saul; West, Donald (Technical Monitor)
2003-01-01
This research includes theoretical studies of the formation and evolution of five types of interacting binary systems. Our main focus has been on developing a number of comprehensive population synthesis codes to study the following types of binary systems: (i) cataclysmic variables (#3, #8, #12, #15), (ii) low- and intermediate-mass X-ray binaries (#13, #20, #21), (iii) high-mass X-ray binaries (#14, #17, #22), (iv) recycled binary millisecond pulsars in globular clusters (#5, #10, #11), and (v) planetary nebulae which form in interacting binaries (#6, #9). The numbers in parentheses refer to papers published or in preparation that are listed in this paper. These codes take a new unified approach to population synthesis studies. The first step involves a Monte Carlo selection of the primordial binaries, including the constituent masses, and orbital separations and eccentricities. Next, a variety of analytic methods are used to evolve the primary star to the point where either a dynamical episode of mass transfer to the secondary occurs (the common envelope phase), or the system evolves down an alternate path. If the residual core of the primary is greater than 2.5 solar masses, it will evolve to Fe core collapse and the production of a neutron star and a supernova explosion. In the case of systems involving neutron stars, a kick velocity is chosen randomly from an appropriate distribution and added to the orbital dynamics, which determines the state of the binary system after the supernova explosion. In the third step, all binaries which commence stable mass transfer from the donor star (the original secondary in the binary system) to the compact object are followed with a detailed binary evolution code. Finally, we include all the relevant dynamics of the binary system. For example, in the case of LMXBs, the binary system, with its recoil velocity from the supernova explosion, is followed in time through its path in the Galactic potential. For our globular cluster
Graphical User Interface and Microprocessor Control Enhancement of a Pseudorandom Code Generator
National Research Council Canada - National Science Library
Kos, John
1999-01-01
.... The ability to quickly and easily produce various codes such as maximal length codes, Gold codes, Jet Propulsion Laboratory ranging codes, syncopated codes, and non-linear codes in a laboratory environment is essential...
Rotation invariant deep binary hashing for fast image retrieval
Dai, Lai; Liu, Jianming; Jiang, Aiwen
2017-07-01
In this paper, we study how to compactly represent an image's characteristics for fast image retrieval. We propose supervised rotation-invariant compact discriminative binary descriptors by combining a convolutional neural network with hashing. In the proposed network, binary codes are learned by employing a hidden layer for representing latent concepts that dominate the class labels. A loss function is proposed to minimize the difference between the binary descriptors that describe a reference image and its rotated version. Compared with some other supervised methods, the proposed network does not require pair-wise inputs for binary code learning. Experimental results show that our method is effective and achieves state-of-the-art results on the CIFAR-10 and MNIST datasets.
Sparse Representation Based Binary Hypothesis Model for Hyperspectral Image Classification
Directory of Open Access Journals (Sweden)
Yidong Tang
2016-01-01
The sparse representation based classifier (SRC) and its kernel version (KSRC) have been employed for hyperspectral image (HSI) classification. However, the state-of-the-art SRC often aims at extended surface objects with linear mixture in smooth scenes and assumes that the number of classes is given. Considering small targets with complex backgrounds, a sparse representation based binary hypothesis (SRBBH) model is established in this paper. In this model, a query pixel is represented in two ways, namely by the background dictionary and by the union dictionary. The background dictionary is composed of samples selected from the local dual concentric window centered at the query pixel. Thus, for each pixel the classification issue becomes an adaptive multiclass classification problem, where only the number of desired classes is required. Furthermore, the kernel method is employed to improve the interclass separability. In kernel space, the coding vector is obtained by using the kernel-based orthogonal matching pursuit (KOMP) algorithm. Then the query pixel can be labeled by the characteristics of the coding vectors. Instead of directly using the reconstruction residuals, the different impacts the background dictionary and union dictionary have on reconstruction are used for validation and classification. This enhances the discrimination and hence improves the performance.
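The coding step uses orthogonal matching pursuit; a minimal input-space version of OMP (the paper applies the kernelized variant, KOMP) can be sketched as:

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: select k columns (atoms) of the
    dictionary D that best correlate with the current residual, refitting
    the coefficients by least squares after each selection."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef
```

In the SRBBH setting, the size of the residual obtained with the background dictionary versus the union dictionary is what drives the binary-hypothesis decision.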
National Aeronautics and Space Administration — The data set lists orbital and physical properties for well-observed or suspected binary/multiple minor planets including the Pluto system, compiled from the...
International Nuclear Information System (INIS)
Larsson-Leander, G.
1979-01-01
Studies of close binary stars are being pursued more vigorously than ever, with about 3000 research papers and notes pertaining to the field published during the triennium 1976-1978. Many major advances and spectacular discoveries were made, mostly due to increased observational efficiency and precision, especially in the X-ray, radio, and ultraviolet domains. Progress reports are presented in the following areas: observational techniques, methods of analyzing light curves, observational data, physical data, structure and models of close binaries, statistical investigations, and the origin and evolution of close binaries. Reports from the Coordinates Programs Committee, the Committee for Extra-Terrestrial Observations and the Working Group on RS CVn binaries are included. (Auth./C.F.)
International Nuclear Information System (INIS)
Petrov, D.A.
1986-01-01
Conditions for thermodynamic equilibrium in binary and ternary systems are considered. The main types of binary and ternary phase diagrams are sequentially constructed on the basis of general regularities in the character of the transition from one equilibrium to another. New statements on the directions of equilibrium lines at the triple points of the diagrams and in their isothermal cross-sections are developed. New representations of equilibria in the case of minima and maxima of monovariant curves, and of three-phase equilibrium formation in ternary systems, are introduced.
Binary and Millisecond Pulsars
Lorimer, D. R.
2005-01-01
We review the main properties, demographics and applications of binary and millisecond radio pulsars. Our knowledge of these exciting objects has greatly increased in recent years, mainly due to successful surveys which have brought the known pulsar population to over 1800. There are now 83 binary and millisecond pulsars associated with the disk of our Galaxy, and a further 140 pulsars in 26 of the Galactic globular clusters. Recent highlights include the discovery of the young relativistic b...
Astrophysics of white dwarf binaries
Nelemans, G.A.
2006-01-01
White dwarf binaries are the most common compact binaries in the Universe and are especially important for low-frequency gravitational wave detectors such as LISA. There are a number of open questions about binary evolution and the Galactic population of white dwarf binaries that can be solved using
Evolution of cataclysmic binaries
International Nuclear Information System (INIS)
Paczynski, B.
1981-01-01
Cataclysmic binaries with short orbital periods have low-mass secondary components. Their nuclear time scale is too long to be of evolutionary significance. Angular momentum loss from the binary drives the mass transfer between the two components. As long as the characteristic time scale is long compared with the Kelvin-Helmholtz time scale of the mass-losing secondary, the star remains close to the main sequence, and the binary period decreases with time. If angular momentum loss is due to gravitational radiation, then the mass transfer time scale becomes comparable to the Kelvin-Helmholtz time scale when the secondary's mass decreases to 0.12 M⊙, and the binary period is reduced to 80 minutes. Later, the mass-losing secondary departs from the main sequence and gradually becomes degenerate. Now the orbital period increases with time. The observed lower limit to the orbital periods of hydrogen-rich cataclysmic binaries implies that gravitational radiation is the main driving force for the evolution of these systems. It is shown that binaries emerging from a common envelope phase of evolution are well detached. They have to lose additional angular momentum to become semidetached cataclysmic variables. (author)
Contamination of RR Lyrae stars from Binary Evolution Pulsators
Karczmarek, Paulina; Pietrzyński, Grzegorz; Belczyński, Krzysztof; Stępień, Kazimierz; Wiktorowicz, Grzegorz; Iłkiewicz, Krystian
2016-06-01
Binary Evolution Pulsator (BEP) is an extremely low-mass member of a binary system, which pulsates as a result of a former mass transfer to its companion. BEP mimics RR Lyrae-type pulsations but has different internal structure and evolution history. We present possible evolution channels to produce BEPs, and evaluate the contamination value, i.e. how many objects classified as RR Lyrae stars can be undetected BEPs. In this analysis we use population synthesis code StarTrack.
Linear Algebra and Linear Models
Indian Academy of Sciences (India)
Linear Algebra and Linear Models (2nd Edn) by R P Bapat. Hindustan Book Agency, 1999, pp. xiii+180, Price: Rs. 135/-. Reviewed by Kalyan Das. This monograph provides an introduction to the basic aspects of the theory of linear estimation and that of testing linear hypotheses. The primary objective ...
Energy Technology Data Exchange (ETDEWEB)
Leonard, P.J.T.; Duncan, M.J.
1988-07-01
The production of runaway stars by the dynamical-ejection mechanism in an open star cluster containing 50 percent binaries of equal mass and energy is investigated theoretically by means of numerical simulations using the NBODY5 code of Aarseth (1985). The construction of the models is outlined, and the results are presented graphically and characterized in detail. It is shown that binary-binary collisions capable of producing runaways can occur (via formation and disruption, with some stellar collisions, of hierarchical double binaries) in clusters of relatively low density (e.g., pc-sized clusters of O or B stars). The frequency of binaries in the runaway population is found to vary between 0 and 50 percent, with the majority of runaways being unevolved early-type stars. 38 references.
"Supermassive Black-Hole Binaries in Merging Cusps"
Milosavljevic, M.; Merritt, D.
2000-12-01
We present N-body simulations of the formation and evolution of supermassive black-hole binaries in galactic nuclei. Initial conditions are drawn from a tree-code simulation of the merger of two spherical galaxies containing central point masses and ρ ∝ r⁻² central density cusps. Once the two black holes form a bound pair at the center of the merged galaxies, the evolution is continued using the Aarseth/Spurzem parallel tree code NBODY6++ at much higher resolution. Immediately following the formation of a hard black-hole binary, the density cusp of the merged galaxies is nearly homologous to the cusps in the initial galaxies. However, the central density decreases rapidly as the binary black hole ejects stars which pass near to it, reducing the slope of the cusp from ∝ r⁻² to ∝ r⁻¹. When the distance between the black holes becomes comparable to the average stellar separation in the cusp, the binary begins to wander about the center while engaging in hard encounters with stars on radial orbits, which are ejected at high speed. Ejection induces further shrinking of the binary at a decreasing rate. We discuss the dynamics of black-hole binaries in the limit of large N, appropriate to real galactic nuclei, and discuss the possibility that supermassive black-hole binaries can survive over cosmological times.
Pablos Martin, X; Deltenre, P; Hoonhorst, I; Markessis, E; Rossion, B; Colin, C
2007-12-01
Rhythm perception appears to be non-linear, as human subjects are better at discriminating, categorizing and reproducing rhythms containing binary vs non-binary (e.g. 1:2 vs 1:3) as well as metrical vs non-metrical (e.g. 1:2 vs 1:2.5) interval ratios. This study examined the representation of binary and non-binary interval ratios within the sensory memory, thus yielding a truly sensory, pre-motor, attention-independent neural representation of rhythmical intervals. Five interval ratios, one binary, flanked by four non-binary ones, were compared on the basis of the MMN they evoked when contrasted against a common standard interval. For all five intervals, the larger the contrast, the larger the MMN amplitude. The binary interval evoked a significantly shorter (by at least 23 ms) MMN latency than the other intervals, whereas no latency difference was observed between the four non-binary intervals. These results show that the privileged perceptual status of binary rhythmical intervals is already present in the sensory representations found in echoic memory at an early, automatic, pre-perceptual and pre-motor level. MMN latency can thus be used to study rhythm perception at a truly sensory level, without any contribution from the motor system.
Shilov, Georgi E
1977-01-01
Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.
Orbital motion in pre-main sequence binaries
Energy Technology Data Exchange (ETDEWEB)
Schaefer, G. H. [The CHARA Array of Georgia State University, Mount Wilson Observatory, Mount Wilson, CA 91023 (United States); Prato, L. [Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001 (United States); Simon, M. [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794 (United States); Patience, J., E-mail: schaefer@chara-array.org [Astrophysics Group, School of Physics, University of Exeter, Exeter, EX4 4QL (United Kingdom)
2014-06-01
We present results from our ongoing program to map the visual orbits of pre-main sequence (PMS) binaries in the Taurus star forming region using adaptive optics imaging at the Keck Observatory. We combine our results with measurements reported in the literature to analyze the orbital motion for each binary. We present preliminary orbits for DF Tau, T Tau S, ZZ Tau, and the Pleiades binary HBC 351. Seven additional binaries show curvature in their relative motion. Currently, we can place lower limits on the orbital periods for these systems; full solutions will be possible with more orbital coverage. Five other binaries show motion that is indistinguishable from linear motion. We suspect that these systems are bound and might show curvature with additional measurements in the future. The observations reported herein lay critical groundwork toward the goal of measuring precise masses for low-mass PMS stars.
Directory of Open Access Journals (Sweden)
Joshua A. Faber
2012-07-01
We review the current status of studies of the coalescence of binary neutron star systems. We begin with a discussion of the formation channels of merging binaries and we discuss the most recent theoretical predictions for merger rates. Next, we turn to the quasi-equilibrium formalisms that are used to study binaries prior to the merger phase and to generate initial data for fully dynamical simulations. The quasi-equilibrium approximation has played a key role in developing our understanding of the physics of binary coalescence and, in particular, of the orbital instability processes that can drive binaries to merger at the end of their lifetimes. We then turn to the numerical techniques used in dynamical simulations, including relativistic formalisms, (magneto-)hydrodynamics, gravitational-wave extraction techniques, and nuclear microphysics treatments. This is followed by a summary of the simulations performed across the field to date, including the most recent results from both fully relativistic and microphysically detailed simulations. Finally, we discuss the likely directions for the field as we transition from the first to the second generation of gravitational-wave interferometers and while supercomputers reach the petascale frontier.
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Moruz, Gabriel
2006-01-01
It is well-known that to minimize the number of comparisons a binary search tree should be perfectly balanced. Previous work has shown that a dominating factor in the running time for a search is the number of cache faults performed, and that an appropriate memory layout of a binary search tree can reduce the number of cache faults by several hundred percent. Motivated by the fact that during a search branching to the left or right at a node does not necessarily have the same cost, e.g. because of branch prediction schemes, in this paper we study the class of skewed binary search trees. For all nodes in a skewed binary search tree the ratio between the size of the left subtree and the size of the tree is a fixed constant (a ratio of 1/2 gives perfectly balanced trees). We present an experimental study of various memory layouts of static skewed binary search trees, where each
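The fixed-ratio rule described above can be sketched in a few lines; this is a minimal illustration of the data structure only (the constructor name, the tuple representation, and the default alpha are assumptions, not the authors' layout code):

```python
# Build a skewed BST from sorted keys: the left subtree always holds a fixed
# fraction `alpha` of the keys; alpha = 0.5 recovers a perfectly balanced tree.
def build_skewed(sorted_keys, alpha=0.3):
    if not sorted_keys:
        return None
    i = min(int(alpha * len(sorted_keys)), len(sorted_keys) - 1)
    return (sorted_keys[i],                         # root key
            build_skewed(sorted_keys[:i], alpha),   # smaller keys go left
            build_skewed(sorted_keys[i + 1:], alpha))  # larger keys go right
```

With alpha below 1/2 the left subtree is systematically cheaper to descend into, which is the trade-off the paper's layouts exploit.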
Darkunde, Nitin S.; Patil, Arunkumar R.
2018-01-01
The main aim of this paper is to study $LCD$ codes. Linear codes with complementary dual ($LCD$) are those codes whose intersection with their dual code is $\{0\}$. In this paper we give an alternative proof of Massey's theorem \cite{8}, which is one of the most important characterizations of $LCD$ codes. Let $LCD[n,k]_3$ denote the maximum of possible values of $d$ among $[n,k,d]$ ternary $LCD$ codes. In \cite{4}, the authors have given an upper bound on $LCD[n,k]_2$ and extended th...
Design of convolutional tornado code
Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu
2017-09-01
As a linear block code, the traditional tornado (tTN) code is inefficient in a burst-erasure environment, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and to reduce computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code can provide better packet-loss protection with lower computational complexity than the tTN code.
International Nuclear Information System (INIS)
Tutukov, A.V.; Fedorova, A.V.; Yungel'son, L.R.
1982-01-01
The conditions of mass exchange in close binary systems with component masses less than or equal to one solar mass have been analysed for the case when the system radiates gravitational waves. It has been shown that the mass exchange rate depends in a certain way on the mass ratio of the components and on the mass of the component that fills its inner critical lobe. The comparison of observed periods, masses of contact components, and mass exchange rates of observed cataclysmic binaries has led to the conclusion that the evolution of the close binaries WZ Sge, OY Car, Z Cha, TT Ari, 2A 0311-227, and G 61-29 may be driven by the emission of gravitational waves
International Nuclear Information System (INIS)
Tutukov, A.V.; Fedorova, A.V.; Yungel'son, L.R.
1982-01-01
The circumstances of mass exchange in close binary systems whose components have a mass ≲ 1 M_⊙ are analyzed for the case where the system is losing orbital angular momentum by radiation of gravitational waves. The mass exchange rate will depend on the mass ratio of the components and on the mass of the component that is overfilling its critical Roche lobe. A comparison of the observed orbital periods, masses of the components losing material, and mass exchange rates against the theoretical values for cataclysmic binaries indicates that the evolution of the close binaries WZ Sge, OY Car, Z Cha, TT Ari, 2A 0311-227, and G61-29 may be driven by the emission of gravitational waves.
Binary and Millisecond Pulsars
Directory of Open Access Journals (Sweden)
Lorimer Duncan R.
2008-11-01
We review the main properties, demographics and applications of binary and millisecond radio pulsars. Our knowledge of these exciting objects has greatly increased in recent years, mainly due to successful surveys which have brought the known pulsar population to over 1800. There are now 83 binary and millisecond pulsars associated with the disk of our Galaxy, and a further 140 pulsars in 26 of the Galactic globular clusters. Recent highlights include the discovery of the young relativistic binary system PSR J1906+0746, a rejuvenation in globular cluster pulsar research including growing numbers of pulsars with masses in excess of 1.5 M_⊙, a precise measurement of relativistic spin precession in the double pulsar system and a Galactic millisecond pulsar in an eccentric (e = 0.44) orbit around an unevolved companion.
Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding
Directory of Open Access Journals (Sweden)
Ying Chen
2014-05-01
Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To test the performance of the presented method, local binary patterns (LBP) and the raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbor (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method outperforms the other methods on facial expression recognition tasks.
Vanbeveren, D., Van Rensbergen, W., De Loore, C.
Massive stars are distributed all over the upper part of the Hertzsprung-Russell diagram according to their subsequent phases of stellar evolution from main sequence to supernova. Massive stars may either be single or they may be a component of a close binary. The observed single star/binary frequency is known only in a small part of the Galaxy. Whether this holds for the whole galaxy or for the whole cosmos is questionable and needs many more high quality observations. Massive star evolution depends critically on mass loss by stellar wind and this stellar wind mass loss may change dramatically when stars evolve from one phase to another. We start the book with a critical discussion of observations of the different types of massive stars, observations that are of fundamental importance in relation to stellar evolution, with special emphasis on mass loss by stellar wind. We update our knowledge of the physics that models the structure and evolution of massive single stars and we present new calculations. The conclusions resulting from a comparison between these calculations and observations are then used to study the evolution of massive binaries. This book provides our current knowledge of a great variety of massive binaries, and hence of a great variety of evolutionary phases. A large number of case studies illustrates the existence of these phases. Finally, we present the results of massive star population number synthesis, including the effect of binaries. The results indicate that neglecting them leads to a conclusion which may be far from reality. This book is written for researchers in massive star evolution. We hope that, after reading this book, university-level astrophysics students will become fascinated by the exciting world of the `Brightest Binaries'.
International Nuclear Information System (INIS)
Mikkola, S.
1983-01-01
Gravitational encounters of pairs of binaries have been studied numerically. Various cross-sections have been calculated for the qualitative final results of the interaction and for the energy transfer between the binding energy and the centre-of-mass kinetic energy. The distribution of the kinetic energies resulting from the gravitational collision was found to be virtually independent of the impact velocity in the case of collisions of hard binaries. It was found that one out of five collisions which are not simple fly-bys leads to the formation of a stable three-body system. (author)
Binary and Millisecond Pulsars
Directory of Open Access Journals (Sweden)
Duncan R. Lorimer
1998-09-01
Our knowledge of binary and millisecond pulsars has greatly increased in recent years. This is largely due to the success of large-area surveys which have brought the known population of such systems in the Galactic disk to around 50. As well as being interesting as a population of astronomical sources, many pulsars turn out to be superb celestial clocks. In this review we summarise the main properties of binary and millisecond pulsars and highlight some of their applications to relativistic astrophysics.
Binary and Millisecond Pulsars
Directory of Open Access Journals (Sweden)
Lorimer Duncan R.
2005-11-01
We review the main properties, demographics and applications of binary and millisecond radio pulsars. Our knowledge of these exciting objects has greatly increased in recent years, mainly due to successful surveys which have brought the known pulsar population to over 1700. There are now 80 binary and millisecond pulsars associated with the disk of our Galaxy, and a further 103 pulsars in 24 of the Galactic globular clusters. Recent highlights have been the discovery of the first ever double pulsar system and a recent flurry of discoveries in globular clusters, in particular Terzan 5.
Asteroseismic modelling of the Binary HD 176465
Directory of Open Access Journals (Sweden)
Nsamba B.
2017-01-01
The detection and analysis of oscillations in binary star systems is critical to understanding stellar structure and evolution. This is partly because such systems have the same initial chemical composition and age. Solar-like oscillations have been detected by Kepler in both components of the asteroseismic binary HD 176465. We present an independent modelling of each star in this binary system. Stellar models generated using MESA (Modules for Experiments in Stellar Astrophysics) were fitted to both the observed individual frequencies and complementary spectroscopic parameters. The individual theoretical oscillation frequencies for the corresponding stellar models were obtained using GYRE as the pulsation code. A Bayesian approach was applied to find the probability distribution functions of the stellar parameters using AIMS (Asteroseismic Inference on a Massive Scale) as the optimisation code. The ages of HD 176465 A and HD 176465 B were found to be 2.81 ± 0.48 Gyr and 2.52 ± 0.80 Gyr, respectively. These results agree with previous studies carried out using other asteroseismic modelling techniques and gyrochronology.
International Nuclear Information System (INIS)
Burkhard, N.R.
1979-01-01
The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
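The INVERT step rests on stabilized linear inverse theory. A minimal sketch of such a stabilized (Tikhonov-regularized) inversion is shown below; G, d and alpha are generic placeholders for the forward operator, the Bouguer gravity data and the stabilization weight, not the actual SEARCH/TREND/INVERT interfaces:

```python
import numpy as np

def stabilized_inverse(G, d, alpha):
    """Return the model m minimizing ||G m - d||^2 + alpha ||m||^2,
    i.e. the damped normal-equations solution (G'G + alpha I) m = G'd."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ d)
```

The stabilization term alpha trades data fit against model smoothness, which is what keeps the iterative TREND/INVERT cycle from converging to a wildly oscillating topography.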
Mohit Tyagi; Kavita Khare
2011-01-01
This paper provides the details of a novel adder/subtractor arithmetic unit that combines binary, Binary Coded Decimal (BCD) and single-precision binary floating-point operations in a single structure. The unit is able to perform effective addition and subtraction operations on unsigned, sign-magnitude, and various complement representations. The design is runtime reconfigurable or can be implemented in an ASIC as a runtime-configurable unit, and maximum utilization of hardware resources is among the feature...
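The BCD mode of such a unit relies on the classic decimal-correction rule: add the 4-bit digit pairs in binary and add 6 whenever a digit sum exceeds 9. A small software illustration of that rule (not the paper's hardware design) is:

```python
def bcd_add(a, b):
    """Add two BCD-encoded integers digit-serially, e.g. 0x19 + 0x23 = 0x42.
    Each nibble holds one decimal digit; sums above 9 get the +6 correction."""
    result, carry, shift = 0, 0, 0
    while a or b or carry:
        d = (a & 0xF) + (b & 0xF) + carry
        carry = 1 if d > 9 else 0
        if carry:
            d += 6              # decimal-correction step
        result |= (d & 0xF) << shift
        a >>= 4
        b >>= 4
        shift += 4
    return result
```

In hardware the same correction is a conditional +6 applied to each digit adder's output, which is why binary and BCD addition can share one datapath.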
International Nuclear Information System (INIS)
Pringle, J.E.; Wade, R.A.
1985-01-01
This book reviews the theoretical and observational knowledge of interacting binary stars. The topics discussed embrace the following features of these objects: their orbits, evolution, mass transfer, angular momentum losses, X-ray emission, eclipses, variability, and other related phenomena. (U.K.)
Equational binary decision diagrams
J.F. Groote (Jan Friso); J.C. van de Pol (Jaco)
2000-01-01
We incorporate equations in binary decision diagrams (BDD). The resulting objects are called EQ-BDDs. A straightforward notion of ordered EQ-BDDs (EQ-OBDD) is defined, and it is proved that each EQ-BDD is logically equivalent to an EQ-OBDD. Moreover, on EQ-OBDDs satisfiability and
Tcheng, Ping
1989-01-01
Binary resistors in series tailored to precise value of resistance. Desired value of resistance obtained by cutting appropriate traces across resistors. Multibit, binary-based, adjustable resistor with high resolution used in many applications where precise resistance required.
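The binary weighting principle behind this trimmable resistor can be illustrated numerically; the base value r0 and the convention that a cut trace leaves its resistor in the chain are assumptions for the illustration, not details from the brief:

```python
def resistance(bits, r0=100.0):
    """Total series resistance of binary-weighted resistors: bits[i] == 1
    means the shorting trace across the 2**i * r0 resistor has been cut,
    so that resistor contributes; 0 means it is still shorted out."""
    return sum(b * r0 * 2**i for i, b in enumerate(bits))
```

With n binary-weighted stages, any multiple of r0 from 0 to (2**n - 1) * r0 can be selected, which is the source of the high resolution mentioned above.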
Directory of Open Access Journals (Sweden)
Fabio Burderi
2007-05-01
Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
Introduction to coding and information theory
Roman, Steven
1997-01-01
This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.
List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes
DEFF Research Database (Denmark)
Hernando, Fernando; Høholdt, Tom; Ruano, Diego
2012-01-01
A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list de...
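The matrix-product construction underlying this work can be made concrete with a small sketch over GF(2); the toy component codes and the enumeration approach below are illustrative assumptions (the paper's constituent codes are Reed-Solomon codes over larger fields):

```python
from itertools import product

def matrix_product_code(codes, A):
    """Codewords of the matrix-product code [C1 ... Cs]·A over GF(2):
    column j of a codeword is sum_i c_i * A[i][j] (XOR of selected c_i)."""
    words = set()
    for cs in product(*codes):            # one codeword from each C_i
        w = []
        for j in range(len(A[0])):
            col = [0] * len(cs[0])
            for i, ci in enumerate(cs):
                if A[i][j]:
                    col = [x ^ y for x, y in zip(col, ci)]
            w += col
        words.add(tuple(w))
    return words
```

With A = [[1, 1], [0, 1]] this reduces to the classical (u | u+v) construction, a standard special case of the matrix-product family.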
DEFF Research Database (Denmark)
Ejsing-Duun, Stine; Hansbøl, Mikala
Summary of the most important points from the main report: Documentation and evaluation of Coding Class.
The maximum number of minimal codewords in long codes
DEFF Research Database (Denmark)
Alahmadi, A.; Aldred, R.E.L.; dela Cruz, R.
2013-01-01
Upper bounds on the maximum number of minimal codewords in a binary code follow from the theory of matroids. Random coding provides lower bounds. In this paper, we compare these bounds with analogous bounds for the cycle code of graphs. This problem (in the graphic case) was considered in 1981...
Wijers, R.A.M.J.
1996-01-01
Introduction; Distinguishing neutron stars and black holes; Optical companions and dynamical masses; X-ray signatures of the nature of a compact object; Structure and evolution of black-hole binaries; High-mass black-hole binaries; Low-mass black-hole binaries; Low-mass black holes; Formation of black holes
Discovery of a Faint Eclipsing Binary GSC 02265-01456
Indian Academy of Sciences (India)
While observing transiting extrasolar planets, we found a new eclipsing binary named GSC 02265-01456. The V and Rc observations were carried out for this binary. The photometric light curves of the two bands were simultaneously analyzed using the W–D code. The solutions show that GSC 02265-01456 ...
Chen, C.; Veldhuis, Raymond N.J.
Extracting binary strings from real-valued biometric templates is a fundamental step in template compression and protection systems, such as fuzzy commitment, fuzzy extractor, secure sketch and helper data systems. Quantization and coding are the straightforward way to extract binary representations
Device for logarithmic representation of binary numbers in analog form
International Nuclear Information System (INIS)
Georgiev, A.; Zhuravlev, N.I.; Zinov, V.G.; Salamatin, A.V.
1986-01-01
A logarithmic converter is described in which the mantissa of the logarithm is obtained from the values of several more-significant bits of a binary number with the aid of a table stored in read-only memory. The obtained codes are then put into analog form by a digital-analog converter. The error for 16-bit numbers is 0.17%
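The table-based scheme described above can be sketched in software: the integer part of the logarithm comes from the position of the leading 1 bit, and the mantissa comes from a ROM indexed by the next few bits. The table width K below is an assumption for illustration, not the device's actual ROM size:

```python
import math

K = 8                                             # ROM index width (assumed)
ROM = [math.log2(1 + i / 2**K) for i in range(2**K)]  # mantissa lookup table

def log2_approx(x):
    """Approximate log2 of a positive integer via leading-one position
    plus a ROM lookup on the K bits just below the leading 1."""
    e = x.bit_length() - 1                # integer part of log2(x)
    frac = (x << K >> e) & (2**K - 1)     # K fraction bits below the MSB
    return e + ROM[frac]
```

For 16-bit inputs this kind of truncated lookup keeps the error at the fraction-of-a-percent level quoted in the abstract.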
Analysis and Design of Binary Message-Passing Decoders
DEFF Research Database (Denmark)
Lechner, Gottfried; Pedersen, Troels; Kramer, Gerhard
2012-01-01
Binary message-passing decoders for low-density parity-check (LDPC) codes are studied by using extrinsic information transfer (EXIT) charts. The channel delivers hard or soft decisions and the variable node decoder performs all computations in the L-value domain. A hard decision channel results i...
Binary gabor statistical features for palmprint template protection
Mu, Meiru; Ruan, Qiuqi; Shao, X.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.
2012-01-01
The biometric template protection system requires a high-quality biometric channel and a well-designed error correction code (ECC). Due to the intra-class variations of biometric data, an efficient fixed-length binary feature extractor is required to provide a high-quality biometric channel so that
EXIT Chart Analysis of Binary Message-Passing Decoders
DEFF Research Database (Denmark)
Lechner, Gottfried; Pedersen, Troels; Kramer, Gerhard
2007-01-01
Binary message-passing decoders for LDPC codes are analyzed using EXIT charts. For the analysis, the variable node decoder performs all computations in the L-value domain. For the special case of a hard decision channel, this leads to the well-known Gallager B algorithm, while the analysis can...
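A flavor of the hard-decision decoding family this work analyzes can be given with a minimal bit-flipping sketch; this is a simplification for illustration, not the exact Gallager B message-passing schedule: it flips, one at a time, the bit that participates in the most unsatisfied parity checks.

```python
import numpy as np

def bit_flip_decode(H, y, iters=20):
    """Greedy bit-flipping over GF(2): H is the parity-check matrix,
    y the received hard-decision word; stop when all checks are satisfied."""
    x = y.copy()
    for _ in range(iters):
        syndrome = H @ x % 2
        if not syndrome.any():
            break                        # all parity checks satisfied
        counts = H.T @ syndrome          # unsatisfied checks touching each bit
        x[np.argmax(counts)] ^= 1        # flip the most-suspect bit
    return x
```

True binary message-passing differs in that each edge carries its own extrinsic message, which is exactly what the EXIT-chart analysis tracks.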
Discovery of a Faint Eclipsing Binary GSC 02265-01456
Indian Academy of Sciences (India)
2016-01-27
The photometric light curves of the two bands were simultaneously analyzed using the W–D code. The solutions show that GSC 02265-01456 is an extremely low mass ratio (0.087) overcontact binary system with a contact degree of 82.5%. The difference between the two maxima of the light curve can be explained by a dark ...
International Nuclear Information System (INIS)
Suwono.
1978-01-01
A linear gate providing a variable gate duration from 0.40 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar. If the input signal is bipolar, the negative portion will be filtered. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)
VanderPlas, J. T.; Connolly, A. J.
2010-10-01
DimReduce is a C++ package for performing nonlinear dimensionality reduction of very large datasets with Locally Linear Embedding (LLE) and its variants. DimReduce is built for speed, using the optimized linear algebra packages BLAS, LAPACK, and ARPACK. Because of the need for storing very large matrices (1000 by 10000, for our SDSS LLE work), DimReduce is designed to use binary FITS files as inputs and outputs. This means that using the code is a bit more cumbersome. For smaller-scale LLE, where speed of computation is not as much of an issue, the Modular Data Processing toolkit may be a better choice. It is a python toolkit with some LLE functionality, which VanderPlas contributed. This code has been rewritten and included in scikit-learn and an improved version is included in http://mmp2.github.io/megaman/
Bokhari, Shahid H.; Crockett, Thomas W.; Nicol, David M.
1993-01-01
Binary dissection is widely used to partition non-uniform domains over parallel computers. This algorithm does not consider the perimeter, surface area, or aspect ratio of the regions being generated and can yield decompositions that have a poor communication-to-computation ratio. Parametric Binary Dissection (PBD) is a new algorithm in which each cut is chosen to minimize load + λ·shape. In a 2- (or 3-) dimensional problem, load is the amount of computation to be performed in a subregion and shape could refer to the perimeter (respectively surface) of that subregion. Shape is a measure of communication overhead, and the parameter λ permits us to trade off load imbalance against communication overhead. When λ is zero, the algorithm reduces to plain binary dissection. This algorithm can be used to partition graphs embedded in 2- or 3-d. Load is the number of nodes in a subregion, shape the number of edges that leave that subregion, and λ the ratio of the time to communicate over an edge to the time to compute at a node. An algorithm is presented that finds the depth-d parametric dissection of an embedded graph with n vertices and e edges in O(max(n log n, de)) time, which is an improvement over the O(dn log n) time of plain binary dissection. Parallel versions of this algorithm are also presented; the best of these requires O((n/p) log^3 p) time on a p-processor hypercube, assuming graphs of bounded degree. How PBD is applied to 3-d unstructured meshes and yields partitions that are better than those obtained by plain dissection is described. Its application to the color image quantization problem is also discussed, in which samples in a high-resolution color space are mapped onto a lower-resolution space in a way that minimizes the color error.
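The cut-selection rule can be sketched for 2-D point sets; the particular load and shape measures below (larger half's point count, bounding-box perimeters) are illustrative assumptions standing in for the paper's mesh-specific costs:

```python
def perimeter(points):
    """Bounding-box perimeter of a 2-D point set (proxy for communication)."""
    if not points:
        return 0.0
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return 2 * ((max(xs) - min(xs)) + (max(ys) - min(ys)))

def pbd(points, depth, lam):
    """Recursive parametric dissection: each cut minimizes load + lam*shape.
    With lam = 0 this degenerates to plain binary dissection."""
    if depth == 0 or len(points) <= 1:
        return [points]
    pts = sorted(points, key=lambda p: p[depth % 2])   # alternate cut axis
    best, best_cost = 1, float("inf")
    for i in range(1, len(pts)):
        cost = max(i, len(pts) - i) + lam * (perimeter(pts[:i]) + perimeter(pts[i:]))
        if cost < best_cost:
            best, best_cost = i, cost
    return pbd(pts[:best], depth - 1, lam) + pbd(pts[best:], depth - 1, lam)
```

Raising lam shifts the chosen cuts toward compact, low-perimeter subregions at the price of less even loads, which is exactly the trade-off the parameter controls.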
Binary Masking & Speech Intelligibility
Boldt, Jesper
2010-01-01
The purpose of this thesis is to examine how binary masking can be used to increase intelligibility in situations where hearing impaired listeners have difficulties understanding what is being said. The major part of the experiments carried out in this thesis can be categorized as either experiments under ideal conditions or as experiments under more realistic conditions useful for real-life applications such as hearing aids. In the experiments under ideal conditions, the previously defined i...
Eclipsing binary stars modeling and analysis
Kallrath, Josef
1999-01-01
This book focuses on the formulation of mathematical models for the light curves of eclipsing binary stars, and on the algorithms for generating such models. Since information gained from binary systems provides much of what we know of the masses, luminosities, and radii of stars, such models are acquiring increasing importance in studies of stellar structure and evolution. As in other areas of science, the computer revolution has given many astronomers tools that previously only specialists could use; anyone with access to a set of data can now expect to be able to model it. This book will provide astronomers, both amateur and professional, with a guide for - specifying an astrophysical model for a set of observations - selecting an algorithm to determine the parameters of the model - estimating the errors of the parameters. It is written for readers with knowledge of basic calculus and linear algebra; appendices cover mathematical details on such matters as optimization, coordinate systems, and specific models ...
Dynamic Binary Modification Tools, Techniques and Applications
Hazelwood, Kim
2011-01-01
Dynamic binary modification tools form a software layer between a running application and the underlying operating system, providing the powerful opportunity to inspect and potentially modify every user-level guest application instruction that executes. Toolkits built upon this technology have enabled computer architects to build powerful simulators and emulators for design-space exploration, compiler writers to analyze and debug the code generated by their compilers, software developers to fully explore the features, bottlenecks, and performance of their software, and even end-users to extend
Robust Reed Solomon Coded MPSK Modulation
Directory of Open Access Journals (Sweden)
Emir M. Husni
2014-10-01
In this paper, the construction of partitioned Reed-Solomon coded modulation (RSCM), which is robust for the additive white Gaussian noise channel and a Rayleigh fading channel, is investigated. By matching the configuration of the component codes with the channel characteristics, it is shown that this system is robust for the Gaussian and a Rayleigh fading channel. This approach is compared with non-partitioned RSCM, a Reed-Solomon code combined with an MPSK signal set using Gray mapping, and block coded MPSK modulation using binary codes, the Reed-Muller codes. All codes use a hard-decision decoding algorithm. Simulation results for these schemes show that RSCM based on set partitioning performs better than those that are not based on set partitioning and than Reed-Muller coded modulation across a wide range of conditions. The novel idea here is that in the receiver we use a rotated 2^(m+1)-PSK detector if the transmitter uses a 2^m-PSK modulator.
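The Gray mapping mentioned for the non-partitioned comparison scheme is easy to make concrete; the sketch below shows only the generic Gray-label idea for a 2^m-PSK constellation, not the paper's set-partitioned mapping:

```python
def gray(n):
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def mpsk_labels(m):
    """Bit labels around a 2**m-PSK circle: phase index i carries gray(i),
    so cyclically adjacent constellation points differ in exactly one bit."""
    return [gray(i) for i in range(2 ** m)]
```

Because the most likely demodulation error lands on a neighboring phase, Gray labeling makes that error cost a single bit, which is what the hard-decision decoder then has to correct.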
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity using the algebraic expression of the PRNG's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (≅ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity; therefore, linear complexity is generally given as an estimate. A linearization method, by contrast, calculates from the algorithm of the PRNG, so it can determine the lower bound of the linear complexity.
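For comparison, the sequence-based baseline discussed above is the Berlekamp-Massey algorithm; a standard GF(2) implementation that returns the linear complexity of a bit sequence is sketched here (a textbook version, not the paper's linearization method):

```python
def linear_complexity(s):
    """Berlekamp-Massey over GF(2): length L of the shortest LFSR
    generating the 0/1 sequence s."""
    n = len(s)
    c = [0] * n          # current connection polynomial coefficients
    b = [0] * n          # previous connection polynomial
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = s[i]                         # discrepancy
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]     # c(x) += x^(i-m) * b(x)
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```

This is the O(N^2) route the abstract contrasts with: it consumes an actual output sequence, so the result depends on the PRNG's initial state, whereas the linearization method works from the generator's algebraic description.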
DEFF Research Database (Denmark)
2015-01-01
Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can trade off computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof.
Adaptable Value-Set Analysis for Low-Level Code
Brauer, Jörg; Hansen, René Rydhof; Kowalewski, Stefan; Larsen, Kim G.; Olesen, Mads Chr.
2012-01-01
This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics in...
Said-Houari, Belkacem
2017-01-01
This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...
GridRun: A lightweight packaging and execution environment forcompact, multi-architecture binaries
Energy Technology Data Exchange (ETDEWEB)
Shalf, John; Goodale, Tom
2004-02-01
GridRun offers a very simple set of tools for creating and executing multi-platform binary executables. These "fat binaries" archive native machine code into compact packages that are typically a fraction of the size of the original binary images they store, enabling efficient staging of executables for heterogeneous parallel jobs. GridRun interoperates with existing distributed job launchers/managers like Condor and the Globus GRAM to greatly simplify the logic required to launch native binary applications in distributed heterogeneous environments.
Optical three-step binary-logic-gate-based MSD arithmetic
Fyath, R. S.; Alsaffar, A. A. W.; Alam, M. S.
2003-11-01
A three-step modified signed-digit (MSD) adder is proposed which can be optically implemented using binary logic gates. The proposed scheme depends on encoding each MSD digit into a pair of binary digits using a two-state, multi-position-based encoding scheme. The design algorithm depends on constructing the addition truth table of binary-coded MSD numbers and then using Karnaugh maps to achieve output minimization. The functions associated with the optical binary logic gates are achieved by simply programming the decoding masks of an optical shadow-casting logic system.
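The MSD digit set is {-1, 0, 1}, so each digit needs two binary digits, and the redundancy of the representation is what makes carry-free addition possible. A sketch of one possible two-bit encoding and of evaluating a binary-coded MSD number (the paper's actual two-state, multi-position scheme may differ; this encoding is a hypothetical stand-in):

```python
# hypothetical two-bit encoding of the MSD digit set {-1, 0, 1};
# the paper's actual two-state, multi-position scheme may differ
ENC = {-1: (1, 0), 0: (0, 0), 1: (0, 1)}
DEC = {bits: digit for digit, bits in ENC.items()}

def msd_value(digits):
    """Numerical value of an MSD number, most significant digit first."""
    value = 0
    for d in digits:
        value = 2 * value + d
    return value

a = [1, 0, -1, 1]            # 8 - 2 + 1 = 7
b = [0, 1, 1, 1]             # 4 + 2 + 1 = 7: two representations of the same value
assert msd_value(a) == msd_value(b) == 7

coded = [ENC[d] for d in a]  # binary-coded MSD word, one bit pair per digit
assert [DEC[bits] for bits in coded] == a
```

The two equal-valued words `a` and `b` illustrate the redundancy that a signed-digit adder exploits to keep carries from propagating.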
Superlattice configurations in linear chain hydrocarbon binary mixtures
Indian Academy of Sciences (India)
Unknown
monoclinic, monoclinic-monoclinic) are realizable, because of discrete orientational changes in the alignment of molecules of the n-C28H58 hydrocarbon, through an angle that changes in discrete multiples (1, 2, 3 …) of a base angle with an average value of 3.3°.
Massive Black Hole Binary Evolution
Directory of Open Access Journals (Sweden)
Merritt David
2005-11-01
Coalescence of binary supermassive black holes (SBHs) would constitute the strongest sources of gravitational waves to be observed by LISA. While the formation of binary SBHs during galaxy mergers is almost inevitable, coalescence requires that the separation between binary components first drop by a few orders of magnitude, due presumably to interaction of the binary with stars and gas in a galactic nucleus. This article reviews the observational evidence for binary SBHs and discusses how they would evolve. No completely convincing case of a bound, binary SBH has yet been found, although a handful of systems (e.g. interacting galaxies and remnants of galaxy mergers) are now believed to contain two SBHs at projected separations of <~ 1 kpc. N-body studies of binary evolution in gas-free galaxies have reached large enough particle numbers to reproduce the slow, “diffusive” refilling of the binary’s loss cone that is believed to characterize binary evolution in real galactic nuclei. While some of the results of these simulations - e.g. the binary hardening rate and eccentricity evolution - are strongly N-dependent, others - e.g. the “damage” inflicted by the binary on the nucleus - are not. Luminous early-type galaxies often exhibit depleted cores with masses of ~1-2 times the mass of their nuclear SBHs, consistent with the predictions of the binary model. Studies of the interaction of massive binaries with gas are still in their infancy, although much progress is expected in the near future. Binary coalescence has a large influence on the spins of SBHs, even for mass ratios as extreme as 10:1, and evidence of spin-flips may have been observed.
Stoll, R R
1968-01-01
Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be useful both to students of mathematics and to those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understanding…
Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology
Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.
2009-01-01
Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…
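The contrast the abstract draws between group-level (marginal) and subject-specific (random-intercepts) models can be seen numerically: averaging a random-intercepts logistic model over the intercept distribution attenuates the slope, so the marginal effect is smaller in magnitude than the conditional one. A sketch with hypothetical parameter values (pure numerical integration, not a fitted model):

```python
import math

def expit(z):
    return 1.0 / (1.0 + math.exp(-z))

def marginal_prob(x, b0, b1, sigma, steps=4000):
    """P(Y=1 | x) averaged over a N(0, sigma^2) random intercept (midpoint rule)."""
    lo, hi = -8.0 * sigma, 8.0 * sigma
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        u = lo + (k + 0.5) * h
        density = math.exp(-u * u / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))
        total += expit(b0 + u + b1 * x) * density * h
    return total

# conditional (subject-specific) slope b1 = 1.0, intercept s.d. sigma = 2.0
p0 = marginal_prob(0.0, 0.0, 1.0, 2.0)
p1 = marginal_prob(1.0, 0.0, 1.0, 2.0)
marginal_slope = math.log(p1 / (1 - p1)) - math.log(p0 / (1 - p0))
assert 0.0 < marginal_slope < 1.0   # attenuated relative to the conditional slope of 1.0
```

This is why, as the abstract notes, the two models are not interchangeable for binary outcomes: the population-averaged log-odds ratio is systematically closer to zero than the subject-specific one.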
An, Shengli; Zhang, Yanhong; Chen, Zheng
2012-12-01
To analyze binary classification repeated measurement data with generalized estimating equations (GEE) and generalized linear mixed models (GLMMs) using SPSS 19.0. GEE and GLMM models were tested on a binary classification repeated measurement data sample using SPSS 19.0. Compared with SAS, SPSS 19.0 allowed convenient analysis of categorical repeated measurement data using GEE and GLMMs.
Binary rf pulse compression experiment at SLAC
International Nuclear Information System (INIS)
Lavine, T.L.; Spalek, G.; Farkas, Z.D.; Menegat, A.; Miller, R.H.; Nantista, C.; Wilson, P.B.
1990-06-01
Using rf pulse compression it will be possible to boost the 50- to 100-MW output expected from high-power microwave tubes operating in the 10- to 20-GHz frequency range to the 300- to 1000-MW level required by the next generation of high-gradient linacs for linear colliders. A high-power X-band three-stage binary rf pulse compressor has been implemented and operated at the Stanford Linear Accelerator Center (SLAC). In each of three successive stages, the rf pulse length is compressed by half, and the peak power is approximately doubled. The experimental results presented here have been obtained at low-power (1-kW) and high-power (15-MW) input levels in initial testing with a TWT and a klystron. Rf pulses initially 770 nsec long have been compressed to 60 nsec. Peak power gains of 1.8 per stage, and 5.5 for three stages, have been measured. This corresponds to a peak power compression efficiency of about 90% per stage, or about 70% for three stages, consistent with the individual component losses. The principle of operation of a binary pulse compressor (BPC) is described in detail elsewhere. We recently have implemented and operated at SLAC a high-power (high-vacuum) three-stage X-band BPC. First results from the high-power three-stage BPC experiment are reported here.
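The quoted figures are mutually consistent, as a quick check shows: a lossless stage would double peak power while halving the pulse length, so a measured gain of 1.8 per stage implies 90% per-stage efficiency, and three such stages give about 73% overall:

```python
stages = 3
ideal_gain = 2.0                 # halving the pulse length ideally doubles peak power
measured_gain_per_stage = 1.8    # per-stage gain reported in the experiment

efficiency_per_stage = measured_gain_per_stage / ideal_gain   # 0.9
three_stage_gain = measured_gain_per_stage ** stages          # about 5.8
three_stage_efficiency = efficiency_per_stage ** stages       # about 0.73

assert abs(efficiency_per_stage - 0.9) < 1e-9
```

The measured three-stage gain of 5.5 is a little below the 1.8^3 ≈ 5.8 implied by the per-stage figure, consistent with the additional component losses the abstract mentions.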
Liesen, Jörg
2015-01-01
This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
Berberian, Sterling K
2014-01-01
Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.
Solow, Daniel
2014-01-01
This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.
Indian Academy of Sciences (India)
message symbols downstream, network coding achieves vast performance gains by permitting intermediate nodes to carry out algebraic operations on the incoming data. In this article we present a tutorial introduction to network coding as well as an application to the efficient operation of distributed data-storage networks.
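The algebraic operations at intermediate nodes can be illustrated with the simplest case, random linear network coding over GF(2): every coded packet is a random XOR combination of the source packets, and a receiver decodes by Gaussian elimination once it has collected enough linearly independent combinations. A minimal sketch (illustrative only, not a tutorial-specific scheme):

```python
import random

def encode(packets, rng):
    """One coded packet: a random GF(2) (XOR) combination of the source packets."""
    coeffs = [rng.randrange(2) for _ in packets]
    if not any(coeffs):
        coeffs[rng.randrange(len(coeffs))] = 1   # avoid the useless all-zero combination
    payload = [0] * len(packets[0])
    for c, p in zip(coeffs, packets):
        if c:
            payload = [a ^ b for a, b in zip(payload, p)]
    return coeffs, payload

def decode(coded, k):
    """Recover the k source packets by Gaussian elimination over GF(2)."""
    rows = [coeffs + payload for coeffs, payload in coded]
    r = 0
    for col in range(k):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            raise ValueError("not enough independent packets yet")
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        r += 1
    return [row[k:] for row in rows[:k]]

rng = random.Random(0)
source = [[rng.randrange(2) for _ in range(8)] for _ in range(3)]
coded = []
while True:                      # collect coded packets until the system is decodable
    coded.append(encode(source, rng))
    try:
        recovered = decode(coded, 3)
        break
    except ValueError:
        pass
assert recovered == source
```

The same idea underlies the distributed-storage application: any sufficiently large set of independent combinations suffices, regardless of which particular packets were lost.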
Binary black holes on a budget: simulations using workstations
International Nuclear Information System (INIS)
Marronetti, Pedro; Tichy, Wolfgang; Bruegmann, Bernd; Gonzalez, Jose; Hannam, Mark; Husa, Sascha; Sperhake, Ulrich
2007-01-01
Binary black hole simulations have traditionally been computationally very expensive: current simulations are performed on supercomputers involving dozens if not hundreds of processors, so systematic studies of the parameter space of binary black hole encounters still seem prohibitive with current technology. Here we show how the multi-layered refinement level code BAM can be used on dual-processor workstations to simulate certain binary black hole systems. BAM, based on the moving-punctures method, provides grid structures composed of boxes of increasing resolution near the centre of the grid. In the case of binaries, the highest-resolution boxes are placed around each black hole and track them in their orbits until the final merger, when a single set of levels surrounds the black hole remnant. This is particularly useful when simulating spinning black holes, since the gravitational field gradients are larger. We present simulations of binaries with equal-mass black holes with spins parallel to the binary axis and intrinsic magnitude of S/m^2 = 0.75. Our results compare favourably to those of previous simulations of this particular system. We show that the moving-punctures method produces stable simulations at maximum spatial resolutions up to M/160 and for durations of up to the equivalent of 20 orbital periods.
Binary Cockroach Swarm Optimization for Combinatorial Optimization Problem
Directory of Open Access Journals (Sweden)
Ibidun Christiana Obagbuwa
2016-09-01
The Cockroach Swarm Optimization (CSO) algorithm is inspired by cockroach social behavior. It is a simple and efficient meta-heuristic algorithm and has been applied to solve global optimization problems successfully. The original CSO algorithm and its variants operate mainly in continuous search space and cannot solve binary-coded optimization problems directly. Many optimization problems have their decision variables in binary. Binary Cockroach Swarm Optimization (BCSO) is proposed in this paper to tackle such problems and was evaluated on the popular Traveling Salesman Problem (TSP), which is considered to be an NP-hard Combinatorial Optimization Problem (COP). A transfer function was employed to map the continuous search space of CSO to a binary search space. The performance of the proposed algorithm was tested first on benchmark functions through simulation studies and compared with the performance of existing binary particle swarm optimization and continuous-space versions of CSO. The proposed BCSO was adapted to the TSP and applied to a set of benchmark instances of symmetric TSP from the TSP library. The results of the proposed BCSO algorithm on the TSP were compared to those of other meta-heuristic algorithms.
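The transfer function referred to above is, in binary PSO and similar binarizations, typically an S-shaped (sigmoid) map from a continuous position component to the probability of that bit being 1; the paper's exact choice is not given here, so the following is a generic sketch of the idea:

```python
import math, random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def binarize(position, rng):
    """Map a continuous position vector to a binary one:
    component i becomes 1 with probability sigmoid(position[i])."""
    return [1 if rng.random() < sigmoid(v) else 0 for v in position]

rng = random.Random(42)
# strongly positive components map to 1, strongly negative ones to 0
assert binarize([50.0, 50.0, -50.0, -50.0], rng) == [1, 1, 0, 0]
```

Components near zero are genuinely random, which preserves the exploratory behavior of the continuous algorithm while producing binary decision variables.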
Linear programming using Matlab
Ploskas, Nikolaos
2017-01-01
This book offers a theoretical and computational presentation of a variety of linear programming algorithms and methods with an emphasis on the revised simplex method and its components. A theoretical background and mathematical formulation is included for each algorithm as well as comprehensive numerical examples and corresponding MATLAB® code. The MATLAB® implementations presented in this book are sophisticated and allow users to find solutions to large-scale benchmark linear programs. Each algorithm is followed by a computational study on benchmark problems that analyze the computational behavior of the presented algorithms. As a solid companion to existing algorithmic-specific literature, this book will be useful to researchers, scientists, mathematical programmers, and students with a basic knowledge of linear algebra and calculus. The clear presentation enables the reader to understand and utilize all components of simplex-type methods, such as presolve techniques, scaling techniques, pivoting ru...
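The simplex method the book revolves around can be sketched compactly in tableau form (the revised simplex maintains a basis factorization instead of the full tableau, but the pivoting logic is the same; Python is used here instead of MATLAB for a self-contained illustration, with Dantzig's most-negative-reduced-cost rule and no anti-cycling safeguard):

```python
def simplex_max(A, b, c, tol=1e-9):
    """Tableau simplex for: maximize c.x subject to Ax <= b, x >= 0, b >= 0."""
    m, n = len(A), len(c)
    # tableau rows: [A | I | b]; objective row: [-c | 0 | 0]
    T = [list(map(float, A[i])) + [1.0 if j == i else 0.0 for j in range(m)]
         + [float(b[i])] for i in range(m)]
    T.append([-float(cj) for cj in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))          # slack variables start in the basis
    while True:
        j = min(range(n + m), key=lambda k: T[-1][k])   # entering column
        if T[-1][j] >= -tol:
            break                                       # optimal
        ratios = [T[i][-1] / T[i][j] if T[i][j] > tol else float("inf")
                  for i in range(m)]
        i = min(range(m), key=lambda k: ratios[k])      # leaving row (ratio test)
        if ratios[i] == float("inf"):
            raise ValueError("unbounded")
        piv = T[i][j]
        T[i] = [v / piv for v in T[i]]
        for r in range(m + 1):
            if r != i and T[r][j]:
                T[r] = [a - T[r][j] * p for a, p in zip(T[r], T[i])]
        basis[i] = j
    x = [0.0] * (n + m)
    for i, bi in enumerate(basis):
        x[bi] = T[i][-1]
    return x[:n], T[-1][-1]

# maximize 3x + 5y  s.t.  x <= 4, 2y <= 12, 3x + 2y <= 18
x, z = simplex_max([[1, 0], [0, 2], [3, 2]], [4, 12, 18], [3, 5])
assert abs(z - 36.0) < 1e-9 and abs(x[0] - 2.0) < 1e-9 and abs(x[1] - 6.0) < 1e-9
```

Presolve, scaling, and sophisticated pivoting rules of the kind the book surveys all bolt onto this same iteration.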
Elements of algebraic coding systems
Cardoso da Rocha, Jr, Valdemar
2014-01-01
Elements of Algebraic Coding Systems is an introductory text to algebraic coding theory. In the first chapter, you'll gain inside knowledge of coding fundamentals, which is essential for a deeper understanding of state-of-the-art coding systems. This book is a quick reference for those who are unfamiliar with this topic, as well as for use with specific applications such as cryptography and communication. Linear error-correcting block codes through elementary principles span eleven chapters of the text. Cyclic codes, some finite field algebra, Goppa codes, algebraic decoding algorithms, and applications in public-key cryptography and secret-key cryptography are discussed, including problems and solutions at the end of each chapter. Three appendices cover the Gilbert bound and some related derivations, a derivation of the MacWilliams identities based on the probability of undetected error, and two important tools for algebraic decoding, namely the finite field Fourier transform and the Euclidean algorithm f...
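A linear error-correcting block code of the kind those chapters cover can be illustrated with the [7,4] Hamming code: encode with a generator matrix G = [I | P], and correct any single bit error by matching the syndrome against the columns of the parity-check matrix H = [P^T | I]. A sketch:

```python
# [7,4] Hamming code: G = [I | P], H = [P^T | I]; any single bit error is correctable
P = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]

def encode(msg):
    """4 data bits -> 7-bit codeword (data bits followed by 3 parity bits)."""
    parity = [sum(msg[i] * P[i][j] for i in range(4)) % 2 for j in range(3)]
    return list(msg) + parity

def decode(word):
    """Correct up to one flipped bit, then return the 4 data bits."""
    syndrome = [(sum(word[i] * P[i][j] for i in range(4)) + word[4 + j]) % 2
                for j in range(3)]
    if any(syndrome):
        # the syndrome equals the column of H at the erroneous position
        cols = P + [[1 if j == k else 0 for j in range(3)] for k in range(3)]
        word = list(word)
        word[cols.index(syndrome)] ^= 1
    return word[:4]

msg = [1, 0, 1, 1]
for pos in range(7):             # every single-bit error pattern is corrected
    corrupted = encode(msg)
    corrupted[pos] ^= 1
    assert decode(corrupted) == msg
```

Because the seven columns of H are exactly the seven nonzero 3-bit vectors, every single-error syndrome points to a unique position, which is the defining property of a Hamming code.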
Energy Technology Data Exchange (ETDEWEB)
Hosokawa, Masanari [Research Organization for Information Science and Technology, Tokai, Ibaraki (Japan); Takizuka, Tomonori [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment
2000-05-01
The divertor is expected to play key roles in tokamak reactors, such as ITER, for heat removal, ash exhaust, and impurity shielding. Its performance is being predicted by using comprehensive simulation codes with the fluid model. In the fluid model for scrape-off layer (SOL) and divertor plasmas, various physics models are introduced. A kinetic approach is required to examine the validity of such physics models. One of the most powerful kinetic models is the particle simulation. Therefore a particle code, PARASOL, has been developed and is being used for the simulation study of SOL and divertor plasmas. The PARASOL code treats the plasma bounded by two divertor plates, in which the motions of ions and electrons are traced by using an electrostatic PIC method. Effects of Coulomb collisions are simulated by using a Monte-Carlo-method binary collision model. Motions of neutral particles are traced simultaneously with charged particles. In this report, we describe the physics model of PARASOL, the numerical methods, the configuration of the program, input parameters, output formats, samples of simulation results, and the parallel computing method. The efficiency of parallel computing with the Paragon XP/S15-256 is demonstrated. (author)
Analysis and Defense of Vulnerabilities in Binary Code
2008-09-29
applications. Currently, protocol reverse engineering is mostly manual. For example, it took the open source Samba project over 10 years of work to reverse...
Binary CFG Rebuilt of Self-Modifying Codes
2016-10-03
Modern malware extensively applies self-modifying obfuscation techniques, e.g., self-decryption and mutation, which are often automatically prepared... A standard approach in industry to analyze malware is dynamic analysis in a sandbox. Alternatively, we apply a hybrid method combining concolic testing (dynamic symbolic... strong disassembly ability (control flow graph generation) at the cost of relatively heavy execution. For instance, BE-PUM automatically detects the
Spinodal decomposition of chemically reactive binary mixtures
Lamorgese, A.; Mauri, R.
2016-08-01
We simulate the influence of a reversible isomerization reaction on the phase segregation process occurring after spinodal decomposition of a deeply quenched regular binary mixture, restricting attention to systems wherein material transport occurs solely by diffusion. Our theoretical approach follows a diffuse-interface model of partially miscible binary mixtures wherein the coupling between reaction and diffusion is addressed within the frame of nonequilibrium thermodynamics, leading to a linear dependence of the reaction rate on the chemical affinity. Ultimately, the rate for an elementary reaction depends on the local part of the chemical potential difference since reaction is an inherently local phenomenon. Based on two-dimensional simulation results, we express the competition between segregation and reaction as a function of the Damköhler number. For a phase-separating mixture with components having different physical properties, a skewed phase diagram leads, at large times, to a system converging to a single-phase equilibrium state, corresponding to the absolute minimum of the Gibbs free energy. This conclusion continues to hold for the critical phase separation of an ideally perfectly symmetric binary mixture, where the choice of final equilibrium state at large times depends on the initial mean concentration being slightly larger or less than the critical concentration.
Energy Technology Data Exchange (ETDEWEB)
Morales Mendoza, N. [INQUIMAE, CONICET-UBA, Ciudad Universitaria, Pab2, (C1428EHA) Bs As (Argentina); LPyMC, Dep. De Fisica, FCEN-UBA and IFIBA -CONICET, Ciudad Universitaria, Cap. Fed. (Argentina); Goyanes, S. [LPyMC, Dep. De Fisica, FCEN-UBA and IFIBA -CONICET, Ciudad Universitaria, Cap. Fed. (Argentina); Chiliotte, C.; Bekeris, V. [LBT, Dep. De Fisica, FCEN-UBA. Ciudad Universitaria, Pab1, C1428EGA CABA (Argentina); Rubiolo, G. [LPyMC, Dep. De Fisica, FCEN-UBA and IFIBA -CONICET, Ciudad Universitaria, Cap. Fed. (Argentina); Unidad de Actividad Materiales, CNEA, Av Gral. Paz 1499, San Martin (1650), Prov. de Bs As (Argentina); Candal, R., E-mail: candal@qi.fcen.uba.ar [INQUIMAE, CONICET-UBA, Ciudad Universitaria, Pab2, (C1428EHA) Bs As (Argentina); Escuela de Ciencia y Tecnologia, 3iA, Universidad de Gral. San Martin, San Martin, Prov. Bs As (Argentina)
2012-08-15
Magnetic binary nanofillers containing multiwall carbon nanotubes (MWCNT) and hercynite were synthesized by Chemical Vapor Deposition (CVD) on Fe/AlOOH prepared by the sol-gel method. The catalyst precursor was fired at 450 °C, ground, and sifted through different meshes. Two powders were obtained with different particle sizes: sample A (50-75 µm) and sample B (smaller than 50 µm). These powders are composed of iron oxide particles widely dispersed in the non-crystalline matrix of aluminum oxide and they are not ferromagnetic. After the reduction process the powders are composed of α-Fe nanoparticles inside a hercynite matrix. The resulting nanofillers are composed of hercynite containing α-Fe nanoparticles and MWCNT. The binary magnetic nanofillers were slightly ferromagnetic. The saturation magnetization of the nanofillers depended on the powder particle size. The nanofiller obtained from powder particles in the range 50-75 µm showed a saturation magnetization 36% higher than the one formed from powder particles smaller than 50 µm. The phenomenon is explained in terms of changes in the magnetic environment of the particles as a consequence of the presence of MWCNT.
Efficient convolutional sparse coding
Energy Technology Data Exchange (ETDEWEB)
Wohlberg, Brendt
2017-06-20
Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
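The key identity such frequency-domain solvers exploit is that circular convolution is diagonalized by the discrete Fourier transform, so an O(N^2) spatial-domain product becomes an O(N log N) pointwise product. A quick numerical check of that identity (illustrative only, not the ADMM solver itself):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
d = rng.standard_normal(N)    # a dictionary filter, zero-padded to signal length
x = rng.standard_normal(N)    # a coefficient map

# circular convolution computed directly in the spatial domain: O(N^2)
direct = np.array([sum(d[k] * x[(n - k) % N] for k in range(N)) for n in range(N)])

# the same product as a pointwise multiply in the frequency domain: O(N log N)
fast = np.fft.ifft(np.fft.fft(d) * np.fft.fft(x)).real

assert np.allclose(direct, fast)
```

Because the linear system in the ADMM subproblem is built from such convolutions, it too becomes block-diagonal in the frequency domain, which is where the stated complexity reduction comes from.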
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
Edwards, Harold M
1995-01-01
In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.
DEFF Research Database (Denmark)
Ejsing-Duun, Stine; Hansbøl, Mikala
This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the voluntary association ..., design thinking and design pedagogy, Stine Ejsing-Duun from the research lab It og Læringsdesign (ILD-LAB) at the Department of Communication and Psychology, Aalborg University in Copenhagen. We followed, evaluated and documented the Coding Class project from November 2016 to May 2017. The Coding Class project is a pilot project in which a number of schools in the municipalities of Copenhagen and Vejle have launched teaching activities focusing on coding and programming in school. The evaluation and documentation of the project comprise qualitative studies of selected teaching interventions in the autumn of...
Directory of Open Access Journals (Sweden)
Anthony McCosker
2014-03-01
As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour, with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.
Rieger, Samantha
2015-05-01
Recent observations have found that some contact binaries are oriented such that the secondary impacts the primary at a high inclination. This research investigates how such contact binaries came to exist. The process begins with an asteroid pair, where the secondary lies on the Laplace plane. The Laplace plane is a plane normal to the axis about which the pole of a satellite's orbit precesses, causing a near-constant inclination for such an orbit. For the study of the classical Laplace plane, the secondary asteroid is in circular orbit around an oblate primary with axial tilt, and this system is also orbiting the Sun. Thus, there are two perturbations on the secondary's orbit: J2 and third-body Sun perturbations. The Laplace surface is defined as the group of orbits that lie on the Laplace plane at varying distances from the primary. If the secondary is very close to the primary, the inclination of the Laplace plane will be near the equator of the asteroid, while further from the primary the inclination will be similar to the asteroid-Sun plane. The secondary will lie on the Laplace plane because near the asteroid the Laplace plane is stable to large deviations in motion, causing the asteroid to come to rest in this orbit. Assuming the secondary is asymmetrical in shape and the body's rotation is synchronous with its orbit, the secondary will experience the BYORP effect. BYORP can cause secular motion such as the semi-major axis of the secondary expanding or contracting. Assuming the secondary expands due to BYORP, it will eventually reach the unstable region of the Laplace plane. The unstable region exists if the primary has an obliquity of 68.875 degrees or greater, and extends from 0.9 to 1.25 Laplace radii, where the Laplace radius is defined as the distance from the central body at which the inclination of the Laplace-plane orbit is half the obliquity. In the unstable region, the eccentricity of the orbit
Formation and Evolution of X-ray Binaries
Shao, Y.
2017-07-01
use of both binary population synthesis and detailed binary evolution calculations. We find that the birthrate is around 10^-4 yr^-1 for the incipient X-ray binaries in both cases. We demonstrate the distribution of the ULX population in the donor mass - orbital period plane. Our results suggest that, compared with black hole X-ray binaries, neutron star X-ray binaries may significantly contribute to the ULX population, and high/intermediate-mass X-ray binaries dominate the neutron star ULX population in M82/Milky Way-like galaxies, respectively. In Chapter 6, the population of intermediate- and low-mass X-ray binaries in the Galaxy is explored. We investigate the formation and evolutionary sequences of Galactic intermediate- and low-mass X-ray binaries (I/LMXBs) by combining binary population synthesis (BPS) and detailed stellar evolutionary calculations. Using an updated BPS code we compute the evolution of massive binaries that leads to the formation of incipient I/LMXBs, and present their distribution in the initial donor mass vs. initial orbital period diagram. We then follow the evolution of I/LMXBs until the formation of binary millisecond pulsars (BMSPs). We show that during the evolution of I/LMXBs they are likely to be observed as relatively compact binaries. The resultant BMSPs have orbital periods ranging from about 1 day to a few hundred days. These features are consistent with observations of LMXBs and BMSPs. We also confirm the discrepancies between theoretical predictions and observations mentioned in the literature: the theoretical average mass-transfer rates of LMXBs are considerably lower than observed, and the number of BMSPs with orbital periods of ~0.1-1 d is severely underestimated. Both imply that something is missing in the modeling of LMXBs, which is likely to be related to the mechanisms of orbital angular momentum loss. Finally, in Chapter 7 we summarize our results and give prospects for future work.
Relativistic Binaries in Globular Clusters
Directory of Open Access Journals (Sweden)
Matthew J. Benacquista
2013-03-01
Galactic globular clusters are old, dense star systems typically containing 10^4 – 10^6 stars. As an old population of stars, globular clusters contain many collapsed and degenerate objects. As a dense population of stars, globular clusters are the scene of many interesting close dynamical interactions between stars. These dynamical interactions can alter the evolution of individual stars and can produce tight binary systems containing one or two compact objects. In this review, we discuss theoretical models of globular cluster evolution and binary evolution, techniques for simulating this evolution that leads to relativistic binaries, and current and possible future observational evidence for this population. Our discussion of globular cluster evolution will focus on the processes that boost the production of tight binary systems and the subsequent interaction of these binaries that can alter the properties of both bodies and can lead to exotic objects. Direct N-body integrations and Fokker–Planck simulations of the evolution of globular clusters that incorporate tidal interactions and lead to predictions of relativistic binary populations are also discussed. We discuss the current observational evidence for cataclysmic variables, millisecond pulsars, and low-mass X-ray binaries as well as possible future detection of relativistic binaries with gravitational radiation.
Wang, Jim Jing-Yan
2014-07-06
Sparse coding approximates a data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving for the codebook, sparse codes, class labels, and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.
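The basic sparse-coding step the abstract builds on can be sketched with ISTA (iterative soft-thresholding); this is a generic lasso solver, not the authors' semi-supervised algorithm, and all names and parameter values below are illustrative:

```python
import numpy as np

def sparse_code_ista(x, D, lam=0.05, n_iter=200):
    """Solve min_s 0.5*||x - D s||^2 + lam*||s||_1 by iterative
    soft-thresholding. D is a dictionary with unit-norm columns."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ s - x)              # gradient of the smooth term
        s = s - g / L
        s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0.0)  # soft threshold
    return s

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm codewords
s_true = np.zeros(50)
s_true[[3, 17, 41]] = [1.5, -2.0, 1.0]     # a 3-sparse ground truth
x = D @ s_true
s_hat = sparse_code_ista(x, D)             # recovers the sparse support
```

The sparse vector `s_hat` concentrates its large entries on the three atoms actually used to build `x`, which is the "new representation" the abstract refers to.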
NONLINEAR TIDES IN CLOSE BINARY SYSTEMS
International Nuclear Information System (INIS)
Weinberg, Nevin N.; Arras, Phil; Quataert, Eliot; Burkart, Josh
2012-01-01
We study the excitation and damping of tides in close binary systems, accounting for the leading-order nonlinear corrections to linear tidal theory. These nonlinear corrections include two distinct physical effects: three-mode nonlinear interactions, i.e., the redistribution of energy among stellar modes of oscillation, and nonlinear excitation of stellar normal modes by the time-varying gravitational potential of the companion. This paper, the first in a series, presents the formalism for studying nonlinear tides and studies the nonlinear stability of the linear tidal flow. Although the formalism we present is applicable to binaries containing stars, planets, and/or compact objects, we focus on non-rotating solar-type stars with stellar or planetary companions. Our primary results include the following: (1) The linear tidal solution almost universally used in studies of binary evolution is unstable over much of the parameter space in which it is employed. More specifically, resonantly excited internal gravity waves in solar-type stars are nonlinearly unstable to parametric resonance for companion masses M' ∼> 10-100 M ⊕ at orbital periods P ≈ 1-10 days. The nearly static 'equilibrium' tidal distortion is, however, stable to parametric resonance except for solar binaries with P ∼ 3 [P/10 days] for a solar-type star) and drives them as a single coherent unit with growth rates that are a factor of ≈N faster than the standard three-wave parametric instability. These are local instabilities viewed through the lens of global analysis; the coherent global growth rate follows local rates in the regions where the shear is strongest. In solar-type stars, the dynamical tide is unstable to this collective version of the parametric instability for even sub-Jupiter companion masses with P ∼< a month. (4) Independent of the parametric instability, the dynamical and equilibrium tides excite a wide range of stellar p-modes and g-modes by nonlinear inhomogeneous forcing
Entanglement-assisted quantum MDS codes from negacyclic codes
Lu, Liangdong; Li, Ruihu; Guo, Luobin; Ma, Yuena; Liu, Yang
2018-03-01
The entanglement-assisted formalism generalizes the standard stabilizer formalism; it can transform arbitrary classical linear codes into entanglement-assisted quantum error-correcting codes (EAQECCs) by using pre-shared entanglement between the sender and the receiver. In this work, we construct six classes of q-ary entanglement-assisted quantum MDS (EAQMDS) codes based on classical negacyclic MDS codes by exploiting two or more pre-shared maximally entangled states. We show that two of these six classes of q-ary EAQMDS codes have minimum distance larger than q+1. Most of these q-ary EAQMDS codes are new in the sense that their parameters are not covered by the codes available in the literature.
Spectral properties of binary asteroids
Pajuelo, Myriam; Birlan, Mirel; Carry, Benoît; DeMeo, Francesca E.; Binzel, Richard P.; Berthier, Jérôme
2018-04-01
We present the first attempt to characterize the distribution of taxonomic classes among the population of binary asteroids (15% of all small asteroids). For that, an analysis of 0.8–2.5 μm near-infrared spectra obtained with the SpeX instrument on the NASA/IRTF is presented. A taxonomic class and meteorite analog are determined for each target, increasing the sample of binary asteroids with known taxonomy by 21%. Most binary systems are found in the S-, X-, and C-classes, followed by Q- and V-types. The rate of binary systems in each taxonomic class agrees within uncertainty with the background population of small near-Earth objects and inner main belt asteroids, except for the C-types, which are under-represented among binaries.
Planets in Binary Star Systems
Haghighipour, Nader
2010-01-01
The discovery of extrasolar planets over the past decade has had major impacts on our understanding of the formation and dynamical evolution of planetary systems. There are features and characteristics unseen in our solar system and unexplainable by the current theories of planet formation and dynamics. Among these new surprises is the discovery of planets in binary and multiple-star systems. The discovery of such "binary-planetary" systems has confronted astrodynamicists with many new challenges, and has led them to re-examine the theories of planet formation and dynamics. Among these challenges are: How are planets formed in binary star systems? What would be the notion of habitability in such systems? Under what conditions can binary star systems have habitable planets? How will volatiles necessary for life appear on such planets? This volume seeks to gather the current research in the area of planets in binary and multistar systems and to familiarize readers with its associated theoretical and observation...
Unobserved Heterogeneity in the Binary Logit Model with Cross-Sectional Data and Short Panels
DEFF Research Database (Denmark)
Holm, Anders; Jæger, Mads Meier; Pedersen, Morten
This paper proposes a new approach to dealing with unobserved heterogeneity in applied research using the binary logit model with cross-sectional data and short panels. Unobserved heterogeneity is particularly important in non-linear regression models such as the binary logit model because, unlike in linear regression models, estimates of the effects of observed independent variables are biased even when omitted independent variables are uncorrelated with the observed independent variables. We propose an extension of the binary logit model based on a finite mixture approach in which we conceptualize ...
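The attenuation the abstract describes (biased logit slopes even when the omitted variable is uncorrelated with the observed one) can be demonstrated with a short simulation; this is a generic illustration with hypothetical parameter values, not the authors' finite-mixture estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.standard_normal(n)                 # observed covariate
z = rng.standard_normal(n)                 # unobserved heterogeneity, uncorrelated with x
beta, gamma = 1.0, 2.0
p = 1.0 / (1.0 + np.exp(-(beta * x + gamma * z)))
y = (rng.random(n) < p).astype(float)

def fit_logit_slope(x, y, n_iter=300, lr=1.0):
    """One-covariate logistic regression (intercept + slope) by gradient ascent."""
    X = np.column_stack([np.ones_like(x), x])
    w = np.zeros(2)
    for _ in range(n_iter):
        p_hat = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p_hat) / len(y)
    return w[1]

b_naive = fit_logit_slope(x, y)   # omitting z attenuates the estimate toward 0
```

Even though `z` is independent of `x`, the naive slope `b_naive` comes out well below the true `beta = 1.0`, which is exactly the non-linear-model bias the abstract contrasts with the linear case.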
Long-term variability of low-mass X-ray binaries
Directory of Open Access Journals (Sweden)
Filippova E.
2014-01-01
We consider modulations of the mass captured by the compact object from the companion star's stellar wind in low-mass X-ray binaries with late-type giants. Based on 3D simulations with two hydrodynamic codes using Lagrangian and Eulerian approaches – the SPH code GADGET and the Eulerian code PLUTO – we conclude that the hydrodynamical interaction of the wind matter within a binary system, even without eccentricity, results in variability of the mass accretion rate on characteristic time-scales close to the orbital period. Observational appearances of this wind might be similar to those of an accretion disc corona/wind.
Embedding intensity image into a binary hologram with strong noise resistant capability
Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-11-01
A digital hologram can be employed as a host image for image watermarking applications to protect information security. Past research demonstrates that a gray-level intensity image can be embedded into a binary Fresnel hologram by the error diffusion method or the bit truncation coding method. However, the fidelity of the watermark image retrieved from a binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error-correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers from serious noise contamination. Numerical simulation results show that the image quality of the intensity image retrieved with our proposed method is superior to that of state-of-the-art methods reported.
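The protect-then-embed idea can be illustrated with a toy stand-in: a minimal sketch using a [7,4] Hamming code in place of the paper's BCH code (the matrices and function names are illustrative, not the authors' implementation), showing how an encoded bit stream survives a single flipped bit of the kind noise in the hologram would cause:

```python
import numpy as np

# Generator and parity-check matrices of the [7,4] Hamming code over GF(2).
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(nibble):
    """Map 4 data bits to a 7-bit codeword."""
    return (nibble @ G) % 2

def decode(word):
    """Correct up to one flipped bit, then strip the parity bits."""
    syndrome = (H @ word) % 2
    if syndrome.any():
        # The syndrome equals the column of H at the error position.
        err = int(np.argmax((H.T == syndrome).all(axis=1)))
        word = word.copy()
        word[err] ^= 1
    return word[:4]

bits = np.array([1, 0, 1, 1])      # a 4-bit chunk of the compressed stream
cw = encode(bits)
cw_noisy = cw.copy()
cw_noisy[2] ^= 1                   # one-bit channel error from a noisy hologram
recovered = decode(cw_noisy)       # the original 4 bits come back intact
```

A real BCH code plays the same role but corrects multiple errors per block, which is why the paper pairs it with JPEG compression of the watermark.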
BINARY ASTROMETRIC MICROLENSING WITH GAIA
Energy Technology Data Exchange (ETDEWEB)
Sajadian, Sedighe, E-mail: sajadian@ipm.ir [School of Astronomy, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of); Department of Physics, Sharif University of Technology, P.O. Box 11155-9161, Tehran (Iran, Islamic Republic of)
2015-04-15
We investigate whether or not Gaia can specify the binary fractions of massive stellar populations in the Galactic disk through astrometric microlensing. Furthermore, we study whether or not some information about their mass distributions can be inferred via this method. In this regard, we simulate the binary astrometric microlensing events due to massive stellar populations according to the Gaia observing strategy by considering (i) stellar-mass black holes, (ii) neutron stars, (iii) white dwarfs, and (iv) main-sequence stars as microlenses. The Gaia efficiency for detecting the binary signatures in binary astrometric microlensing events is ∼10%–20%. By calculating the optical depth due to the mentioned stellar populations, the numbers of binary astrometric microlensing events observed with Gaia with detectable binary signatures, for a binary fraction of about 0.1, are estimated to be 6, 11, 77, and 1316, respectively. Consequently, Gaia can potentially specify the binary fractions of these massive stellar populations. However, the binary fraction of black holes measured with this method has a large uncertainty owing to the low number of estimated events. Knowing the binary fractions in massive stellar populations helps with studying gravitational waves. Moreover, we investigate the number of massive microlenses for which Gaia specifies masses through astrometric microlensing of single lenses toward the Galactic bulge. The resulting efficiencies of measuring the mass of the mentioned populations are 9.8%, 2.9%, 1.2%, and 0.8%, respectively. The numbers of their astrometric microlensing events observed in the Gaia era in which the lens mass can be inferred with a relative error less than 0.5 toward the Galactic bulge are estimated as 45, 34, 76, and 786, respectively. Hence, Gaia potentially gives us some information about the mass distribution of these massive stellar populations.
Period variation studies of six contact binaries in M4
Rukmini, Jagirdar; Shanti Priya, Devarapalli
2018-04-01
We present the first period study of six contact binaries in the closest globular cluster, M4, using data collected from June 1995–June 2009 and Oct 2012–Sept 2013. New times of minima are determined for all six variables, and eclipse timing (O-C) diagrams along with quadratic fits are presented. For all the variables, the study of the (O-C) variations reveals changes in the periods. In addition, the fundamental parameters for four of the contact binaries, obtained using the Wilson-Devinney code (v2003), are presented. Planned observations of these binaries using the 3.6-m Devasthal Optical Telescope (DOT) and the 4-m International Liquid Mirror Telescope (ILMT) operated by the Aryabhatta Research Institute of Observational Sciences (ARIES; Nainital) can throw light on their evolutionary status through long-term period variation studies.
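The eclipse-timing technique above — fitting a quadratic to (O-C) residuals to detect a changing period — can be sketched as follows. The timings and parameter values are simulated and hypothetical, not data from M4:

```python
import numpy as np

# Simulated eclipse timings (in days) for a binary whose period changes
# linearly with epoch E: T(E) = T0 + P*E + c*E^2, so dP/dE = 2c.
T0, P, c = 2449900.0, 0.30, 1.5e-9
epochs = np.arange(0, 12000, 400)
observed = T0 + P * epochs + c * epochs**2

# O-C: observed minima minus times predicted by a constant-period ephemeris.
calculated = T0 + P * epochs
o_minus_c = observed - calculated

# A quadratic fit to the O-C diagram recovers c, and hence the period change.
coeffs = np.polyfit(epochs, o_minus_c, 2)
dP_dE = 2.0 * coeffs[0]
```

A nonzero quadratic coefficient in the fit is the signature of a secular period change; a purely linear O-C trend would instead indicate only an error in the assumed period or epoch.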
Collisional Dynamics around Binary Black Holes in Galactic Centers
Hemsendorf, Marc; Sigurdsson, Steinn; Spurzem, Rainer
2002-12-01
We follow the sinking of two massive black holes in a spherical stellar system where the black holes become bound under the influence of dynamical friction. Once bound, the binary hardens by three-body encounters with surrounding stars. We find that the binary wanders inside the core, providing an enhanced supply of reaction partners for the hardening. The binary evolves into a highly eccentric orbit leading to coalescence well beyond a Hubble time. These are the first results from a hybrid "self-consistent field" (SCF) and direct Aarseth N-body integrator (NBODY6), which combines the advantages of the direct force calculation with the efficiency of the field method. The code is designed for use on parallel architectures and is therefore applicable to collisional N-body integrations with extraordinarily large particle numbers (>10^5). This creates the possibility of simulating the dynamics of both globular clusters with realistic collisional relaxation and stellar systems surrounding supermassive black holes in galactic nuclei.
Multilevel Cross-Dependent Binary Longitudinal Data
Serban, Nicoleta
2013-10-16
We provide insights into new methodology for the analysis of multilevel binary data observed longitudinally, when the repeated longitudinal measurements are correlated. The proposed model is logistic functional regression conditioned on three latent processes describing the within- and between-variability, and describing the cross-dependence of the repeated longitudinal measurements. We estimate the model components without employing mixed-effects modeling but assuming an approximation to the logistic link function. The primary objectives of this article are to highlight the challenges in the estimation of the model components, to compare two approximations to the logistic regression function, linear and exponential, and to discuss their advantages and limitations. The linear approximation is computationally efficient whereas the exponential approximation applies for rare events functional data. Our methods are inspired by and applied to a scientific experiment on spectral backscatter from long range infrared light detection and ranging (LIDAR) data. The models are general and relevant to many new binary functional data sets, with or without dependence between repeated functional measurements.
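The two approximations to the logistic link contrasted above — linear, and exponential for rare events — can be compared numerically. This is an illustrative sketch of the approximations themselves, with hypothetical cutoffs, not the paper's estimation procedure:

```python
import numpy as np

x = np.linspace(-6, 1, 200)
sigmoid = 1.0 / (1.0 + np.exp(-x))

linear_approx = 0.5 + x / 4.0   # first-order Taylor expansion around x = 0
exp_approx = np.exp(x)          # rare-events approximation, valid for x << 0

# The linear form is accurate near zero; the exponential form in the left tail.
err_linear_near0 = np.abs(sigmoid - linear_approx)[np.abs(x) < 0.5].max()
err_exp_tail = np.abs(sigmoid - exp_approx)[x < -3].max()
```

This matches the trade-off described in the abstract: the linear approximation is cheap and accurate for moderate probabilities, while the exponential approximation is the natural choice when events are rare (probabilities far below one half).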
DNA Barcoding through Quaternary LDPC Codes.
Directory of Open Access Journals (Sweden)
Elizabeth Tapia
For many parallel applications of Next-Generation Sequencing (NGS) technologies, short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine scale with regard to the size of barcodes (BCH) or have intrinsically poor error-correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low-Density Parity-Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate on the order of 10^-9 at the expense of a rate of read losses just on the order of 10^-6.
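The core idea of error-correcting barcodes can be sketched with a toy quaternary code: picking barcodes whose pairwise Hamming distance is at least 3 guarantees that any single mismatch can be corrected by nearest-barcode assignment. This is a minimal greedy construction for illustration, not the paper's LDPC or BCH design:

```python
from itertools import product

def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

# Greedily pick 4-nt barcodes with pairwise Hamming distance >= 3.
# Minimum distance d = 3 allows correcting floor((d-1)/2) = 1 mismatch.
barcodes = []
for cand in product("ACGT", repeat=4):
    cand = "".join(cand)
    if all(hamming(cand, b) >= 3 for b in barcodes):
        barcodes.append(cand)

def demultiplex(read):
    """Assign a read to the nearest barcode (corrects one mismatch)."""
    return min(barcodes, key=lambda b: hamming(b, read))

sample = barcodes[5]
# Introduce a single sequencing mismatch in the first base.
noisy = ("T" if sample[0] != "T" else "A") + sample[1:]
assigned = demultiplex(noisy)      # still maps back to the correct sample
```

Real barcoding codes (shortened BCH, quaternary LDPC) follow the same distance logic but achieve far better trade-offs between barcode length, set size, and number of correctable mismatches than this greedy toy set.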
Cognitive Code-Division Links with Blind Primary-System Identification
2011-11-01
covert communications, steganography, compressive sampling, adaptive multiuser detection, robust spread-spectrum communications, supervised and...system. We first develop a blind primary-user identification scheme to detect the binary code sequences (signatures) utilized by primary users. To...transmitting power and binary code-channel assignment in accordance with the detected primary code channels to avoid "harmful" interference. At the same
Karloff, Howard
1991-01-01
To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...
Feedback equivalence of convolutional codes over finite rings
Directory of Open Access Journals (Sweden)
DeCastro-García Noemí
2017-12-01
The approach to convolutional codes from the linear systems point of view provides us with effective tools in order to construct convolutional codes with adequate properties that let us use them in many applications. In this work, we have generalized feedback equivalence between families of convolutional codes and linear systems over certain rings, and we show that every locally Brunovsky linear system may be considered as a representation of a code under feedback convolutional equivalence.
Energy Technology Data Exchange (ETDEWEB)
Timchalk, Chuck; Poet, Torka S.
2008-05-01
Physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) models have been developed and validated for the organophosphorus (OP) insecticides chlorpyrifos (CPF) and diazinon (DZN). Based on similar pharmacokinetic and mode of action properties, it is anticipated that these OPs could interact at a number of important metabolic steps, including CYP450-mediated activation/detoxification and blood/tissue cholinesterase (ChE) binding/inhibition. We developed a binary PBPK/PD model for CPF, DZN and their metabolites based on previously published models for the individual insecticides. The metabolic interactions (CYP450) between CPF and DZN were evaluated in vitro, and the results suggest that CPF is more substantially metabolized to its oxon metabolite than is DZN. These data are consistent with their observed in vivo relative potency (CPF>DZN). Each insecticide inhibited the other's in vitro metabolism in a concentration-dependent manner. The PBPK model code used to describe the metabolism of CPF and DZN was modified to reflect the type of inhibition kinetics (i.e. competitive vs. non-competitive). The binary model was then evaluated against previously published rodent dosimetry and ChE inhibition data for the mixture. The PBPK/PD model simulations of acute oral exposure to single (15 mg/kg) vs. binary-mixture (15+15 mg/kg) doses of CPF and DZN at this lower dose resulted in no differences in the predicted pharmacokinetics of either the parent OPs or their respective metabolites; whereas a binary oral dose of CPF+DZN at 60+60 mg/kg did result in observable changes in the DZN pharmacokinetics. Cmax was more reasonably fit by modifying the absorption parameters. It is anticipated that at low, environmentally relevant binary doses, such as those most likely encountered in occupational or environmental exposures, the pharmacokinetics will be linear and ChE inhibition dose-additive.
International Nuclear Information System (INIS)
Rattan, D.S.
1993-11-01
NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault and their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a user's manual describing simulation procedures, input data preparation, output, and example test cases.
DEFF Research Database (Denmark)
Cox, Geoff
Speaking Code begins by invoking the "Hello World" convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software... expression in the public realm. The book's line of argument defends language against its invasion by economics, arguing that speech continues to underscore the human condition, however paradoxical this may seem in an era of pervasive computing.
Content identification: binary content fingerprinting versus binary content encoding
Ferdowsi, Sohrab; Voloshynovskiy, Svyatoslav; Kostadinov, Dimche
2014-02-01
In this work, we address the problem of content identification. We consider content identification as a special case of multiclass classification. The conventional approach towards identification is based on content fingerprinting, where a short binary content description known as a fingerprint is extracted from the content. We propose an alternative solution based on elements of machine learning theory and digital communications. Similar to binary content fingerprinting, a binary content representation is generated based on a set of trained binary classifiers. We consider several training/encoding strategies and demonstrate that the proposed system can achieve the upper theoretical performance limits of content identification. Experiments were carried out both on a synthetic dataset with different parameters and on the FAMOS dataset of microstructures from consumer packages.
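The fingerprint-based identification baseline described above can be sketched generically: extract a short binary descriptor per item, then identify a distorted probe by minimum Hamming distance over the database. The random-sign-projection descriptor below is an assumption for illustration, not the paper's specific fingerprint or its trained classifiers:

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n_items, n_bits = 64, 100, 256

items = rng.standard_normal((n_items, dim))    # database content vectors
W = rng.standard_normal((n_bits, dim))         # fixed random projections

def fingerprint(x):
    """Short binary content descriptor: signs of random projections."""
    return (W @ x > 0).astype(np.uint8)

db = np.array([fingerprint(x) for x in items])

def identify(probe):
    """Return the index of the item whose fingerprint is nearest in Hamming distance."""
    fp = fingerprint(probe)
    return int(np.argmin((db != fp).sum(axis=1)))

# A mildly distorted copy of item 42 still matches its fingerprint.
query = items[42] + 0.1 * rng.standard_normal(dim)
match = identify(query)
```

Small distortions of the content flip only a small fraction of the descriptor bits, while unrelated items sit at a Hamming distance near half the fingerprint length, which is what makes minimum-distance identification work.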
Optimally cloned binary coherent states
Müller, C. R.; Leuchs, G.; Marquardt, Ch.; Andersen, U. L.
2017-10-01
Binary coherent state alphabets can be represented in a two-dimensional Hilbert space. We capitalize on this formal connection between the otherwise distinct domains of qubits and continuous-variable states to map binary phase-shift keyed coherent states onto the Bloch sphere and to derive their quantum-optimal clones. We analyze the Wigner function and the cumulants of the clones, and we conclude that optimal cloning of binary coherent states requires a nonlinearity above second order. We propose several practical and near-optimal cloning schemes and compare their cloning fidelity to the optimal cloner.
Portmanteau Constructions, Phrase Structure, and Linearization.
Chan, Brian Hok-Shing
2015-01-01
In bilingual code-switching which involves language-pairs with contrasting head-complement orders (i.e., head-initial vs. head-final), a head may be lexicalized from both languages with its complement sandwiched in the middle. These so-called "portmanteau" sentences (Nishimura, 1985, 1986; Sankoff et al., 1990, etc.) have been attested for decades, but they had never received a systematic, formal analysis in terms of current syntactic theory before a few recent attempts (Hicks, 2010, 2012). Notwithstanding this lack of attention, these structures are in fact highly relevant to theories of linearization and phrase structure. More specifically, they challenge binary-branching (Kayne, 1994, 2004, 2005) as well as the Antisymmetry hypothesis (ibid.). Not explained by current grammatical models of code-switching, including the Equivalence Constraint (Poplack, 1980), the Matrix Language Frame Model (Myers-Scotton, 1993, 2002, etc.), and the Bilingual Speech Model (Muysken, 2000, 2013), the portmanteau construction indeed looks uncommon or abnormal, defying any systematic account. However, the recurrence of these structures in various datasets and constraints on them do call for an explanation. This paper suggests an account which lies with syntax and also with the psycholinguistics of bilingualism. Assuming that linearization is a process at the Sensori-Motor (SM) interface (Chomsky, 2005, 2013), this paper sees that word order is not fixed in a syntactic tree but it is set in the production process, and much information of word order rests in the processor, for instance, outputting a head before its complement (i.e., head-initial word order) or the reverse (i.e., head-final word order). As for the portmanteau construction, it is the output of bilingual speakers co-activating two sets of head-complement orders which summon the phonetic forms of the same word in both languages. Under this proposal, the underlying structure of a portmanteau construction is as simple as an
Energy Technology Data Exchange (ETDEWEB)
Lindemuth, I.R.
1979-02-28
This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and its physical model. Temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm, and the functions of the algorithm's FORTRAN subroutines and variables are outlined.
Indian Academy of Sciences (India)
Network Coding. K V Rashmi, Nihar B Shah, P Vijay Kumar. Resonance – Journal of Science Education, Volume 15, Issue 7, July 2010, pp. 604–621. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621
Indian Academy of Sciences (India)
Codes and Channels. A noisy communication channel is illustrated in Fig- ... nication channel. Suppose we want to transmit a message over the unreliable communication channel so that even if the channel corrupts some of the bits we are able to recover ... is d-regular, meaning thereby that every vertex has degree d.
Indian Academy of Sciences (India)
Expander Codes – The Sipser–Spielman Construction. Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India. Resonance – Journal of Science Education, Volume 10, Issue 1.
Indian Academy of Sciences (India)
Network coding is a technique to increase the amount of information flow in a network by making the key observation that information flow is fundamentally different from commodity flow. Whereas, under traditional methods of operation of data networks, intermediate nodes are restricted to simply forwarding their incoming.
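The observation above is usually illustrated with the classic butterfly network: two sources each hold one bit, every edge carries one bit per use, and the shared middle edge is the bottleneck. Plain forwarding can serve only one sink per use of that edge, but letting the intermediate node transmit the XOR of both bits serves both sinks at once. A minimal sketch of that textbook example:

```python
def butterfly(b1, b2):
    """Butterfly network with coding at the bottleneck node.

    Each sink receives one source bit directly plus the coded bit
    from the shared middle edge, and recovers both source bits."""
    middle = b1 ^ b2              # the intermediate node codes instead of forwarding
    sink1 = (b1, b1 ^ middle)     # direct b1, and b1 XOR (b1 XOR b2) = b2
    sink2 = (b2 ^ middle, b2)     # b2 XOR (b1 XOR b2) = b1, and direct b2
    return sink1, sink2

sink1, sink2 = butterfly(1, 0)    # both sinks recover (1, 0)
```

With forwarding alone, the middle edge could carry either b1 or b2 but not both, so one sink would be starved; the XOR makes the single shared bit useful to both, which is the "information flow is not commodity flow" point.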
Optical analogue of relativistic Dirac solitons in binary waveguide arrays
Energy Technology Data Exchange (ETDEWEB)
Tran, Truong X., E-mail: truong.tran@mpl.mpg.de [Department of Physics, Le Quy Don University, 236 Hoang Quoc Viet str., 10000 Hanoi (Viet Nam); Max Planck Institute for the Science of Light, Günther-Scharowsky str. 1, 91058 Erlangen (Germany); Longhi, Stefano [Department of Physics, Politecnico di Milano and Istituto di Fotonica e Nanotecnologie del Consiglio Nazionale delle Ricerche, Piazza L. da Vinci 32, I-20133 Milano (Italy); Biancalana, Fabio [Max Planck Institute for the Science of Light, Günther-Scharowsky str. 1, 91058 Erlangen (Germany); School of Engineering and Physical Sciences, Heriot-Watt University, EH14 4AS Edinburgh (United Kingdom)
2014-01-15
We study analytically and numerically an optical analogue of Dirac solitons in binary waveguide arrays in the presence of Kerr nonlinearity. Pseudo-relativistic soliton solutions of the coupled-mode equations describing dynamics in the array are analytically derived. We demonstrate that, with these soliton solutions, the coupled-mode equations can be converted into the nonlinear relativistic 1D Dirac equation. This paves the way for using binary waveguide arrays as a classical simulator of quantum nonlinear effects arising from the Dirac equation, something that is thought to be impossible to achieve in conventional (i.e. linear) quantum field theory. -- Highlights: •An optical analogue of Dirac solitons in nonlinear binary waveguide arrays is suggested. •Analytical solutions for pseudo-relativistic solitons are presented. •A correspondence between the optical coupled-mode equations and the nonlinear relativistic Dirac equation is established.
Evidence of a stable binary CdCa quasicrystalline phase
DEFF Research Database (Denmark)
Jiang, Jianzhong; Jensen, C.H.; Rasmussen, A.R.
2001-01-01
Quasicrystals with a primitive icosahedral structure and a quasilattice constant of 5.1215 Angstrom have been synthesized in a binary Cd-Ca system. The thermal stability of the quasicrystal has been investigated by in situ high-temperature x-ray powder diffraction using synchrotron radiation. It is demonstrated that the binary CdCa quasicrystal is thermodynamically stable up to its melting temperature. The linear thermal expansion coefficient of the quasicrystal is 2.765x10(-5) K-1. (C) 2001 American Institute of Physics.
Bourlès, Henri
2013-01-01
Linear systems have all the necessary elements (modeling, identification, analysis and control), from an educational point of view, to help us understand the discipline of automation and apply it efficiently. This book is progressive and organized in such a way that different levels of readership are possible. It is addressed both to beginners and those with a good understanding of automation wishing to enhance their knowledge on the subject. The theory is rigorously developed and illustrated by numerous examples which can be reproduced with the help of appropriate computation software. 60 exe
Reliable Physical Layer Network Coding
Nazer, Bobak; Gastpar, Michael
2011-01-01
When two or more users in a wireless network transmit simultaneously, their electromagnetic signals are linearly superimposed on the channel. As a result, a receiver that is interested in one of these signals sees the others as unwanted interference. This property of the wireless medium is typically viewed as a hindrance to reliable communication over a network. However, using a recently developed coding strategy, interference can in fact be harnessed for network coding. In a wired network, (...
Binary typing of staphylococcus aureus
W.B. van Leeuwen (Willem)
2002-01-01
This thesis describes the development, application and validation of strain-differentiating DNA probes for the characterization of Staphylococcus aureus strains in a system that yields a binary output. By comparing the differential hybridization of these DNA probes to staphylococcal
Mesoscopic model for binary fluids
Echeverria, C.; Tucci, K.; Alvarez-Llamoza, O.; Orozco-Guillén, E. E.; Morales, M.; Cosenza, M. G.
2017-10-01
We propose a model for studying binary fluids based on the mesoscopic molecular simulation technique known as multiparticle collision, where the space and state variables are continuous, and time is discrete. We include a repulsion rule to simulate segregation processes that does not require calculation of the interaction forces between particles, so binary fluids can be described on a mesoscopic scale. The model is conceptually simple and computationally efficient; it maintains Galilean invariance and conserves the mass and energy in the system at the micro- and macro-scale, whereas momentum is conserved globally. For a wide range of temperatures and densities, the model yields results in good agreement with the known properties of binary fluids, such as the density profile, interface width, phase separation, and phase growth. We also apply the model to the study of binary fluids in crowded environments with consistent results.
Constructing snake-in-the-box codes and families of such codes covering the hypercube
Haryanto, L.
2007-01-01
A snake-in-the-box code (or snake) is a list of binary words of length n such that each word differs from its successor in the list in precisely one bit position. Moreover, any two words in the list differ in at least two positions, unless they are neighbours in the list. The list is considered to
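The two defining properties quoted above — successors differ in exactly one bit, and non-neighbouring words differ in at least two positions — can be checked mechanically. A minimal sketch (the word list is a small hypothetical example in the 3-cube, not drawn from the thesis):

```python
# Verify the snake-in-the-box properties: consecutive words differ in
# exactly one bit; any two non-neighbouring words differ in >= 2 bits.

def hamming(a: int, b: int) -> int:
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")

def is_snake(words: list) -> bool:
    # Successors in the list must differ in exactly one bit position.
    for i in range(len(words) - 1):
        if hamming(words[i], words[i + 1]) != 1:
            return False
    # Non-neighbouring words must differ in at least two positions.
    for i in range(len(words)):
        for j in range(i + 2, len(words)):
            if hamming(words[i], words[j]) < 2:
                return False
    return True

# A snake of length 4 in the 3-dimensional hypercube: 000, 001, 011, 111.
print(is_snake([0b000, 0b001, 0b011, 0b111]))  # True
```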
Formation of the first three gravitational-wave observations through isolated binary evolution.
Stevenson, Simon; Vigna-Gómez, Alejandro; Mandel, Ilya; Barrett, Jim W; Neijssel, Coenraad J; Perkins, David; de Mink, Selma E
2017-04-05
During its first four months of taking data, Advanced LIGO has detected gravitational waves from two binary black hole mergers, GW150914 and GW151226, along with the statistically less significant binary black hole merger candidate LVT151012. Here we use the rapid binary population synthesis code COMPAS to show that all three events can be explained by a single evolutionary channel: classical isolated binary evolution via mass transfer including a common envelope phase. We show all three events could have formed in low-metallicity environments (Z=0.001) from progenitor binaries with typical total masses ≳160 M⊙, ≳60 M⊙, and ≳90 M⊙ for GW150914, GW151226, and LVT151012, respectively.
Directory of Open Access Journals (Sweden)
Amin Asadi
2017-10-01
Purpose: To study the benefits of the Directional Bremsstrahlung Splitting (DBS) dose variance reduction technique in the BEAMnrc Monte Carlo (MC) code for the Oncor® linac at 6MV and 18MV energies. Materials and Method: An MC model of the Oncor® linac was built using the BEAMnrc MC code and verified against the measured data for 6MV and 18MV energies at various field sizes. The Oncor® machine was then modeled running the DBS technique, and the efficiency of the total fluence and spatial fluence for electrons and photons, and the efficiency of dose variance reduction of the MC calculations for the PDD on the central beam axis and the lateral dose profile across the nominal field, were measured and compared. Result: With the DBS technique applied, the total fluence of electrons and photons increased by 626.8 (6MV) and 983.4 (6MV), and 285.6 (18MV) and 737.8 (18MV), respectively; the spatial fluence of electrons and photons improved by 308.6±1.35% (6MV) and 480.38±0.43% (6MV), and 153±0.9% (18MV) and 462.6±0.27% (18MV), respectively. Moreover, by running the DBS technique, the efficiency of dose variance reduction for PDD MC dose calculations before and after the maximum dose point was enhanced by 187.8±0.68% (6MV) and 184.6±0.65% (6MV), and 156±0.43% (18MV) and 153±0.37% (18MV), respectively, and the efficiency of the MC calculations for the lateral dose profile on the central beam axis and across the treatment field rose by 197±0.66% (6MV) and 214.6±0.73% (6MV), and 175±0.36% (18MV) and 181.4±0.45% (18MV). Conclusion: Applying the DBS dose variance reduction technique when modeling the Oncor® linac with the BEAMnrc MC code markedly improved the electron and photon fluence, and it therefore enhanced the efficiency of dose variance reduction for the MC calculations. As a result, running DBS in other kinds of MC simulation codes may be beneficial in reducing the uncertainty of MC calculations.
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
Eclipsing Binaries From the CSTAR Project at Dome A, Antarctica
Yang, Ming; Zhang, Hui; Wang, Songhu; Zhou, Ji-Lin; Zhou, Xu; Wang, Lingzhi; Wang, Lifan; Wittenmyer, R. A.; Liu, Hui-Gen; Meng, Zeyang; Ashley, M. C. B.; Storey, J. W. V.; Bayliss, D.; Tinney, Chris; Wang, Ying; Wu, Donghong; Liang, Ensi; Yu, Zhouyi; Fan, Zhou; Feng, Long-Long; Gong, Xuefei; Lawrence, J. S.; Liu, Qiang; Luong-Van, D. M.; Ma, Jun; Wu, Zhenyu; Yan, Jun; Yang, Huigen; Yang, Ji; Yuan, Xiangyan; Zhang, Tianmeng; Zhu, Zhenxi; Zou, Hu
2015-04-01
The Chinese Small Telescope ARray (CSTAR) has observed an area around the Celestial South Pole at Dome A since 2008. About 20,000 light curves in the i band were obtained during the observation season lasting from 2008 March to July. The photometric precision reaches about 4 mmag at i = 7.5 and 20 mmag at i = 12 within a 30 s exposure time. These light curves are analyzed using Lomb-Scargle, Phase Dispersion Minimization, and Box Least Squares methods to search for periodic signals. False positives may appear as a variable signature caused by contaminating stars and the observation mode of CSTAR. Therefore, the period and position of each variable candidate are checked to eliminate false positives. Eclipsing binaries are removed by visual inspection, frequency spectrum analysis, and a locally linear embedding technique. We identify 53 eclipsing binaries in the field of view of CSTAR, containing 24 detached binaries, 8 semi-detached binaries, 18 contact binaries, and 3 ellipsoidal variables. To derive the parameters of these binaries, we use the Eclipsing Binaries via Artificial Intelligence method. The primary and secondary eclipse timing variations (ETVs) for semi-detached and contact systems are analyzed. Correlated primary and secondary ETVs confirmed by false alarm tests may indicate an unseen perturbing companion. Through ETV analysis, we identify two triple systems (CSTAR J084612.64-883342.9 and CSTAR J220502.55-895206.7). The orbital parameters of the third body in CSTAR J220502.55-895206.7 are derived using a simple dynamical model.
Binding and Normalization of Binary Sparse Distributed Representations by Context-Dependent Thinning
Rachkovskij, Dmitri A.; Kussul, Ernst M.
2001-01-01
Distributed representations were often criticized as inappropriate for encoding of data with a complex structure. However Plate's Holographic Reduced Representations and Kanerva's Binary Spatter Codes are recent schemes that allow on-the-fly encoding of nested compositional structures by real-valued or dense binary vectors of fixed dimensionality. In this paper we consider procedures of the Context-Dependent Thinning which were developed for representation of complex hierarchical items in the...
International Nuclear Information System (INIS)
Altomare, S.; Minton, G.
1975-02-01
PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)
Clustering and Dimensionality Reduction to Discover Interesting Patterns in Binary Data
Palumbo, Francesco; D'Enza, Alfonso Iodice
The attention towards binary data coding increased consistently in the last decade due to several reasons. The analysis of binary data characterizes several fields of application, such as market basket analysis, DNA microarray data, image mining, text mining and web-clickstream mining. The paper illustrates two different approaches exploiting a profitable combination of clustering and dimensionality reduction for the identification of non-trivial association structures in binary data. An application in the Association Rules framework supports the theory with the empirical evidence.
A linear-time transformation of linear inequalities into conjunctive normal form
J.P. Warners
1996-01-01
We present a technique that transforms any binary programming problem with integral coefficients to a satisfiability problem of propositional logic in linear time. Preliminary computational experience using this transformation shows that a pure logical solver can be a valuable tool for
Be discs in coplanar circular binaries: Phase-locked variations of emission lines
Panoglou, Despina; Faes, Daniel M.; Carciofi, Alex C.; Okazaki, Atsuo T.; Baade, Dietrich; Rivinius, Thomas; Borges Fernandes, Marcelo
2018-01-01
In this paper, we present the first results of radiative transfer calculations on decretion discs of binary Be stars. A smoothed particle hydrodynamics code computes the structure of Be discs in coplanar circular binary systems for a range of orbital and disc parameters. The resulting disc configuration consists of two spiral arms, and this can be given as input into a Monte Carlo code, which calculates the radiative transfer along the line of sight for various observational coordinates. Making use of the property of steady disc structure in coplanar circular binaries, observables are computed as functions of the orbital phase. Some orbital-phase series of line profiles are given for selected parameter sets under various viewing angles, to allow comparison with observations. Flat-topped profiles with and without superimposed multiple structures are reproduced, showing, for example, that triple-peaked profiles do not have to be necessarily associated with warped discs and misaligned binaries. It is demonstrated that binary tidal effects give rise to phase-locked variability of the violet-to-red (V/R) ratio of hydrogen emission lines. The V/R ratio exhibits two maxima per cycle; in certain cases those maxima are equal, leading to a clear new V/R cycle every half orbital period. This study opens a way to identifying binaries and to constraining the parameters of binary systems that exhibit phase-locked variations induced by tidal interaction with a companion star.
Effects of Hardness of Primordial Binaries on Evolution of Star Clusters
Tanikawa, A.; Fukushige, T.
2008-05-01
We performed N-body simulations of star clusters with primordial binaries using a new code, GORILLA. It is based on Makino and Aarseth's (1992) integration scheme on GRAPE, and includes a special treatment for relatively isolated binaries. Using the new code, we investigated the effects of the hardness of primordial binaries on the whole evolution of the clusters. We simulated seven N=16384 equal-mass clusters containing 10% (in mass) primordial binaries whose binding energies are 1, 3, 10, 30, 100, 300, and 1000kT, respectively. Additionally, we also simulated a cluster without primordial binaries and one in which all binaries are replaced by stars of double mass, as references for the soft and hard limits, respectively. We found that, in both the soft (≤ 3kT) and hard (≥ 1000kT) limits, clusters experience deep core collapse and show gravothermal oscillations. On the other hand, at intermediate hardness (10-300kT), core collapse halts halfway due to the energy released by the primordial binaries.
Khina, Anatoly
2016-08-15
We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality, by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control system setting.
Advanced hardware design for error correcting codes
Coussy, Philippe
2015-01-01
This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.
Implementation of LT codes based on chaos
International Nuclear Information System (INIS)
Zhou Qian; Li Liang; Chen Zengqiang; Zhao Jiaxiang
2008-01-01
Fountain codes provide an efficient way to transfer information over erasure channels like the Internet. LT codes are the first codes fully realizing the digital fountain concept. They are asymptotically optimal rateless erasure codes with highly efficient encoding and decoding algorithms. In theory, for each encoding symbol of LT codes, its degree is randomly chosen according to a predetermined degree distribution, and the neighbours used to generate that encoding symbol are chosen uniformly at random. Practical implementations of LT codes usually realize the randomness through a pseudo-random number generator such as the linear congruential method. This paper applies the pseudo-randomness of chaotic sequences in the implementation of LT codes. Two Kent chaotic maps are used to determine the degree and neighbour(s) of each encoding symbol. It is shown that the implemented LT codes based on chaos perform better than LT codes implemented with a traditional pseudo-random number generator. (general)
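The encoding procedure the abstract describes — draw a degree from a degree distribution, pick that many neighbours, XOR them — can be sketched with the randomness supplied by a Kent chaotic map. This is an illustrative sketch only: the degree distribution, map parameter, and source symbols are placeholder values, not the paper's implementation.

```python
# Sketch of LT encoding where a Kent chaotic map replaces a conventional
# pseudo-random number generator, in the spirit of the abstract above.

def kent_map(x: float, m: float = 0.7):
    """Yield an infinite chaotic sequence in (0, 1) from the Kent map."""
    while True:
        x = x / m if x < m else (1.0 - x) / (1.0 - m)
        yield x

def lt_encode_symbol(source, rng, degree_dist):
    """Produce one LT encoding symbol: XOR of `degree` source symbols."""
    # Draw the degree from the cumulative degree distribution.
    u = next(rng)
    degree, cum = 1, 0.0
    for d, p in enumerate(degree_dist, start=1):
        cum += p
        if u <= cum:
            degree = d
            break
    # Choose `degree` distinct neighbours using the chaotic sequence.
    chosen = set()
    while len(chosen) < degree:
        chosen.add(int(next(rng) * len(source)))
    symbol = 0
    for i in chosen:
        symbol ^= source[i]
    return symbol

rng = kent_map(0.1234)
dist = [0.5, 0.3, 0.2]          # toy distribution over degrees 1..3
encoded = [lt_encode_symbol([5, 9, 12, 7], rng, dist) for _ in range(6)]
print(encoded)
```

The paper's claim is that the statistical quality of such chaotic sequences improves decoding performance over, e.g., a linear congruential generator; the encoding structure itself is unchanged.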
An implicit Smooth Particle Hydrodynamic code
Energy Technology Data Exchange (ETDEWEB)
Knapp, Charles E. [Univ. of New Mexico, Albuquerque, NM (United States)
2000-05-01
An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code, the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless, and use particles to model the fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix, and sparse techniques to save on memory storage and to reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. Then, the results of a number of test cases are discussed, including a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. For the single jet of gas, it is demonstrated that the implicit code can solve the problem in much shorter time than the explicit code. The problem was, however, very unphysical, but it does demonstrate the potential of the implicit code. It is a first step toward a useful implicit SPH code.
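The implicit machinery described above — Newton-Raphson outer iterations with a Jacobian built from numerical derivatives — can be sketched on a toy problem. This is a hedged illustration: the 2-variable residual is a stand-in, not SPHINX's equations, and the tiny direct 2x2 solve stands in for the Krylov iterative solver a large sparse system would require.

```python
# Newton-Raphson with a finite-difference Jacobian, on a toy 2-variable
# nonlinear system F(x) = 0. A production implicit code would replace the
# direct 2x2 solve with a Krylov method (e.g. GMRES) for large systems.

def residual(x):
    """Toy nonlinear residual (placeholder for the SPH equations)."""
    return [x[0] ** 2 + x[1] - 3.0, x[0] - x[1] ** 2 + 1.0]

def fd_jacobian(f, x, eps=1e-7):
    """Jacobian of f at x from forward finite differences."""
    f0 = f(x)
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        xp = list(x)
        xp[j] += eps
        fp = f(xp)
        for i in range(2):
            J[i][j] = (fp[i] - f0[i]) / eps
    return J

def newton_solve(f, x, tol=1e-10, max_iter=50):
    """Newton iteration: solve J dx = F(x) and update x until converged."""
    for _ in range(max_iter):
        r = f(x)
        if max(abs(v) for v in r) < tol:
            break
        J = fd_jacobian(f, x)
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dx0 = (r[0] * J[1][1] - r[1] * J[0][1]) / det   # Cramer's rule
        dx1 = (J[0][0] * r[1] - J[1][0] * r[0]) / det
        x = [x[0] - dx0, x[1] - dx1]
    return x

x = newton_solve(residual, [1.0, 1.0])
print(x, residual(x))
```

The finite-difference Jacobian avoids deriving analytic derivatives of the residual, at the cost of one extra residual evaluation per column, which is the trade-off the abstract alludes to.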
Structural phase transition in some disordered binary alloys
International Nuclear Information System (INIS)
Khan, Haniph; Sharma, K.S.
1998-01-01
The pseudopotential formalism of binary alloys has been used to obtain binding energy of some disordered binary alloys by using the linear potential due to Sharma and Kachhava along with RPA form of screening function. The alloy potential is treated as the linear combination of the potential of the average lattice and the difference potential. The binding energy of Li-Mg, Li-Al, Al-Mg and In-Mg systems has been computed at different atomic concentrations in three possible phases viz. bcc, fcc and hcp. Minimum energy values and phases corresponding to these alloys are obtained. The results obtained show a good agreement with the experimental data as well as with the other theoretical results. (author)
Schroedinger’s code: Source code availability and transparency in astrophysics
Ryan, PW; Allen, Alice; Teuben, Peter
2018-01-01
Astronomers use software for their research, but how many of the codes they use are available as source code? We examined a sample of 166 papers from 2015 for clearly identified software use, then searched for source code for the software packages mentioned in these research papers. We categorized the software to indicate whether source code is available for download and whether there are restrictions to accessing it, and if source code was not available, whether some other form of the software, such as a binary, was. Over 40% of the source code for the software used in our sample was not available for download. As URLs have often been used as proxy citations for software, we also extracted URLs from one journal’s 2015 research articles, removed those from certain long-term, reliable domains, and tested the remainder to determine what percentage of these URLs were still accessible in September and October, 2017.
Testing theory of binary evolution with interacting binary stars
Ergma, E.; Sarna, M. J.
2002-01-01
Of particular interest to us is the study of mass loss and its influence on the evolution of binary systems. For this we use theoretical evolutionary models, which include: mass accretion, mass loss, nova explosions, super-efficient wind, and mixing processes. To test our theoretical predictions we proposed to determine the 12C/13C ratio via measurements of the 12CO and 13CO bands around 2.3 micron. The available observations (Exter et al. 2001, in preparation) show good agreement with the theoretical predictions (Sarna 1992) for Algol-type binaries. Our preliminary estimates of the isotopic ratios for pre-CVs and CVs (Catalan et al. 2000, Dhillon et al. 2001) agree with the theoretical predictions from the common-envelope binary evolution models of Sarna et al. (1995). For the SXTs we proposed (Ergma & Sarna 2001) a similar observational test, which has not been done yet.
Detecting unresolved binary stars in Euclid VIS images
Kuntzer, T.; Courbin, F.
2017-10-01
Measuring a weak gravitational lensing signal to the level required by the next generation of space-based surveys demands exquisite reconstruction of the point-spread function (PSF). However, unresolved binary stars can significantly distort the PSF shape. In an effort to mitigate this bias, we aim at detecting unresolved binaries in realistic Euclid stellar populations. We tested methods in numerical experiments where (I) the PSF shape is known to Euclid requirements across the field of view; and (II) the PSF shape is unknown. We drew simulated catalogues of PSF shapes for this proof-of-concept paper. Following the Euclid survey plan, the objects were observed four times. We propose three methods to detect unresolved binary stars. The detection is based on the systematic and correlated biases between exposures of the same object. One method is a simple correlation analysis, while the two others use supervised machine-learning algorithms (random forest and artificial neural network). In both experiments, we demonstrate the ability of our methods to detect unresolved binary stars in simulated catalogues. The performance depends on the level of prior knowledge of the PSF shape and the shape measurement errors. Good detection performances are observed in both experiments. Full complexity, in terms of the images and the survey design, is not included, but key aspects of a more mature pipeline are discussed. Finding unresolved binaries in objects used for PSF reconstruction increases the quality of the PSF determination at arbitrary positions. We show, using different approaches, that we are able to detect at least binary stars that are most damaging for the PSF reconstruction process. The code corresponding to the algorithms used in this work and all scripts to reproduce the results are publicly available from a GitHub repository accessible via http://lastro.epfl.ch/software
The effectiveness of correcting codes in reception in the whole in additive normal white noise
Shtarkov, Y. M.
1974-01-01
Some possible criteria for estimating the effectiveness of correcting codes are presented, and the energy effectiveness of correcting codes is studied for symbol-by-symbol reception. Expressions for the energetic effectiveness of binary correcting codes for reception in the whole are produced. Asymptotic energetic effectiveness and finite signal/noise ratio cases are considered.
Shock waves in binary oxides memristors
Tesler, Federico; Tang, Shao; Dobrosavljević, Vladimir; Rozenberg, Marcelo
2017-09-01
Progress of silicon-based technology is nearing its physical limit, as the minimum feature size of components reaches a mere 5 nm. The resistive switching behavior of transition metal oxides and the associated memristor device are emerging as a competitive technology for next generation electronics. Significant progress has already been made in the past decade and devices are beginning to hit the market; however, this has mainly been the result of empirical trial and error. Hence, gaining theoretical insight is of the essence. In the present work we report a new connection between resistive switching and shock wave formation, a classic topic of non-linear dynamics. We argue that the profile of oxygen ions that migrate during the commutation in insulating binary oxides may form a shock wave, which propagates through a poorly conductive region of the device. We validate the scenario by means of model simulations.
International linear collider simulations using BDSIM
Indian Academy of Sciences (India)
BDSIM is a Geant4 [1] extension toolkit for the simulation of particle transport in accelerator beamlines. It is a code that combines accelerator-style particle tracking with traditional Geant-style tracking based on Runge–Kutta techniques. A more detailed description of the code can be found in [2]. In an e+e− linear collider ...
Biclustering sparse binary genomic data.
van Uitert, Miranda; Meuleman, Wouter; Wessels, Lodewyk
2008-12-01
Genomic datasets often consist of large, binary, sparse data matrices. In such a dataset, one is often interested in finding contiguous blocks that (mostly) contain ones. This is a biclustering problem, and while many algorithms have been proposed to deal with gene expression data, only two algorithms have been proposed that specifically deal with binary matrices. None of the gene expression biclustering algorithms can handle the large number of zeros in sparse binary matrices. The two proposed binary algorithms failed to produce meaningful results. In this article, we present a new algorithm that is able to extract biclusters from sparse, binary datasets. A powerful feature is that biclusters with different numbers of rows and columns can be detected, varying from many rows to few columns and few rows to many columns. It allows the user to guide the search towards biclusters of specific dimensions. When applying our algorithm to an input matrix derived from TRANSFAC, we find transcription factors with distinctly dissimilar binding motifs, but a clear set of common targets that are significantly enriched for GO categories.
From concatenated codes to graph codes
DEFF Research Database (Denmark)
Justesen, Jørn; Høholdt, Tom
2004-01-01
We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...
Overloaded CDMA Systems with Displaced Binary Signatures
Directory of Open Access Journals (Sweden)
Vanhaverbeke Frederik
2004-01-01
We extend three types of overloaded CDMA systems by displacing in time the binary signature sequences of these systems: (1) random spreading (PN), (2) multiple-OCDMA (MO), and (3) PN/OCDMA (PN/O). For each of these systems, we determine the time shifts that minimize the overall multiuser interference power. The achievable channel load with coded and uncoded data is evaluated for the conventional (without displacement) and improved (with displacement) systems, as well as for systems based on quasi-Welch-bound-equality (QWBE) sequences, by means of several types of turbo detectors. For each system, the best performing turbo detector is selected in order to compare the performance of these systems. It is found that the improved systems substantially outperform their original counterparts. With uncoded data, (improved) PN/O yields the highest acceptable channel load. For coded data, MO allows for the highest acceptable channel load over all considered systems, both for the conventional and the improved systems. In the latter case, channel loads of about 280% are achievable with a low degradation as compared to a single-user system.
LFSC - Linac Feedback Simulation Code
Energy Technology Data Exchange (ETDEWEB)
Ivanov, Valentin; /Fermilab
2008-05-01
The computer program LFSC (
The Young Visual Binary Survey
Prato, Lisa; Avilez, Ian; Lindstrom, Kyle; Graham, Sean; Sullivan, Kendall; Biddle, Lauren; Skiff, Brian; Nofi, Larissa; Schaefer, Gail; Simon, Michal
2018-01-01
Differences in the stellar and circumstellar properties of the components of young binaries provide key information about star and disk formation and evolution processes. Because objects with separations of a few to a few hundred astronomical units share a common environment and composition, multiple systems allow us to control for some of the factors which play into star formation. We are completing analysis of a rich sample of about 100 pre-main sequence binaries and higher order multiples, primarily located in the Taurus and Ophiuchus star forming regions. This poster will highlight some of our recent, exciting results. All reduced spectra and the results of our analysis will be publicly available to the community at http://jumar.lowell.edu/BinaryStars/. Support for this research was provided in part by NSF award AST-1313399 and by NASA Keck KPDA funding.
PHYSICS OF ECLIPSING BINARIES. II. TOWARD THE INCREASED MODEL FIDELITY
Energy Technology Data Exchange (ETDEWEB)
Prša, A.; Conroy, K. E.; Horvat, M.; Kochoska, A.; Hambleton, K. M. [Villanova University, Dept. of Astrophysics and Planetary Sciences, 800 E Lancaster Avenue, Villanova PA 19085 (United States); Pablo, H. [Université de Montréal, Pavillon Roger-Gaudry, 2900, boul. Édouard-Montpetit Montréal QC H3T 1J4 (Canada); Bloemen, S. [Radboud University Nijmegen, Department of Astrophysics, IMAPP, P.O. Box 9010, 6500 GL, Nijmegen (Netherlands); Giammarco, J. [Eastern University, Dept. of Astronomy and Physics, 1300 Eagle Road, St. Davids, PA 19087 (United States); Degroote, P. [KU Leuven, Instituut voor Sterrenkunde, Celestijnenlaan 200D, B-3001 Heverlee (Belgium)
2016-12-01
The precision of photometric and spectroscopic observations has been systematically improved in the last decade, mostly thanks to space-borne photometric missions and ground-based spectrographs dedicated to finding exoplanets. The field of eclipsing binary stars strongly benefited from this development. Eclipsing binaries serve as critical tools for determining fundamental stellar properties (masses, radii, temperatures, and luminosities), yet the models are not capable of reproducing observed data well, either because of the missing physics or because of insufficient precision. This led to a predicament where radiative and dynamical effects, hitherto buried in noise, started showing up routinely in the data, but were not accounted for in the models. PHOEBE (PHysics Of Eclipsing BinariEs; http://phoebe-project.org) is an open source modeling code for computing theoretical light and radial velocity curves that addresses both problems by incorporating missing physics and by increasing the computational fidelity. In particular, we discuss triangulation as a superior surface discretization algorithm, meshing of rotating single stars, light travel time effects, advanced phase computation, volume conservation in eccentric orbits, and improved computation of local intensity across the stellar surfaces that includes the photon-weighted mode, the enhanced limb darkening treatment, the better reflection treatment, and Doppler boosting. Here we present the concepts on which PHOEBE is built and proofs of concept that demonstrate the increased model fidelity.
Error Control Coding Techniques for Space and Satellite Communications
Lin, Shu
2000-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
Protocols for quantum binary voting
Thapliyal, Kishore; Sharma, Rishi Dutt; Pathak, Anirban
Two new protocols for quantum binary voting are proposed. One of the proposed protocols is designed using a standard scheme for controlled deterministic secure quantum communication (CDSQC), and the other one is designed using the idea of a quantum cryptographic switch, which uses a technique known as permutation of particles. A few possible alternative approaches to accomplish the same task (quantum binary voting) have also been discussed. Security of the proposed protocols is analyzed. Further, the efficiencies of the proposed protocols are computed and compared with those of the existing protocols. The comparison has established that the proposed protocols are more efficient than the existing protocols.
Matter in compact binary mergers
Read, Jocelyn; LIGO Scientific Collaboration, Virgo Scientific Collaboration
2018-01-01
Mergers of binary neutron stars or neutron-star/black-hole systems are promising targets for gravitational-wave detection. The dynamics of merging compact objects, and thus their gravitational-wave signatures, are primarily determined by the mass and spin of the components. However, the presence of matter can make an imprint on the final orbits and merger of a binary system. I will outline efforts to understand the impact of neutron-star matter on gravitational waves, using both theoretical and computational input, so that gravitational-wave observations can be used to measure the properties of source systems with neutron-star components.
The gravitational-wave memory from eccentric binaries
International Nuclear Information System (INIS)
Favata, Marc
2011-01-01
The nonlinear gravitational-wave memory causes a time-varying but nonoscillatory correction to the gravitational-wave polarizations. It arises from gravitational-waves that are sourced by gravitational-waves. Previous considerations of the nonlinear memory effect have focused on quasicircular binaries. Here I consider the nonlinear memory from Newtonian orbits with arbitrary eccentricity. Expressions for the waveform polarizations and spin-weighted spherical-harmonic modes are derived for elliptic, hyperbolic, parabolic, and radial orbits. In the hyperbolic, parabolic, and radial cases the nonlinear memory provides a 2.5 post-Newtonian (PN) correction to the leading-order waveforms. This is in contrast to the elliptical and quasicircular cases, where the nonlinear memory corrects the waveform at leading (0PN) order. This difference in PN order arises from the fact that the memory builds up over a short ''scattering'' time scale in the hyperbolic case, as opposed to a much longer radiation-reaction time scale in the elliptical case. The nonlinear memory corrections presented here complete our knowledge of the leading-order (Peters-Mathews) waveforms for elliptical orbits. These calculations are also relevant for binaries with quasicircular orbits in the present epoch which had, in the past, large eccentricities. Because the nonlinear memory depends sensitively on the past evolution of a binary, I discuss the effect of this early-time eccentricity on the value of the late-time memory in nearly circularized binaries. I also discuss the observability of large ''memory jumps'' in a binary's past that could arise from its formation in a capture process. Lastly, I provide estimates of the signal-to-noise ratio of the linear and nonlinear memories from hyperbolic and parabolic binaries.
Mental Effort in Binary Categorization Aided by Binary Cues
Botzer, Assaf; Meyer, Joachim; Parmet, Yisrael
2013-01-01
Binary cueing systems assist in many tasks, often alerting people about potential hazards (such as alarms and alerts). We investigate whether cues, besides possibly improving decision accuracy, also affect the effort users invest in tasks and whether the required effort in tasks affects the responses to cues. We developed a novel experimental tool…
Vazquez, A I; Gianola, D; Bates, D; Weigel, K A; Heringstad, B
2009-02-01
Clinical mastitis is typically coded as presence/absence during some period of exposure, and records are analyzed with linear or binary data models. Because presence includes cows with multiple episodes, there is loss of information when a count is treated as a binary response. The Poisson model is designed for counting random variables, and although it is used extensively in epidemiology of mastitis, it has rarely been used for studying the genetics of mastitis. Many models have been proposed for genetic analysis of mastitis, but they have not been formally compared. The main goal of this study was to compare linear (Gaussian), Bernoulli (with logit link), and Poisson models for the purpose of genetic evaluation of sires for mastitis in dairy cattle. The response variables were clinical mastitis (CM; 0, 1) and number of CM cases (NCM; 0, 1, 2, ..). Data consisted of records on 36,178 first-lactation daughters of 245 Norwegian Red sires distributed over 5,286 herds. Predictive ability of models was assessed via a 3-fold cross-validation using mean squared error of prediction (MSEP) as the end-point. Between-sire variance estimates for NCM were 0.065 in Poisson and 0.007 in the linear model. For CM the between-sire variance was 0.093 in logit and 0.003 in the linear model. The ratio between herd and sire variances for the models with NCM response was 4.6 and 3.5 for Poisson and linear, respectively, and for model for CM was 3.7 in both logit and linear models. The MSEP for all cows was similar. However, within healthy animals, MSEP was 0.085 (Poisson), 0.090 (linear for NCM), 0.053 (logit), and 0.056 (linear for CM). For mastitic animals the MSEP values were 1.206 (Poisson), 1.185 (linear for NCM response), 1.333 (logit), and 1.319 (linear for CM response). The models for count variables had a better performance when predicting diseased animals and also had a similar performance between them. Logit and linear models for CM had better predictive ability for healthy
Algebraic coding theory over finite commutative rings
Dougherty, Steven T
2017-01-01
This book provides a self-contained introduction to algebraic coding theory over finite Frobenius rings. It is the first to offer a comprehensive account on the subject. Coding theory has its origins in the engineering problem of effective electronic communication where the alphabet is generally the binary field. Since its inception, it has grown as a branch of mathematics, and has since been expanded to consider any finite field, and later also Frobenius rings, as its alphabet. This book presents a broad view of the subject as a branch of pure mathematics and relates major results to other fields, including combinatorics, number theory and ring theory. Suitable for graduate students, the book will be of interest to anyone working in the field of coding theory, as well as algebraists and number theorists looking to apply coding theory to their own work.
Displacement measurement system for linear array detector
International Nuclear Information System (INIS)
Zhang Pengchong; Chen Ziyu; Shen Ji
2011-01-01
This paper presents a linear displacement measurement system based on an encoder. The system includes displacement encoders, optical lenses and a readout circuit. The displacement readout unit includes a linear CCD and its drive circuit, two amplifier circuits, a second-order Butterworth low-pass filter and a binarization circuit. The coding scheme is introduced, signal waveforms from the various stages of the system are given, and finally linear test results are presented. The experimental results are satisfactory. (authors)
Elbow, Peter
1993-01-01
Argues that oppositional thinking, if handled in the right way, will serve as a way to avoid the very problems that Jonathan Culler and Paul de Man are troubled by: "purity, order, and hierarchy." Asserts that binary thinking can serve to encourage difference--indeed, encourage nondominance, nontranscendence, instability, and disorder.…
Biclustering Sparse Binary Genomic Data
Van Uitert, M.; Meuleman, W.; Wessels, L.F.A.
2008-01-01
Genomic datasets often consist of large, binary, sparse data matrices. In such a dataset, one is often interested in finding contiguous blocks that (mostly) contain ones. This is a biclustering problem, and while many algorithms have been proposed to deal with gene expression data, only two
Misclassification in binary choice models
Czech Academy of Sciences Publication Activity Database
Meyer, B. D.; Mittag, Nikolas
2017-01-01
Roč. 200, č. 2 (2017), s. 295-311 ISSN 0304-4076 R&D Projects: GA ČR(CZ) GJ16-07603Y Institutional support: Progres-Q24 Keywords : measurement error * binary choice models * program take-up Subject RIV: AH - Economics OBOR OECD: Economic Theory Impact factor: 1.633, year: 2016
Misclassification in binary choice models
Czech Academy of Sciences Publication Activity Database
Meyer, B. D.; Mittag, Nikolas
2017-01-01
Roč. 200, č. 2 (2017), s. 295-311 ISSN 0304-4076 Institutional support: RVO:67985998 Keywords : measurement error * binary choice models * program take-up Subject RIV: AH - Economics OBOR OECD: Economic Theory Impact factor: 1.633, year: 2016
BHMcalc: Binary Habitability Mechanism Calculator
Zuluaga, Jorge I.; Mason, Paul; Cuartas-Restrepo, Pablo A.; Clark, Joni
2018-02-01
BHMcalc provides renditions of the instantaneous circumbinary habitable zone (CHZ) and also calculates BHM properties of the system, including those related to the rotational evolution of the stellar components and the combined XUV and SW fluxes as measured at different distances from the binary. Moreover, it provides numerical results that can be further manipulated and used to calculate other properties.
Armas Padilla, M.
2013-01-01
The discovery of the first X-ray binary, Scorpius X-1, by Giacconi et al. (1962), marked the birth of X-ray astronomy. Following that discovery, many additional X-ray sources were found with the first generation of X-ray rockets and observatories (e.g., UHURU and Einstein). The short-timescale
The Merit Factor of Binary Sequences
DEFF Research Database (Denmark)
Høholdt, Tom
1999-01-01
Binary sequences with small aperiodic correlations play an important role in many applications ranging from radar to modulation and testing of systems. Golay(1977) introduced the merit factor as a measure of the goodness of the sequence and conjectured an upper bound for this. His conjecture...
Joslin, Ronald D.; Streett, Craig L.; Chang, Chau-Lyan
1992-01-01
Spatially evolving instabilities in a boundary layer on a flat plate are computed by direct numerical simulation (DNS) of the incompressible Navier-Stokes equations. In a truncated physical domain, a nonstaggered mesh is used for the grid. A Chebyshev-collocation method is used normal to the wall; finite difference and compact difference methods are used in the streamwise direction; and a Fourier series is used in the spanwise direction. For time stepping, implicit Crank-Nicolson and explicit Runge-Kutta schemes are used with the time-splitting method. The influence-matrix technique is used to solve the pressure equation. At the outflow boundary, the buffer-domain technique is used to prevent convective wave reflection or upstream propagation of information from the boundary. Results of the DNS are compared with those from both linear stability theory (LST) and parabolized stability equation (PSE) theory. Computed disturbance amplitudes and phases are in very good agreement with those of LST (for small inflow disturbance amplitudes). A measure of the sensitivity of the inflow condition is demonstrated with both LST and PSE theory used to approximate inflows. Although the DNS numerics are very different from those of PSE theory, the results are in good agreement. A small discrepancy in the results that does occur is likely a result of the variation in PSE boundary condition treatment in the far field. Finally, a small-amplitude wave triad is forced at the inflow, and simulation results are compared with those of LST. Again, very good agreement is found between DNS and LST results for the 3-D simulations, the implication being that the disturbance amplitudes are sufficiently small that nonlinear interactions are negligible.
Continuous speech recognition with sparse coding
CSIR Research Space (South Africa)
Smit, WJ
2009-04-01
We show how sparse codes can be used to do continuous speech recognition. We use the TIDIGITS dataset to illustrate the process. First a waveform is transformed into a spectrogram, and a sparse code for the spectrogram is found by means of a linear...
Modeling binary correlated responses using SAS, SPSS and R
Wilson, Jeffrey R
2015-01-01
Statistical tools to analyze correlated binary data are spread out in the existing literature. This book makes these tools accessible to practitioners in a single volume. Chapters cover recently developed statistical tools and statistical packages that are tailored to analyzing correlated binary data. The authors showcase both traditional and new methods for application to health-related research. Data and computer programs will be publicly available in order for readers to replicate model development, but learning a new statistical language is not necessary with this book. The inclusion of code for R, SAS, and SPSS allows for easy implementation by readers. For readers interested in learning more about the languages, though, there are short tutorials in the appendix. Accompanying data sets are available for download through the book's website. Data analysis presented in each chapter will provide step-by-step instructions so these new methods can be readily applied to projects. Researchers and graduate stu...
APPLICATION OF GAS DYNAMICAL FRICTION FOR PLANETESIMALS. II. EVOLUTION OF BINARY PLANETESIMALS
Energy Technology Data Exchange (ETDEWEB)
Grishin, Evgeni; Perets, Hagai B. [Physics Department, Technion—Israel Institute of Technology, Haifa, 3200003 (Israel)
2016-04-01
One of the first stages of planet formation is the growth of small planetesimals and their accumulation into large planetesimals and planetary embryos. This early stage occurs long before the dispersal of most of the gas from the protoplanetary disk. At this stage gas–planetesimal interactions play a key role in the dynamical evolution of single intermediate-mass planetesimals (m_p ∼ 10^21–10^25 g) through gas dynamical friction (GDF). A significant fraction of all solar system planetesimals (asteroids and Kuiper-belt objects) are known to be binary planetesimals (BPs). Here, we explore the effects of GDF on the evolution of BPs embedded in a gaseous disk using an N-body code with a fiducial external force accounting for GDF. We find that GDF can induce binary mergers on timescales shorter than the disk lifetime for masses above m_p ≳ 10^22 g at 1 au, independent of the binary initial separation and eccentricity. Such mergers can affect the structure of merger-formed planetesimals, and the GDF-induced binary inspiral can play a role in the evolution of the planetesimal disk. In addition, binaries on eccentric orbits around the star may evolve in the supersonic regime, where the torque reverses and the binary expands, which would enhance the cross section for planetesimal encounters with the binary. Highly inclined binaries with small mass ratios evolve due to the combined effects of Kozai–Lidov (KL) cycles with GDF, which lead to chaotic evolution. Prograde binaries go through semi-regular KL evolution, while retrograde binaries frequently flip their inclination and ∼50% of them are destroyed.
Interactions in Massive Colliding Wind Binaries
Directory of Open Access Journals (Sweden)
Michael F. Corcoran
2012-03-01
There are observational difficulties in determining dynamical masses of binary star components in the upper HR diagram, due both to the scarcity of massive binary systems and to the spectral and photometric contamination produced by the strong wind outflows in these systems. We discuss how variable X-ray emission produced by wind-wind collisions in massive binaries can be used to constrain the system parameters, with application to two important massive binaries, Eta Carinae and WR 140.
LFSC - Linac Feedback Simulation Code
International Nuclear Information System (INIS)
Ivanov, Valentin; Fermilab
2008-01-01
The computer program LFSC is a numerical tool for simulating beam-based feedback in high-performance linacs. The code LFSC is based on the earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successfully used in simulations of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on timescales corresponding to 5-100 Hz, and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. A set of input files includes the lattice description (XSIF format), and plain-text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output
Fluctuations and Linear Response in Supercooled Liquids
DEFF Research Database (Denmark)
Nielsen, Johannes K.
Fluctuation-dissipation theorems are derived for thermodynamic properties like frequency-dependent specific heat and compressibility. First the case where a system's dynamics are restricted by constant volume and energy is considered, and the dynamic linear response to a heat pulse and a volume change is derived. … Applications of the theory in the field of supercooled liquids are shown. First the full frequency-dependent thermodynamic response matrix is extracted from simulations of a binary Lennard-Jones liquid. Secondly some simple stochastic models of supercooled liquids are analysed in the framework of linear thermodynamics…
Automatic coding method of the ACR Code
International Nuclear Information System (INIS)
Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi
1993-01-01
The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself among the upper- and lower-level codes of the selected one that were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by this same program, and incorporation of this program into another data-processing program was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for other programs. Therefore, this program can be used for automation of routine work in the department of radiology
Stability criterion for a light binary attracted by a heavy body
Vasilkova, O. O.
2010-03-01
Dynamical behaviour of a small binary with equal components, each of mass m, is considered under the attraction of a heavy body of mass M. Differential equations of the general three-body problem are integrated numerically using the code of S. J. Aarseth (Aarseth & Zare 1974) for mass ratios m/M within the 10^-11–10^-4 range. The direct and retrograde orbits of the light bodies about each other are considered, lying either in the plane of motion of their center of mass or in the plane perpendicular to it. It is shown numerically that the critical separation between the binary components which leads to disruption of the binary is proportional to (m/M)^(1/3). The criterion can be used for studying (in the first approximation) the motion of double stars and binary asteroids, or for computing the parameters of magnetic monopole and antimonopole pairs.
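The (m/M)^(1/3) scaling reported in this record can be illustrated with a short sketch. The prefactor k, the orbital distance a, and the mass values below are placeholder assumptions for illustration, not figures from the paper; the scaling is analogous to a Hill-radius argument.

```python
# Hypothetical illustration of the reported scaling: the critical
# separation r_crit of an equal-mass binary (m, m) orbiting a heavy
# body of mass M at distance a grows as (m/M)^(1/3).
# k and a are placeholders, not values from the paper.

def critical_separation(a, m, M, k=1.0):
    """Hill-type critical separation: r_crit = k * a * (m/M)^(1/3)."""
    return k * a * (m / M) ** (1.0 / 3.0)

a = 1.0   # distance from the heavy body (arbitrary units)
M = 1.0   # heavy-body mass
for m in (1e-11, 1e-8, 1e-4):   # mass ratios spanning the paper's range
    print(m, critical_separation(a, m, M))
```

Scaling each component mass by a factor of 1000 scales the critical separation by exactly 10, which is the behaviour the paper reports numerically.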
Linear Algebra and Smarandache Linear Algebra
Vasantha, Kandasamy
2003-01-01
The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...
A New Method Of Gene Coding For A Genetic Algorithm Designed For Parametric Optimization
Directory of Open Access Journals (Sweden)
Radu BELEA
2003-12-01
In a parametric optimization problem the genes encode the real parameters of the fitness function. There are two coding techniques, known under the names of binary-coded genes and real-coded genes. The comparison between these two has been a controversial subject since the first papers on parametric optimization appeared. An objective analysis of the advantages and disadvantages of the two coding techniques is difficult while information in different formats is being compared. The present paper suggests a gene coding technique that uses the same format for both binary-coded genes and real-coded genes. After unifying the representation of the real parameters, the following criterion is applied: the differences between the two techniques are measured statistically by the effect of the genetic operators on some randomly generated individuals.
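The two classic representations being compared can be sketched as follows. This is not the paper's unified format: the bit width, the interval [lo, hi] and both helper names are illustrative assumptions showing how a real parameter is stored directly (real-coded gene) versus as a fixed-length bit string (binary-coded gene).

```python
# Sketch of the classic binary-coded gene: a real parameter in
# [lo, hi] is quantized to an n-bit integer and stored as a bit
# string; the real-coded gene would simply store x itself.
# Helper names and the 8-bit width are illustrative assumptions.

def encode_binary(x, lo, hi, n_bits):
    """Quantize x in [lo, hi] onto 2^n_bits levels; return a bit string."""
    levels = (1 << n_bits) - 1
    k = round((x - lo) / (hi - lo) * levels)
    return format(k, f"0{n_bits}b")

def decode_binary(bits, lo, hi):
    """Inverse map: bit string back to a real value in [lo, hi]."""
    levels = (1 << len(bits)) - 1
    return lo + int(bits, 2) / levels * (hi - lo)

g = encode_binary(0.5, 0.0, 1.0, 8)   # binary-coded gene (bit string)
x = decode_binary(g, 0.0, 1.0)        # quantized real value
print(g, x)
```

The round trip recovers the parameter only up to the quantization step (hi - lo)/(2^n - 1), which is one of the classic trade-offs between the two representations that the paper's statistical comparison addresses.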
Fundamentals of convolutional coding
Johannesson, Rolf
2015-01-01
Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field. * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual
Regularized Label Relaxation Linear Regression.
Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu
2018-04-01
Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are, respectively, based on -norm and -norm loss functions are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time.
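As a point of reference for the relaxation idea described above, here is a minimal sketch of the rigid baseline the paper starts from: least-squares regression onto a strict binary (one-hot) label matrix, with classification by argmax. The toy data and all names are assumptions for illustration; the paper's slack-variable matrix and class compactness graph regularizer are not implemented here.

```python
import numpy as np

# Rigid baseline that label relaxation loosens: fit W by least squares
# so that X @ W approximates a strict one-hot label matrix Y, then
# classify by argmax. Toy two-cluster data, not from the paper.

rng = np.random.default_rng(0)
X0 = rng.normal(loc=-2.0, size=(20, 2))   # class 0 cluster
X1 = rng.normal(loc=+2.0, size=(20, 2))   # class 1 cluster
X = np.vstack([X0, X1])
y = np.array([0] * 20 + [1] * 20)

Y = np.eye(2)[y]                          # strict binary label matrix
Xb = np.hstack([X, np.ones((40, 1))])     # append bias column
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
pred = (Xb @ W).argmax(axis=1)
print((pred == y).mean())                 # training accuracy
```

The relaxation method replaces the fixed targets in Y with targets allowed to drift outward via a nonnegative slack matrix, enlarging the margin between classes instead of forcing the regression onto exact 0/1 values.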
GPU accelerated manifold correction method for spinning compact binaries
Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying
2018-04-01
The graphics processing unit (GPU) acceleration of the manifold correction algorithm, based on the compute unified device architecture (CUDA) technology, is designed to simulate the dynamic evolution of the post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and the efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the same codes executed on the central processing unit (CPU) alone. The acceleration achievable when the codes are implemented on the GPU increases enormously through the use of shared-memory and register optimization techniques without additional hardware costs: the speedup is nearly 13 times compared with the codes executed on the CPU for a phase-space scan (including 314 × 314 orbits). In addition, the GPU-accelerated manifold correction method is used to numerically study how the dynamics are affected by the spin-induced quadrupole-monopole interaction for a black hole binary system.
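The manifold-correction idea itself is independent of the GPU details and can be sketched in a few lines: after each crude integration step, the velocity is rescaled so that the trajectory returns to the constant-energy manifold. This toy Kepler example is an assumption-laden stand-in, not the paper's post-Newtonian scheme or its CUDA implementation.

```python
import math

# Toy manifold correction on a Kepler orbit (GM = 1): after each
# crude Euler step, rescale the speed so the energy stays pinned to
# the initial-energy manifold E0. Illustrative only.

def energy(x, y, vx, vy):
    return 0.5 * (vx**2 + vy**2) - 1.0 / math.hypot(x, y)

x, y, vx, vy = 1.0, 0.0, 0.0, 1.0   # circular orbit initial condition
E0 = energy(x, y, vx, vy)
dt = 1e-3
for _ in range(1000):
    r3 = math.hypot(x, y) ** 3       # Euler step of the two-body force
    vx -= x / r3 * dt
    vy -= y / r3 * dt
    x += vx * dt
    y += vy * dt
    # correction: choose the kinetic energy that puts us back on E0
    T_target = E0 + 1.0 / math.hypot(x, y)
    if T_target > 0:
        s = math.sqrt(2 * T_target / (vx**2 + vy**2))
        vx *= s
        vy *= s
print(abs(energy(x, y, vx, vy) - E0))
```

Without the rescaling step, the Euler integrator drifts off the energy surface; with it, the final energy error is at machine-precision level, which is the qualitative behaviour that manifold correction buys in the PN setting.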
Emission-line diagnostics of nearby H II regions including interacting binary populations
Xiao, Lin; Stanway, Elizabeth R.; Eldridge, J. J.
2018-03-01
We present numerical models of the nebular emission from H II regions around young stellar populations over a range of compositions and ages. The synthetic stellar populations include both single stars and interacting binary stars. We compare these models to the observed emission lines of 254 H II regions of 13 nearby spiral galaxies and 21 dwarf galaxies drawn from archival data. The models are created using the combination of the Binary Population and Spectral Synthesis (BPASS) code with the photoionization code CLOUDY to study the differences caused by the inclusion of interacting binary stars in the stellar population. We obtain agreement with the observed emission line ratios from the nearby star-forming regions and discuss the effect of binary star evolution pathways on the nebular ionization of H II regions. We find that at population ages above 10 Myr, single-star models rapidly decrease in flux and ionization strength, while binary-star models still produce strong flux and high [O III]/Hβ ratios. Our models can reproduce the metallicity of H II regions from spiral galaxies but we find higher metallicities than previously estimated for the H II regions from dwarf galaxies. Comparing the equivalent width of Hβ emission between models and observations, we find that accounting for ionizing photon leakage can affect age estimates for H II regions. When it is included, the typical age derived for H II regions is 5 Myr from single-star models, and up to 10 Myr with binary-star models. This is due to the existence of binary-star evolution pathways which produce more hot WR and helium stars at older ages. For future reference, we calculate new BPASS binary maximal starburst lines as a function of metallicity, and for the total model population, and present these in Appendix A.
Permutation Entropy for Random Binary Sequences
Directory of Open Access Journals (Sweden)
Lingfeng Liu
2015-12-01
In this paper, we generalize the permutation entropy (PE) measure to binary sequences, which is based on Shannon's entropy, and theoretically analyze this measure for random binary sequences. We deduce the theoretical value of PE for random binary sequences, which can be used to measure the randomness of binary sequences. We also reveal the relationship between this PE measure and other randomness measures, such as Shannon's entropy and Lempel–Ziv complexity. The results show that PE is consistent with these two measures. Furthermore, we use PE as one of the randomness measures to evaluate the randomness of chaotic binary sequences.
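One ingredient of such ordinal-pattern measures can be sketched as the Shannon entropy over the distribution of overlapping length-m binary words, which approaches its maximum of m bits for an ideal random binary sequence and drops to zero for a constant one. The exact PE definition in the paper may differ in detail; this is a hedged illustration, and the sample sequences are arbitrary.

```python
from collections import Counter
import math

# Shannon entropy (in bits) of the overlapping length-m binary words
# of a sequence: 0 for a constant sequence, up to m bits for an ideal
# random one. A simplified stand-in for the paper's PE measure.

def word_entropy(bits, m):
    words = [bits[i:i + m] for i in range(len(bits) - m + 1)]
    n = len(words)
    counts = Counter(words)
    return sum(c / n * math.log2(n / c) for c in counts.values())

print(word_entropy("0" * 16, 3))            # constant sequence: 0.0 bits
print(word_entropy("0110100110010110", 3))  # a more varied sequence
```

A strictly alternating sequence such as "0101…" scores close to 1 bit for m = 2 (only the two words "01" and "10" occur), far below the 2-bit maximum of a random sequence, which is the kind of discrimination such a measure provides.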
Physical Parameters of the Overcontact Binary AH Cnc in the Old Open Cluster M 67
Yakut, K.; Aerts, C.C.
2006-01-01
We present a photometric study of the overcontact binary AH Cnc. The CCD observations were done with the Russian-Turkish 1.5 m telescope and the light-curve was solved with the Wilson-Devinney code. The physical parameters of the components have been deduced as M_{1} = 1.22 Msun,
The use of hyperspectral data for tree species discrimination: Combining binary classifiers
CSIR Research Space (South Africa)
Dastile, X
2010-11-01
Only reference fragments of this record survive, including: a review on the combination of binary classifiers in multiclass problems (Springer Science and Business Media B.V.); and [7] Dietterich, T. G. and Bakiri, G. (1995), Solving Multiclass Learning Problems via Error-Correcting Output Codes, AI Access Foundation...
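The Dietterich-Bakiri scheme named in the surviving citation can be sketched briefly: each of k classes gets a binary codeword, L binary classifiers each predict one bit, and the class whose codeword is nearest in Hamming distance wins. The codebook, the class names (borrowed loosely from the tree-species setting of this record) and the hard-coded bit vector are all illustrative assumptions, not anything from the report.

```python
# Error-correcting output codes (ECOC): decode the joint output of
# several binary classifiers to the nearest class codeword.
# Codebook and class names are made up for illustration.

CODEBOOK = {          # 3 classes, 5-bit codewords, pairwise distance >= 3
    "acacia":   (0, 0, 0, 0, 0),
    "eucalypt": (1, 1, 1, 0, 0),
    "pine":     (1, 0, 0, 1, 1),
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(bits):
    """Pick the class whose codeword is closest to the predicted bits."""
    return min(CODEBOOK, key=lambda c: hamming(CODEBOOK[c], bits))

# one binary classifier flipped a bit, yet decoding recovers the class
print(decode((1, 1, 0, 0, 0)))   # -> eucalypt
```

Because the codewords are at pairwise Hamming distance at least 3, any single misfiring binary classifier is corrected at decoding time, which is the error-correcting property the scheme is named for.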
PatternCoder: A Programming Support Tool for Learning Binary Class Associations and Design Patterns
Paterson, J. H.; Cheng, K. F.; Haddow, J.
2009-01-01
PatternCoder is a software tool to aid student understanding of class associations. It has a wizard-based interface which allows students to select an appropriate binary class association or design pattern for a given problem. Java code is then generated which allows students to explore the way in which the class associations are implemented in a…
Information sets as permutation cycles for quadratic residue codes
Directory of Open Access Journals (Sweden)
Richard A. Jenson
1982-01-01
Full Text Available The two cases p=7 and p=23 are the only known cases where the automorphism group of the [p+1, (p+1)/2] extended binary quadratic residue code, Q(p), properly contains PSL(2,p). These codes have some of their information sets represented as permutation cycles from Aut(Q(p)). Analysis proves that all information sets of Q(7) are so represented but those of Q(23) are not.
Utility subroutine package used by Applied Physics Division export codes
International Nuclear Information System (INIS)
Adams, C.H.; Derstine, K.L.; Henryson, H. II; Hosteny, R.P.; Toppel, B.J.
1983-04-01
This report describes the current state of the utility subroutine package used with codes being developed by the staff of the Applied Physics Division. The package provides a variety of useful functions for BCD input processing, dynamic core-storage allocation and management, binary I/O and data manipulation. The routines were written to conform to coding standards which facilitate the exchange of programs between different computers.
Gravity waves from relativistic binaries
Levin, Janna; O'Reilly, Rachel; Copeland, E. J.
1999-01-01
The stability of binary orbits can significantly shape the gravity wave signal which future Earth-based interferometers hope to detect. The innermost stable circular orbit has been of interest as it marks the transition from the late inspiral to the final plunge. We consider purely relativistic orbits beyond the circular assumption. Homoclinic orbits are of particular importance to the question of stability, as they lie on the boundary between dynamical stability and instability. We identify thes...
Gravitational waves from orbiting binaries without general relativity
Hilborn, Robert C.
2018-03-01
Using analogies with electromagnetic radiation, we present a calculation of the properties of gravitational radiation emitted by orbiting binary objects. The calculation produces results that have the same dependence on the masses of the orbiting objects, the orbital frequency, and the mass separation as do the results from the linear version of general relativity (GR). However, the calculation yields polarization, angular distributions, and overall power results that differ from those of GR. Nevertheless, the calculation produces waveforms that are very similar to those observed by the Laser Interferometer Gravitational-Wave Observatory (LIGO-VIRGO) gravitational wave collaboration in 2015 up to the point at which the binary merger occurs. The details of the calculation should be understandable by upper-level physics students and physicists who are not experts in GR.
Quantum memory receiver for superadditive communication using binary coherent states.
Klimek, Aleksandra; Jachura, Michał; Wasilewski, Wojciech; Banaszek, Konrad
2016-11-12
We propose a simple architecture based on multimode quantum memories for collective readout of classical information keyed using a pair of coherent states, exemplified by the well-known binary phase shift keying format. Such a configuration enables demonstration of the superadditivity effect in classical communication over quantum channels, where the transmission rate is enhanced through joint detection applied to multiple channel uses. The proposed scheme relies on the recently introduced idea to prepare Hadamard sequences of input symbols that are mapped by a linear optical transformation onto the pulse position modulation format [Guha, S. Phys. Rev. Lett. 2011, 106, 240502]. We analyze two versions of readout based on direct detection and an optional Dolinar receiver which implements the minimum-error measurement for individual detection of a binary coherent state alphabet.
Memorizing binary vector sequences by a sparsely encoded network.
Baram, Y
1994-01-01
We present a neural network employing Hebbian storage and sparse internal coding, which is capable of memorizing and correcting sequences of binary vectors by association. A ternary version of the Kanerva memory, folded into a feedback configuration, is shown to perform the basic sequence memorization and regeneration function. The inclusion of lateral connections between the internal cells increases the network capacity considerably and facilitates the correction of individual input patterns and the detection of large errors. The introduction of higher delays in the transmission lines between the external input-output layer and the internal memory layer is shown to further improve the network's error correction capability.
Simulations of Tidally Driven Formation of Binary Planet Systems
Murray, R. Zachary P.; Guillochon, James
2018-01-01
In the last decade, hundreds of exoplanets have been discovered by Kepler, CoRoT, and many other initiatives. This wealth of data suggests the possibility of detecting exoplanets with large satellites. This project seeks to model the interactions between orbiting planets using the FLASH hydrodynamics code developed by the Flash Center for Computational Science at the University of Chicago. We model a wide variety of encounter scenarios and initial conditions, including variations in encounter depth, mass ratio, and encounter velocity, and attempt to constrain what sorts of binary planet configurations are possible and stable.
Time efficient signed Vedic multiplier using redundant binary representation
Directory of Open Access Journals (Sweden)
Ranjan Kumar Barik
2017-03-01
Full Text Available This study presents a high-speed signed Vedic multiplier (SVM) architecture using redundant binary (RB) representation in the Urdhva Tiryagbhyam (UT) sutra. This is the first effort towards extending Vedic algorithms to signed numbers. The proposed multiplier architecture solves the carry propagation issue in the UT sutra, as carry-free addition is possible in RB representation. The proposed design is coded in VHDL and synthesised in Xilinx ISE 14.4 for various FPGA devices. The proposed SVM architecture shows better speed performance compared with various state-of-the-art conventional as well as Vedic architectures.
Estimating Mass Parameters of Doubly Synchronous Binary Asteroids
Davis, Alex; Scheeres, Daniel J.
2017-10-01
The non-spherical mass distributions of binary asteroid systems lead to coupled mutual gravitational forces and torques. Observations of the coupled attitude and orbital dynamics can be leveraged to provide information about the mass parameters of the binary system. The full 3-dimensional motion has 9 degrees of freedom, and the coupled dynamics can be studied only through numerical investigation. In the current study we simplify the system to a planar ellipsoid-ellipsoid binary system in a doubly synchronous orbit. Three modes are identified for the system, which has 4 degrees of freedom, with one degree of freedom corresponding to an ignorable coordinate. The three modes correspond to the three major librational modes of the system when it is in a doubly synchronous orbit. The linearized periods of each mode are a function of the mass parameters of the two asteroids, enabling measurement of these parameters based on observations of the librational motion. Here we implement estimation techniques to evaluate the capabilities of this mass measurement method. We apply this methodology to the Trojan binary asteroid system 617 Patroclus and Menoetius (1906 VY), the final flyby target of the recently announced LUCY Discovery mission. This system is of interest because a stellar occultation campaign of the Patroclus and Menoetius system has suggested that the asteroids are similarly sized oblate ellipsoids moving in a doubly synchronous orbit, making the system an ideal test for this investigation. A number of missed observations during the campaign also suggested the possibility of a crater on the southern limb of Menoetius, the presence of which could be evaluated by our mass estimation method. This presentation will review the methodology and potential accuracy of our approach, in addition to evaluating how the dynamical coupling can be used to help understand light curve and stellar occultation observations for librating binary systems.
Binary evolution and observational constraints
International Nuclear Information System (INIS)
Loore, C. de
1984-01-01
The evolution of close binaries is discussed in connection with problems concerning mass and angular momentum losses. Theoretical and observational evidence for outflow of matter leaving the system during evolution is given: statistics on total masses and mass ratios, effects of accretion on the mass-gaining component, the presence of streams, disks, rings, circumstellar envelopes, period changes, and abundance changes in the atmosphere. The effects of outflowing matter on the evolution are outlined, and estimates of the fraction of matter expelled by the loser and leaving the system are given. The various time scales involved in evolution and observation are compared. Examples of non-conservative evolution are discussed. Problems related to contact phases, concerning mass and energy losses in connection with entropy changes, are briefly analysed. For the advanced stages, the disruption probabilities for supernova explosions are examined. A global picture is given for the evolution of massive close binaries, from the ZAMS through WR phases and X-ray phases, leading to runaway pulsars or to a binary pulsar and later to a millisecond pulsar. (Auth.)
Regularized robust coding for face recognition.
Yang, Meng; Zhang, Lei; Yang, Jian; Zhang, David
2013-05-01
Recently, the sparse representation based classification (SRC) has been proposed for robust face recognition (FR). In SRC, the testing image is coded as a sparse linear combination of the training samples, and the representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Such a sparse coding model assumes that the coding residual follows a Gaussian or Laplacian distribution, which may not be effective enough to describe the coding residual in practical FR systems. Meanwhile, the sparsity constraint on the coding coefficients makes the computational cost of SRC very high. In this paper, we propose a new face coding model, namely regularized robust coding (RRC), which can robustly regress a given signal with regularized regression coefficients. By assuming that the coding residual and the coding coefficient are respectively independent and identically distributed, the RRC seeks a maximum a posteriori solution of the coding problem. An iteratively reweighted regularized robust coding (IR(3)C) algorithm is proposed to solve the RRC model efficiently. Extensive experiments on representative face databases demonstrate that the RRC is much more effective and efficient than state-of-the-art sparse representation based methods in dealing with face occlusion, corruption, lighting, and expression changes, etc.
Efficient Coding of Shape and Transparency for Video Objects
DEFF Research Database (Denmark)
Aghito, Shankar Manuel; Forchhammer, Søren
2007-01-01
the strong correlation with the binary shape layer by morphological erosion operations. Finally, three solutions are proposed for coding the intermediate layer. The knowledge of the two previously encoded layers is utilized in order to increase compression efficiency. Experimental results are reported...
Preliminary design of a Binary Breeder Reactor
International Nuclear Information System (INIS)
Garcia C, E. Y.; Francois, J. L.; Lopez S, R. C.
2014-10-01
A binary breeder reactor (BBR) is a reactor that, by means of transmutation and fission processes, can operate by burning depleted uranium with a small quantity of fissile material. The advantages of a BBR relative to other nuclear reactor types are numerous, taking into account its capacity to operate for a long time without requiring fuel reload or re-arrangement. In this work four different simulations are shown, carried out with the MCNPX code using JEFF-3.1 libraries at 1200 K. The objective of this study is to compare two different models of BBR, a spherical reactor and a cylindrical one, using two fuel cycles for each of them (U-Pu and Th-U) and different reflectors for the two geometries. For all the models a super-criticality state was maintained for at least 10.9 years without any fuel re-arrangement or reload. Plutonium-239 production was achieved in the models where natural uranium was used in the breeding area, while production of uranium-233 was observed in the cases where thorium was used in the fertile area. Finally, stationary-wave-reactor behavior was observed in the spherical reactor models, given the uniform power increase in the breeding area, while the cylindrical models exhibited traveling-wave-reactor behavior, with the burn wave displacing along the cylindrical model. (Author)
Amplitude holographic LPCC filters for 4-f correlator: variants of binary realization
Evtikhiev, N. N.; Zlokazov, E. Yu.; Starikov, S. N.; Starikov, R. S.; Shaulskiy, D. V.
2010-10-01
The application of invariant correlation filters is a method to achieve invariance of image recognition in the presence of input object distortions. The composite filter with linear phase coefficients (LPCC filter) is one of the promising types of correlation filters. LPCC filters can be realized in a scheme of an optoelectronic VanderLugt correlator as synthesized holographic filters for recognition in real-time conditions. The application of binary spatial light modulators for realization of holographic LPCC filters is especially interesting. Variants of "pixel to pixel" binarization methods or representation of grayscale gradations using a binary "subpixel" raster can be used for binary representation of the initial hologram. The results of correlation recognition with binary amplitude holographic LPCC filters are presented in the paper.
Cracking the chromatin code: Precise rule of nucleosome positioning
Trifonov, Edward N.
2011-03-01
Various aspects of packaging DNA in eukaryotic cells are outlined in physical rather than biological terms. The informational and physical nature of packaging instructions encoded in DNA sequences is discussed with the emphasis on signal processing difficulties - very low signal-to-noise ratio and high degeneracy of the nucleosome positioning signal. As the author has been contributing to the field from its very onset in 1980, the review is mostly focused on the works of the author and his colleagues. The leading concept of the overview is the role of deformational properties of DNA in nucleosome positioning. The target of the studies is to derive the DNA bendability matrix describing where along the DNA various dinucleotide elements should be positioned to facilitate its bending in the nucleosome. Three different approaches are described leading to derivation of the DNA deformability sequence pattern, which is a simplified linear presentation of the bendability matrix. All three approaches converge to the same unique sequence motif CGRAAATTTYCG or, in binary form, YRRRRRYYYYYR, both representing the chromatin code.
On Field Size and Success Probability in Network Coding
DEFF Research Database (Denmark)
Geil, Hans Olav; Matsumoto, Ryutaroh; Thomsen, Casper
2008-01-01
Using tools from algebraic geometry and Gröbner basis theory we solve two problems in network coding. First we present a method to determine the smallest field size for which linear network coding is feasible. Second we derive improved estimates on the success probability of random linear network...
Universal data-based method for reconstructing complex networks with binary-state dynamics
Li, Jingwen; Shen, Zhesi; Wang, Wen-Xu; Grebogi, Celso; Lai, Ying-Cheng
2017-03-01
To understand, predict, and control complex networked systems, a prerequisite is to reconstruct the network structure from observable data. Despite recent progress in network reconstruction, binary-state dynamics that are ubiquitous in nature, technology, and society still present an outstanding challenge in this field. Here we offer a framework for reconstructing complex networks with binary-state dynamics by developing a universal data-based linearization approach that is applicable to systems with linear, nonlinear, discontinuous, or stochastic dynamics governed by monotonic functions. The linearization procedure enables us to convert the network reconstruction into a sparse signal reconstruction problem that can be resolved through convex optimization. We demonstrate generally high reconstruction accuracy for a number of complex networks associated with distinct binary-state dynamics from using binary data contaminated by noise and missing data. Our framework is completely data driven, efficient, and robust, and does not require any a priori knowledge about the detailed dynamical process on the network. The framework represents a general paradigm for reconstructing, understanding, and exploiting complex networked systems with binary-state dynamics.
The Accurate Particle Tracer Code
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi
2016-01-01
The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusio...
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
Performance Tuning of x86 OpenMP Codes with MAQAO
Barthou, Denis; Charif Rubial, Andres; Jalby, William; Koliai, Souad; Valensi, Cédric
Failing to find the best optimization sequence for a given application code can lead to compiler-generated code that performs poorly or is inappropriate. It is necessary to analyze performance from the generated assembly code to improve on the compilation process. This paper presents a tool for the performance analysis of multithreaded codes (OpenMP programs are supported at the moment). MAQAO relies on static performance evaluation to identify compiler optimizations and assess the performance of loops. It exploits static binary rewriting for reading and instrumenting object files or executables. Static binary instrumentation allows the insertion of probes at the instruction level. Memory accesses can be captured to help tune the code, but such traces need to be compressed. MAQAO can analyze the results and provide hints for tuning the code. We show on some examples how this can help users improve their OpenMP applications.
Optimal design strategies for sibling studies with binary exposures.
Li, Zhigang; McKeague, Ian W; Lumey, Lambert H
2014-01-01
Sibling studies have become increasingly popular because they provide better control over confounding by unmeasured family-level risk factors than can be obtained in standard cohort studies. However, little attention has been devoted to the development of efficient design strategies for sibling studies in terms of optimizing power. We here address this issue in commonly encountered types of sibling studies, allowing for continuous and binary outcomes and varying numbers of exposed and unexposed siblings. For continuous outcomes, we show that in families with sibling pairs, optimal study power is obtained by recruiting discordant (exposed-control) pairs of siblings. More generally, balancing the exposure status within each family as evenly as possible is shown to be optimal. For binary outcomes, we elucidate how the optimal strategy depends on the variation of the binary response; as the within-family correlation increases, the optimal strategy tends toward only recruiting discordant sibling pairs (as in the case of continuous outcomes). R code for obtaining the optimal strategies is included.
Decimal multiplication using compressor based-BCD to binary converter
Directory of Open Access Journals (Sweden)
Sasidhar Mukkamala
2018-02-01
Full Text Available The objective of this work is to implement a scalable decimal-to-binary converter from 8 to 64 bits (i.e., 2-digit to 16-digit) using a parallel architecture. The proposed converters, along with a binary coded decimal (BCD) adder and binary-to-BCD converters, are used in a parallel implementation of an Urdhva Triyakbhyam (UT)-based 32-bit BCD multiplier. To increase performance, compressor circuits were used in the converters and the multiplier. The designed hardware circuits were verified by behavioural and post-layout simulations. The implementation was carried out on Virtex-6 Field Programmable Gate Array (FPGA) and Application Specific Integrated Circuit (ASIC) platforms with a 90-nm technology library. The results on FPGA show that compressor-based converters and multipliers produce less propagation delay with a slight increase in hardware resources. In the ASIC implementation, the compressor-based converter delay is equivalent to that of the conventional converter with a slight increase in gate count. However, the reduction in delay is evident for the compressor-based multiplier.
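The compressor-based hardware itself is not reproduced here, but the conversions it accelerates are easy to model in software; the sketch below (function names illustrative) shows the BCD-to-binary and binary-to-BCD steps that bracket the binary multiply in such a design.

```python
def bcd_to_binary(bcd_digits):
    # Convert a list of BCD digits (most significant first) to an integer,
    # the operation the paper's compressor circuits implement in hardware
    value = 0
    for d in bcd_digits:
        assert 0 <= d <= 9, "each BCD digit is a 4-bit code for 0-9"
        value = value * 10 + d
    return value

def binary_to_bcd(value):
    # Inverse direction, applied to the binary product
    return [int(c) for c in str(value)]

# A 2-digit BCD multiply routed through a binary core: 25 * 13 = 325
a, b = bcd_to_binary([2, 5]), bcd_to_binary([1, 3])
print(binary_to_bcd(a * b))  # [3, 2, 5]
```

In hardware the decimal-times-ten step is realized with shifts and adds (10x = 8x + 2x), which is where compressor circuits reduce the partial-sum depth.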
Object tracking on mobile devices using binary descriptors
Savakis, Andreas; Quraishi, Mohammad Faiz; Minnehan, Breton
2015-03-01
With the growing ubiquity of mobile devices, advanced applications are relying on computer vision techniques to provide novel experiences for users. Currently, few tracking approaches take into consideration the resource constraints on mobile devices. Designing efficient tracking algorithms and optimizing performance for mobile devices can result in better and more efficient tracking for applications, such as augmented reality. In this paper, we use binary descriptors, including Fast Retina Keypoint (FREAK), Oriented FAST and Rotated BRIEF (ORB), Binary Robust Independent Elementary Features (BRIEF), and Binary Robust Invariant Scalable Keypoints (BRISK), to obtain real-time tracking performance on mobile devices. We consider both Google's Android and Apple's iOS operating systems to implement our tracking approach. The Android implementation is done using Android's Native Development Kit (NDK), which gives the performance benefits of using native code as well as access to legacy libraries. The iOS implementation was created using both the native Objective-C and the C++ programming languages. We also introduce simplified versions of the BRIEF and BRISK descriptors that improve processing speed without compromising tracking accuracy.
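What makes all of these binary descriptors cheap on mobile hardware is that matching reduces to Hamming distance, an XOR followed by a popcount. A minimal sketch with hypothetical integer-packed descriptors (real ORB/BRISK descriptors are 256 or 512 bits):

```python
def hamming(d1, d2):
    # XOR then popcount: Hamming distance between two packed binary descriptors
    return bin(d1 ^ d2).count("1")

def nearest(query, database):
    # Brute-force nearest neighbour by Hamming distance
    return min(range(len(database)), key=lambda i: hamming(query, database[i]))

db = [0b10110010, 0b01101100, 0b11110000]
print(nearest(0b11100001, db))  # -> 2 (distance 2, versus 4 for the others)
```

On actual devices the popcount maps to a single instruction, which is why these matchers run in real time where floating-point descriptors such as SIFT do not.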
Weighted effect coding for observational data with wec
Nieuwenhuis, R.; Grotenhuis, H.F. te; Pelzer, B.J.
2017-01-01
Weighted effect coding refers to a specific coding matrix to include factor variables in generalised linear regression models. With weighted effect coding, the effect for each category represents the deviation of that category from the weighted mean (which corresponds to the sample mean). This
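A sketch of the coding matrix this describes, assuming the usual parameterization in which rows for the reference category carry -n_j/n_ref so that each category effect is a deviation from the weighted (sample) mean; this is illustrative Python, not the wec package's R interface.

```python
def weighted_effect_matrix(labels, reference):
    # Rows: observations; columns: categories other than `reference`.
    # Reference-category rows get -n_j / n_ref in column j, so the
    # weighted effects sum to zero over the sample.
    cats = sorted(set(labels))
    cats.remove(reference)
    counts = {c: labels.count(c) for c in set(labels)}
    n_ref = counts[reference]
    rows = []
    for lab in labels:
        if lab == reference:
            rows.append([-counts[c] / n_ref for c in cats])
        else:
            rows.append([1.0 if lab == c else 0.0 for c in cats])
    return rows

X = weighted_effect_matrix(["a", "a", "b", "c", "c", "c"], reference="a")
# Each column sums to zero: effects are deviations from the weighted mean
print([round(sum(col), 10) for col in zip(*X)])  # [0.0, 0.0]
```

With unweighted effect coding the reference rows would all be -1; the -n_j/n_ref weighting is what shifts the baseline from the unweighted category mean to the sample mean.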
Linear latent variable models: the lava-package
DEFF Research Database (Denmark)
Holst, Klaus Kähler; Budtz-Jørgensen, Esben
2013-01-01
An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features...... are implemented including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition an extensive simulation...
A comparison of nuclear power systems for Brazil using plutonium and binary cycles
International Nuclear Information System (INIS)
Ishiguro, Y.; Fernandes, J.E.
1985-01-01
Nuclear power systems based on the plutonium cycle and the binary cycle are compared, taking into account natural uranium demand and reactor combination. The systems start with PWR type reactors (U5/U8) and change to systems composed exclusively of FBR type reactors or to PWR-FBR symbiotic systems. Four loading modes are considered for the PWR and two for the FBR. The FBR is either an LMFBR loaded with Pu/U or an LMFBR loaded according to the binary cycle. A linear and a non-linear capacity growth and two different criteria for the FBR introduction are considered. The results show that a 100 GWe permanent system can be established in 50 years in all cases, based on 300000 t of natural uranium, and that in case of delay in the FBR introduction, if a thermal-fast symbiotic system is chosen, a binary cycle could be more advantageous than a plutonium cycle. (F.E.) [pt
Odor concentration invariance by chemical ratio coding
Directory of Open Access Journals (Sweden)
Naoshige Uchida
2008-08-01
Full Text Available Many animal species rely on chemical signals to extract ecologically important information from the environment. Yet in natural conditions chemical signals will frequently undergo concentration changes that produce differences in both level and pattern of activation of olfactory receptor neurons. Thus, a central problem in olfactory processing is how the system is able to recognize the same stimulus across different concentrations. To signal species identity for mate recognition, some insects use the ratio of two components in a binary chemical mixture to produce a code that is invariant to dilution. Here, using psychophysical methods, we show that rats also classify binary odor mixtures according to the molar ratios of their components, spontaneously generalizing over at least a tenfold concentration range. These results indicate that extracting chemical ratio information is not restricted to pheromone signaling and suggest a general solution for concentration-invariant odor recognition by the mammalian olfactory system.
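The dilution invariance of a ratio code is easy to verify numerically: scaling both components of a binary mixture leaves their molar ratio, and hence any ratio-based classification, unchanged, even though the absolute receptor drive changes. A toy sketch with made-up concentrations:

```python
def classify_by_ratio(conc_a, conc_b, boundary=1.0):
    # Decide mixture identity from the component ratio alone;
    # a common dilution factor cancels out of conc_a / conc_b
    return "A-dominant" if conc_a / conc_b > boundary else "B-dominant"

# The same 68:32 mixture across a hundredfold concentration range
for dilution in (1.0, 0.1, 0.01):
    print(classify_by_ratio(68.0 * dilution, 32.0 * dilution))  # "A-dominant" each time
```

A code based on absolute concentration (e.g., thresholding conc_a alone) would flip its answer under the same dilutions, which is the failure mode ratio coding avoids.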
Parallel beam dynamics simulation of linear accelerators
International Nuclear Information System (INIS)
Qiang, Ji; Ryne, Robert D.
2002-01-01
In this paper we describe parallel particle-in-cell methods for the large scale simulation of beam dynamics in linear accelerators. These techniques have been implemented in the IMPACT (Integrated Map and Particle Accelerator Tracking) code. IMPACT is being used to study the behavior of intense charged particle beams and as a tool for the design of next-generation linear accelerators. As examples, we present applications of the code to the study of emittance exchange in high intensity beams and to the study of beam transport in a proposed accelerator for the development of accelerator-driven waste transmutation technologies
Development of simple fixed linear predictors for use in speech ...
African Journals Online (AJOL)
Development of simple fixed linear predictors for use in speech compression. ... A very popular method used for compression is Linear Prediction Coding (LPC), by using the Linear Prediction Model. The development of simple ... Various speech signals are used to test the performance of both filters within a DPCM system.
Adaptive residual DPCM for lossless intra coding
Cai, Xun; Lim, Jae S.
2015-03-01
In the Differential Pulse-code Modulation (DPCM) image coding, the intensity of a pixel is predicted as a linear combination of a set of surrounding pixels and the prediction error is encoded. In this paper, we propose the adaptive residual DPCM (ARDPCM) for intra lossless coding. In the ARDPCM, intra residual samples are predicted using adaptive mode-dependent DPCM weights. The weights are estimated by minimizing the Mean Squared Error (MSE) of coded data and they are synchronized at the encoder and the decoder. The proposed method is implemented on the High Efficiency Video Coding (HEVC) reference software. Experimental results show that the ARDPCM significantly outperforms HEVC lossless coding and HEVC with the DPCM. The proposed method is also computationally efficient.
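The plain DPCM step that ARDPCM builds on can be sketched in one dimension: predict each sample from its neighbour, transmit the rounded residual, and invert exactly at the decoder. The adaptive, mode-dependent weight estimation of ARDPCM is omitted here; this only illustrates the predict-and-code-residual idea.

```python
def dpcm_encode(samples, weight=1.0):
    # Predict each sample as weight * previous sample and emit the
    # rounded prediction residual; lossless for integer inputs
    residuals, prev = [], 0
    for s in samples:
        pred = round(weight * prev)
        residuals.append(s - pred)
        prev = s
    return residuals

def dpcm_decode(residuals, weight=1.0):
    # Mirror the encoder's predictor to reconstruct exactly
    out, prev = [], 0
    for r in residuals:
        prev = round(weight * prev) + r
        out.append(prev)
    return out

pixels = [100, 102, 104, 103, 90]
res = dpcm_encode(pixels)
print(res)  # [100, 2, 2, -1, -13]: residuals stay small after the first sample
assert dpcm_decode(res) == pixels
```

Small residuals are cheaper to entropy-code than raw intensities, which is where the compression gain comes from; ARDPCM improves on this by choosing the predictor weights per intra mode to minimize the residual MSE.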
Low Complexity List Decoding for Polar Codes with Multiple CRC Codes
Directory of Open Access Journals (Sweden)
Jong-Hwan Kim
2017-04-01
Full Text Available Polar codes are the first family of error correcting codes that provably achieve the capacity of symmetric binary-input discrete memoryless channels with low complexity. Since the development of polar codes, there have been many studies to improve their finite-length performance. As a result, polar codes are now adopted as a channel code for the control channel of 5G new radio of the 3rd generation partnership project. However, decoder implementation remains one of the big practical problems, and low-complexity decoding has been studied. This paper addresses a low complexity successive cancellation list decoding for polar codes utilizing multiple cyclic redundancy check (CRC) codes. While some research uses multiple CRC codes to reduce memory and time complexity, we consider the operational complexity of decoding and reduce it by optimizing CRC positions in combination with a modified decoding operation. As a result, the proposed scheme obtains not only complexity reduction from early stopping of decoding, but also additional reduction from the reduced number of decoding paths.
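The CRC component of CRC-aided list decoding amounts to polynomial division over GF(2): a decoding path survives only if its embedded CRC checks. The sketch below uses a toy 3-bit CRC, not the polynomials or CRC positions of the paper or the 5G standard.

```python
def crc_remainder(bits, poly):
    # Remainder of GF(2) polynomial division; `poly` includes its
    # leading 1 (e.g. [1, 0, 1, 1] for x^3 + x + 1)
    buf = list(bits) + [0] * (len(poly) - 1)  # append zeroed CRC register
    for i in range(len(bits)):
        if buf[i]:
            for j, p in enumerate(poly):
                buf[i + j] ^= p
    return buf[len(bits):]

def crc_check(bits_with_crc, poly):
    # A candidate path passes if its embedded CRC matches
    split = len(bits_with_crc) - (len(poly) - 1)
    data, tail = bits_with_crc[:split], bits_with_crc[split:]
    return crc_remainder(data, poly) == tail

poly = [1, 0, 1, 1]                            # toy CRC-3: x^3 + x + 1
msg = [1, 1, 0, 1]
codeword = msg + crc_remainder(msg, poly)
print(crc_check(codeword, poly))               # True
print(crc_check([1, 0] + codeword[2:], poly))  # False: a corrupted path fails
```

In list decoding this check prunes wrong candidate paths; placing several shorter CRCs along the codeword, as the paper studies, lets the decoder stop or prune earlier than a single terminal CRC would allow.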
The structures of binary compounds
Hafner, J; Jensen, WB; Majewski, JA; Mathis, K; Villars, P; Vogl, P; de Boer, FR
1990-01-01
- Up-to-date compilation of the experimental data on the structures of binary compounds by Villars and colleagues. - Coloured structure maps which order the compounds into their respective structural domains and present for the first time the local co-ordination polyhedra for the 150 most frequently occurring structure types, pedagogically very helpful and useful in the search for new materials with a required crystal structure. - Crystal co-ordination formulas: a flexible notation for the interpretation of solid-state structures by chemist Bill Jensen. - Recent important advances in unders
Young and Waltzing Binary Stars
2001-10-01
ADONIS Observes Low-mass Eclipsing System in Orion Summary A series of very detailed images of a binary system of two young stars have been combined into a movie. In merely 3 days, the stars swing around each other. As seen from the earth, they pass in front of each other twice during a full revolution, producing eclipses during which their combined brightness diminishes. A careful analysis of the orbital motions has now made it possible to deduce the masses of the two dancing stars. Both turn out to be about as heavy as our Sun. But while the Sun is about 4500 million years old, these two stars are still in their infancy. They are located some 1500 light-years away in the Orion star-forming region and they probably formed just 10 million years ago. This is the first time such an accurate determination of the stellar masses could be achieved for a young binary system of low-mass stars. The new result provides an important piece of information for our current understanding of how young stars evolve. The observations were obtained by a team of astronomers from Italy and ESO [1] using the ADaptive Optics Near Infrared System (ADONIS) on the 3.6-m telescope at the ESO La Silla Observatory. PR Photo 29a/01: The RXJ 0529.4+0041 system before primary eclipse PR Photo 29b/01: The RXJ 0529.4+0041 system at mid-primary eclipse PR Photo 29c/01: The RXJ 0529.4+0041 system after primary eclipse PR Photo 29d/01: The RXJ 0529.4+0041 system before secondary eclipse PR Photo 29e/01: The RXJ 0529.4+0041 system at mid-secondary eclipse PR Photo 29f/01: The RXJ 0529.4+0041 system after secondary eclipse PR Video Clip 06/01: Video of the RXJ 0529.4+0041 system Binary stars and stellar masses For some time, astronomers have noted that most stars seem to form in binary or multiple systems. This is quite fortunate, as the study of binary stars is the only way in which it is possible to measure directly one of the most fundamental quantities of a star, its mass. The mass of a
Pulsar magnetospheres in binary systems
Ershkovich, A. I.; Dolan, J. F.
1985-01-01
The criterion for stability of a tangential discontinuity interface in a magnetized, perfectly conducting inviscid plasma is investigated by deriving the dispersion equation including the effects of both gravitational and centrifugal acceleration. The results are applied to neutron star magnetospheres in X-ray binaries. The Kelvin-Helmholtz instability appears to be important in determining whether MHD waves of large amplitude generated by instability may intermix the plasma effectively, resulting in accretion onto the whole star as suggested by Arons and Lea and leading to no X-ray pulsar behavior.
Tomographic reconstruction of binary fields
International Nuclear Information System (INIS)
Roux, Stéphane; Leclerc, Hugo; Hild, François
2012-01-01
A novel algorithm is proposed for reconstructing binary images from their projection along a set of different orientations. Based on a nonlinear transformation of the projection data, classical back-projection procedures can be used iteratively to converge to the sought image. A multiscale implementation allows for a faster convergence. The algorithm is tested on images up to 1 Mb definition, and an error free reconstruction is achieved with a very limited number of projection data, saving a factor of about 100 on the number of projections required for classical reconstruction algorithms.
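The underlying consistency problem in this abstract — finding a 0/1 image that matches given projections — can be illustrated on a toy scale. The sketch below is not the authors' multiscale back-projection algorithm; it simply brute-forces all 3×3 binary images matching the row and column sums of an invented target image, under the assumption of two orthogonal projection directions only.

```python
import numpy as np
from itertools import product

# Invented 3x3 binary target image (for illustration only).
target = np.array([[1, 0, 0],
                   [1, 1, 0],
                   [0, 1, 1]])
# Its two orthogonal projections: row sums and column sums.
rows, cols = target.sum(axis=1), target.sum(axis=0)

# Brute-force every 3x3 binary image and keep those consistent
# with both projections.
solutions = []
for bits in product([0, 1], repeat=9):
    img = np.array(bits).reshape(3, 3)
    if (img.sum(axis=1) == rows).all() and (img.sum(axis=0) == cols).all():
        solutions.append(img)
```

With so few projection directions the solution is generally not unique (so-called switching components produce alternatives), which is why practical algorithms such as the one in the abstract need more orientations or regularization.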
Hydrodynamic 'memory' of binary fluid mixtures
International Nuclear Information System (INIS)
Kalashnik, M. V.; Ingel, L. Kh.
2006-01-01
A theoretical analysis is presented of hydrostatic adjustment in a two-component fluid system, such as seawater stratified with respect to temperature and salinity. Both the linear approximation and the nonlinear problem are investigated. It is shown that scenarios of relaxation to a hydrostatically balanced state in binary fluid mixtures may substantially differ from hydrostatic adjustment in fluids that can be stratified only with respect to temperature. In particular, inviscid two-component fluids have 'memory': a horizontally nonuniform disturbance in the initial temperature or salinity distribution does not vanish even at the final stage, transforming into a persistent thermohaline 'trace.' Despite stability of density stratification and convective stability of the fluid system by all known criteria, an initial temperature disturbance may not decay and may even increase in amplitude. Moreover, its sign may change (depending on the relative contributions of temperature and salinity to stable background density stratification). Hydrostatic adjustment may involve development of discontinuous distributions from smooth initial temperature or concentration distributions. These properties of two-component fluids explain, in particular, the occurrence of persistent horizontally or vertically nonuniform temperature and salinity distributions in the ocean, including discontinuous ones.
Linear Accelerator
A linear accelerator (LINAC) customizes high-energy x-rays or ... A linear accelerator (LINAC) is the device most commonly used ...
Wang, Xulong; Philip, Vivek M; Ananda, Guruprasad; White, Charles C; Malhotra, Ankit; Michalski, Paul J; Karuturi, Krishna R Murthy; Chintalapudi, Sumana R; Acklin, Casey; Sasner, Michael; Bennett, David A; De Jager, Philip L; Howell, Gareth R; Carter, Gregory W
2018-03-05
Recent technical and methodological advances have greatly enhanced genome-wide association studies (GWAS). The advent of low-cost whole-genome sequencing facilitates high-resolution variant identification, and the development of linear mixed models (LMM) allows improved identification of putatively causal variants. While essential for correcting false positive associations due to sample relatedness and population stratification, LMMs have commonly been restricted to quantitative variables. However, phenotypic traits in association studies are often categorical, coded as binary case-control or ordered variables describing disease stages. To address these issues, we have devised a method for genomic association studies that implements a generalized linear mixed model (GLMM) in a Bayesian framework, called Bayes-GLMM. Bayes-GLMM has four major features: (1) support of categorical, binary and quantitative variables; (2) cohesive integration of previous GWAS results for related traits; (3) correction for sample relatedness by mixed modeling; and (4) model estimation by both Markov chain Monte Carlo (MCMC) sampling and maximum likelihood estimation. We applied Bayes-GLMM to the whole-genome sequencing cohort of the Alzheimer's Disease Sequencing Project (ADSP). This study contains 570 individuals from 111 families, each with Alzheimer's disease diagnosed at one of four confidence levels. With Bayes-GLMM we identified four variants in three loci significantly associated with Alzheimer's disease. Two variants, rs140233081 and rs149372995, lie between PRKAR1B and PDGFA. The encoded proteins are localized to the glial-vascular unit, and PDGFA transcript levels are associated with AD-related neuropathology. In summary, this work provides implementation of a flexible, generalized mixed model approach in a Bayesian framework for association studies. Copyright © 2018, Genetics.
Self-orthogonal codes from some bush-type Hadamard matrices ...
African Journals Online (AJOL)
By means of a construction method outlined by Harada and Tonchev, we determine some non-binary self-orthogonal codes obtained from the row span of orbit matrices of Bush-type Hadamard matrices that admit a fixed-point-free and fixed-block-free automorphism of prime order. We show that the code [20; 15; 4]5 obtained ...
Microlensing Signature of Binary Black Holes
Schnittman, Jeremy; Sahu, Kailash; Littenberg, Tyson
2012-01-01
We calculate the light curves of galactic bulge stars magnified via microlensing by stellar-mass binary black holes along the line-of-sight. We show the sensitivity to measuring various lens parameters for a range of survey cadences and photometric precision. Using public data from the OGLE collaboration, we identify two candidates for massive binary systems, and discuss implications for theories of star formation and binary evolution.
Abraham, Nikhil
2015-01-01
Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skills ...
Gao, Wen
2015-01-01
This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AVS2.
Hydropower Bidding Using Linearized Start-Ups
Directory of Open Access Journals (Sweden)
Ellen Krohn Aasgård
2017-11-01
Full Text Available Hydropower producers must submit bids to electricity market auctions where they state their willingness to produce power. These bids may be determined using a mixed-integer linear stochastic program. However, for large interconnected river systems, this program may be too complex to be solved within the time limits set by current market rules. This paper investigates whether a linear approximation to start-ups can be used to reduce the computational burden without significantly degrading the solution quality. In order to investigate the trade-off of time versus solution quality, linear approximation is compared to a formulation that uses binary variables in a case study that simulates the operation of a reservoir system over time.
Electrohydrodynamics of binary electrolytes driven by modulated surface potentials
DEFF Research Database (Denmark)
Mortensen, Asger; Olesen, Laurits Højgaard; Belmon, L.
2005-01-01
We study the electrohydrodynamics of the Debye screening layer that arises in an aqueous binary solution near a planar insulating wall when applying a spatially modulated ac voltage. Combining this with first order perturbation theory we establish the governing equations for the full nonequilibrium problem and obtain analytic solutions in the bulk for the pressure and velocity fields of the electrolyte and for the electric potential. We find good agreement between the numerics of the full problem and the analytics of the linear theory. Our work provides the theoretical foundations of circuit models ...
Establishing Malware Attribution and Binary Provenance Using Multicompilation Techniques
Energy Technology Data Exchange (ETDEWEB)
Ramshaw, M. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-07-28
Malware is a serious problem for computer systems and costs businesses and customers billions of dollars a year in addition to compromising their private information. Detecting malware is particularly difficult because malware source code can be compiled in many different ways and generate many different digital signatures, which causes problems for most anti-malware programs that rely on static signature detection. Our project uses a convolutional neural network to identify malware programs but these require large amounts of data to be effective. Towards that end, we gather thousands of source code files from publicly available programming contest sites and compile them with several different compilers and flags. Building upon current research, we then transform these binary files into image representations and use them to train a long-term recurrent convolutional neural network that will eventually be used to identify how a malware binary was compiled. This information will include the compiler, version of the compiler and the options used in compilation, information which can be critical in determining where a malware program came from and even who authored it.
Survival of planets around shrinking stellar binaries
Muñoz, Diego J.; Lai, Dong
2015-01-01
The discovery of transiting circumbinary planets by the Kepler mission suggests that planets can form efficiently around binary stars. None of the stellar binaries currently known to host planets has a period shorter than 7 d, despite the large number of eclipsing binaries found in the Kepler target list with periods shorter than a few days. These compact binaries are believed to have evolved from wider orbits into their current configurations via the so-called Lidov–Kozai migration mechanism, in which gravitational perturbations from a distant tertiary companion induce large-amplitude eccentricity oscillations in the binary, followed by orbital decay and circularization due to tidal dissipation in the stars. Here we explore the orbital evolution of planets around binaries undergoing orbital decay by this mechanism. We show that planets may survive and become misaligned from their host binary, or may develop erratic behavior in eccentricity, resulting in their consumption by the stars or ejection from the system as the binary decays. Our results suggest that circumbinary planets around compact binaries could still exist, and we offer predictions as to what their orbital configurations should be like. PMID:26159412
Speech perception of noise with binary gains
DEFF Research Database (Denmark)
Wang, DeLiang; Kjems, Ulrik; Pedersen, Michael Syskind
2008-01-01
For a given mixture of speech and noise, an ideal binary time-frequency mask is constructed by comparing speech energy and noise energy within local time-frequency units. It is observed that listeners achieve nearly perfect speech recognition from gated noise with binary gains prescribed by the ideal binary mask. Only 16 filter channels and a frame rate of 100 Hz are sufficient for high intelligibility. The results show that, despite a dramatic reduction of speech information, a pattern of binary gains provides an adequate basis for speech perception.
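The construction described in this abstract — a unit-by-unit comparison of speech and noise energy — can be sketched in a few lines. The per-unit energies below are invented toy values, not the 16-channel filterbank output of the study; the local SNR criterion `lc_db` is a common convention and an assumption here.

```python
import numpy as np

def ideal_binary_mask(speech_energy, noise_energy, lc_db=0.0):
    """Gain 1 where local speech energy exceeds noise energy by lc_db, else 0."""
    snr_db = 10 * np.log10(speech_energy / noise_energy)
    return (snr_db > lc_db).astype(int)

# Toy 2x2 grid of time-frequency units (rows: channels, cols: frames).
speech = np.array([[4.0, 0.1],
                   [2.0, 0.5]])
noise = np.array([[1.0, 1.0],
                  [1.0, 1.0]])
mask = ideal_binary_mask(speech, noise)
# "Gated noise": the noise signal passes only through units where the mask is 1.
gated = noise * mask
```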
Survival of planets around shrinking stellar binaries.
Muñoz, Diego J; Lai, Dong
2015-07-28
The discovery of transiting circumbinary planets by the Kepler mission suggests that planets can form efficiently around binary stars. None of the stellar binaries currently known to host planets has a period shorter than 7 d, despite the large number of eclipsing binaries found in the Kepler target list with periods shorter than a few days. These compact binaries are believed to have evolved from wider orbits into their current configurations via the so-called Lidov-Kozai migration mechanism, in which gravitational perturbations from a distant tertiary companion induce large-amplitude eccentricity oscillations in the binary, followed by orbital decay and circularization due to tidal dissipation in the stars. Here we explore the orbital evolution of planets around binaries undergoing orbital decay by this mechanism. We show that planets may survive and become misaligned from their host binary, or may develop erratic behavior in eccentricity, resulting in their consumption by the stars or ejection from the system as the binary decays. Our results suggest that circumbinary planets around compact binaries could still exist, and we offer predictions as to what their orbital configurations should be like.
Comments Regarding the Binary Power Law for Heterogeneity of Disease Incidence
The binary power law (BPL) has been successfully used to characterize heterogeneity (overdispersion, or small-scale aggregation) of disease incidence for many plant pathosystems. With the BPL, the log of the observed variance is a linear function of the log of the theoretical variance for a binomial...
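The linear relationship the abstract describes can be fitted by ordinary least squares on the log scale. A minimal sketch with synthetic numbers (the sample size, incidences, and BPL parameters A = 2, b = 1.3 are all invented for illustration):

```python
import numpy as np

# Binary power law: log(v_obs) = log(A) + b * log(v_bin), where
# v_bin = p*(1-p)/n is the binomial (random-pattern) variance of
# incidence for mean incidence p in sampling units of n plants.
n = 10                               # plants per sampling unit (assumed)
p = np.array([0.1, 0.2, 0.3, 0.4])   # mean incidence per data set (toy)
v_bin = p * (1 - p) / n
v_obs = 2.0 * v_bin ** 1.3           # synthetic "observed" variances

# Fit the straight line on log-log scale; slope b > 1 indicates
# aggregation of diseased individuals beyond binomial randomness.
b, logA = np.polyfit(np.log(v_bin), np.log(v_obs), 1)
```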
Energy Technology Data Exchange (ETDEWEB)
Corral B, J. R.
2015-07-01
Humans should avoid exposure to ionizing radiation, because its consequences are harmful to health. Although there are different emission sources of radiation, those generated by medical devices are usually of great interest, since people who attend hospitals are exposed in one way or another to ionizing radiation. It is therefore important to conduct studies on the radiation levels generated in hospitals as a result of the use of medical equipment. To determine the exposure rate of a radioactive facility there are different methods, including radiation detectors and computational methods; this thesis uses the computational method. With the program MCNP5, the radiation exposure rate in the radiotherapy room of the Cancer Center of the ABC Hospital in Mexico City was determined. In the application of the computational method, the thicknesses of the shields were first calculated using variables such as: 1) distance from the shield to the source; 2) desired weekly equivalent dose; 3) weekly total equivalent dose emitted by the equipment; and 4) occupation and use factors. Once the thicknesses were obtained, the bunker was modeled using the mentioned program. The program uses the Monte Carlo code to probabilistically determine the phenomena of interaction of radiation with the shield that take place during X-ray emission from the linear accelerator. The results of the computational analysis were compared with those obtained experimentally with the detection method, for which a Geiger-Muller counter was required; the linear accelerator was programmed with an energy of 19 MV and 500 monitor units, positioning the detector at the corresponding boundary. (Author)
Robust image transmission performed by SPIHT and turbo-codes
Directory of Open Access Journals (Sweden)
Lakhdar Moulay Abdelmounaim
2008-01-01
Full Text Available This work describes the method for providing robustness to errors from a binary symmetric channel for the SPIHT image compression. The source rate and channel rate are jointly optimized by a stream of fixed-size channel packets. Punctured turbo codes are used for the channel coding, providing stronger error protection than previously available codes. We use the most appropriate set of puncturing patterns that ensure the best source rate. The presented rate allocation scheme obtains all necessary information from the SPIHT encoder, without requiring image decompression.
TIDALLY INDUCED PULSATIONS IN KEPLER ECLIPSING BINARY KIC 3230227
Energy Technology Data Exchange (ETDEWEB)
Guo, Zhao; Gies, Douglas R. [Center for High Angular Resolution Astronomy and Department of Physics and Astronomy, Georgia State University, P.O. Box 5060, Atlanta, GA 30302-5060 (United States); Fuller, Jim, E-mail: guo@astro.gsu.edu, E-mail: gies@chara.gsu.edu, E-mail: jfuller@caltech.edu [TAPIR, Walter Burke Institute for Theoretical Physics, Mailcode 350-17, Caltech, Pasadena, CA 91125 (United States)
2017-01-01
KIC 3230227 is a short period (P ≈ 7.0 days) eclipsing binary with a very eccentric orbit (e = 0.6). From combined analysis of radial velocities and Kepler light curves, this system is found to be composed of two A-type stars, with masses of M₁ = 1.84 ± 0.18 M⊙, M₂ = 1.73 ± 0.17 M⊙ and radii of R₁ = 2.01 ± 0.09 R⊙, R₂ = 1.68 ± 0.08 R⊙ for the primary and secondary, respectively. In addition to an eclipse, the binary light curve shows a brightening and dimming near periastron, making this a somewhat rare eclipsing heartbeat star system. After removing the binary light curve model, more than 10 pulsational frequencies are present in the Fourier spectrum of the residuals, and most of them are integer multiples of the orbital frequency. These pulsations are tidally driven, and both the amplitudes and phases are in agreement with predictions from linear tidal theory for l = 2, m = −2 prograde modes.
Foundations of linear and generalized linear models
Agresti, Alan
2015-01-01
A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,
Zhang, Li; Lin, Shu; Abdel-Ghaffar, Khaled; Ding, Zhi; Zhou, Bo
2010-01-01
This paper consists of three parts. The first part presents a large class of new binary quasi-cyclic (QC)-LDPC codes with girth of at least 6 whose parity-check matrices are constructed based on cyclic subgroups of finite fields. Experimental results show that the codes constructed perform well over the binary-input AWGN channel with iterative decoding using the sum-product algorithm (SPA). The second part analyzes the ranks of the p...
Locally orderless registration code
DEFF Research Database (Denmark)
2012-01-01
This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks installed and is provided for 64 bit on Mac, Linux and Windows.
Indian Academy of Sciences (India)
sound quality is, in essence, obtained by accurate waveform coding and decoding of the audio signals. In addition, the coded audio information is protected against disc errors by the use of a Cross Interleaved Reed-Solomon Code (CIRC). Reed-Solomon codes were discovered by Irving Reed and Gus Solomon in 1960.
A note on the control function approach with an instrumental variable and a binary outcome.
Tchetgen Tchetgen, Eric J
2014-12-01
Unobserved confounding is a well known threat to causal inference in non-experimental studies. The instrumental variable design can under certain conditions be used to recover an unbiased estimator of a treatment effect even if unobserved confounding cannot be ruled out with certainty. For continuous outcomes, two stage least squares is the most common instrumental variable estimator used in epidemiologic applications. For a rare binary outcome, an analogous linear-logistic two-stage procedure can be used. Alternatively, a control function approach is sometimes used which entails entering the residual from the first stage linear model as a covariate in a second stage logistic regression of the outcome on the treatment. Both strategies for binary response have previously formally been justified only for continuous exposure, which has impeded widespread use of the approach outside of this setting. In this note, we consider the important setting of binary exposure in the context of a binary outcome. We provide an alternative motivation for the control function approach which is appropriate for binary exposure, thus establishing simple conditions under which the approach may be used for instrumental variable estimation when the outcome is rare. In the proposed approach, the first stage regression involves a logistic model of the exposure conditional on the instrumental variable, and the second stage regression is a logistic regression of the outcome on the exposure adjusting for the first stage residual. In the event of a non-rare outcome, we recommend replacing the second stage logistic model with a risk ratio regression.
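The two-stage procedure proposed in this note can be sketched numerically. The example below is a hedged illustration, not the author's implementation: the data are simulated with invented coefficients and, for simplicity, with no actual unobserved confounding, and the logistic fits use a hand-rolled Newton-Raphson routine rather than a statistics package.

```python
import numpy as np

def logit_fit(X, y, iters=50):
    """Maximum-likelihood logistic regression via Newton-Raphson.
    X must include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                 # score
        H = X.T @ (X * (p * (1 - p))[:, None])  # Fisher information
        beta = beta + np.linalg.solve(H, grad)
    return beta

rng = np.random.default_rng(0)
n = 5000
z = rng.integers(0, 2, n)                                # binary instrument
x = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 1.0 * z))))  # binary exposure
y = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 0.7 * x))))  # binary outcome

# Stage 1: logistic regression of the exposure on the instrument.
Z1 = np.column_stack([np.ones(n), z])
a = logit_fit(Z1, x)
resid = x - 1 / (1 + np.exp(-(Z1 @ a)))   # first-stage residual

# Stage 2: logistic regression of the outcome on the exposure,
# adjusting for the first-stage residual (the control function).
X2 = np.column_stack([np.ones(n), x, resid])
b = logit_fit(X2, y)
```

Because the simulation has no confounding, the stage-2 exposure coefficient should land near the generating value of 0.7; under real confounding the residual term is what absorbs the bias.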
Galustov, G. G.; Voronin, V. V.
2017-05-01
The sequence generator generates a sequence of pseudorandom binary numbers using a linear-feedback shift register (LFSR). This block implements the LFSR using a simple shift register generator (SSRG, or Fibonacci) configuration. In this article we introduce the concept of a probabilistic binary element and provide requirements which ensure compliance with the criterion of "uniformity" in the implementation of basic physical generators of uniformly distributed random number sequences. Based on these studies, we obtained an analytic relation between the parameters of the binary sequence and the parameters of the numerical sequence at the shift register output. The obtained analytical dependencies can help in evaluating the statistical characteristics of the processes in solving problems of statistical modeling. It is supposed that the binary sequence output from the binary probabilistic element is produced using a physical noise process. It is shown that the errors observed in statistical modeling using pseudo-random numbers do not occur if the model examines linear systems with constant parameters, but in the case of models of nonlinear systems, higher-order moments can have a Gaussian distribution.
Binary Relations as a Foundation of Mathematics
Kuper, Jan; Barendsen, E.; Capretta, V.; Geuvers, H.; Niqui, M.
2007-01-01
We describe a theory for binary relations in the Zermelo-Fraenkel style. We choose for ZFCU, a variant of ZFC Set theory in which the Axiom of Foundation is replaced by an axiom allowing for non-wellfounded sets. The theory of binary relations is shown to be equi-consistent with ZFCU by constructing a ...
Novel quantum inspired binary neural network algorithm
Indian Academy of Sciences (India)
In this paper, a quantum based binary neural network algorithm is proposed, named as novel quantum binary neural network algorithm (NQ-BNN). It forms a neural network structure by deciding weights and separability parameter in quantum based manner. Quantum computing concept represents solution probabilistically ...
The Evolution of Compact Binary Star Systems.
Postnov, Konstantin A; Yungelson, Lev R
2014-01-01
We review the formation and evolution of compact binary stars consisting of white dwarfs (WDs), neutron stars (NSs), and black holes (BHs). Mergings of compact-star binaries are expected to be the most important sources for forthcoming gravitational-wave (GW) astronomy. In the first part of the review, we discuss observational manifestations of close binaries with NS and/or BH components and their merger rate, crucial points in the formation and evolution of compact stars in binary systems, including the treatment of the natal kicks, which NSs and BHs acquire during the core collapse of massive stars and the common envelope phase of binary evolution, which are most relevant to the merging rates of NS-NS, NS-BH and BH-BH binaries. The second part of the review is devoted mainly to the formation and evolution of binary WDs and their observational manifestations, including their role as progenitors of cosmologically-important thermonuclear SN Ia. We also consider AM CVn-stars, which are thought to be the best verification binary GW sources for future low-frequency GW space interferometers.
The Evolution of Compact Binary Star Systems
Directory of Open Access Journals (Sweden)
Yungelson, Lev R.
2006-12-01
Full Text Available We review the formation and evolution of compact binary stars consisting of white dwarfs (WDs), neutron stars (NSs), and black holes (BHs). Binary NSs and BHs are thought to be the primary astrophysical sources of gravitational waves (GWs) within the frequency band of ground-based detectors, while compact binaries of WDs are important sources of GWs at lower frequencies to be covered by space interferometers (LISA). Major uncertainties in the current understanding of properties of NSs and BHs most relevant to the GW studies are discussed, including the treatment of the natal kicks which compact stellar remnants acquire during the core collapse of massive stars and the common envelope phase of binary evolution. We discuss the coalescence rates of binary NSs and BHs and prospects for their detections, the formation and evolution of binary WDs and their observational manifestations. Special attention is given to AM CVn-stars -- compact binaries in which the Roche lobe is filled by another WD or a low-mass partially degenerate helium-star, as these stars are thought to be the best LISA verification binary GW sources.
The Evolution of Compact Binary Star Systems
Directory of Open Access Journals (Sweden)
Konstantin A. Postnov
2014-05-01
Full Text Available We review the formation and evolution of compact binary stars consisting of white dwarfs (WDs), neutron stars (NSs), and black holes (BHs). Mergings of compact-star binaries are expected to be the most important sources for forthcoming gravitational-wave (GW) astronomy. In the first part of the review, we discuss observational manifestations of close binaries with NS and/or BH components and their merger rate, crucial points in the formation and evolution of compact stars in binary systems, including the treatment of the natal kicks, which NSs and BHs acquire during the core collapse of massive stars and the common envelope phase of binary evolution, which are most relevant to the merging rates of NS-NS, NS-BH and BH-BH binaries. The second part of the review is devoted mainly to the formation and evolution of binary WDs and their observational manifestations, including their role as progenitors of cosmologically-important thermonuclear SN Ia. We also consider AM CVn-stars, which are thought to be the best verification binary GW sources for future low-frequency GW space interferometers.
Microlensing Binaries with Candidate Brown Dwarf Companions
DEFF Research Database (Denmark)
Shin, I.-G; Han, C.; Gould, A.
2012-01-01
Brown dwarfs are important objects because they may provide a missing link between stars and planets, two populations that have dramatically different formation histories. In this paper, we present the candidate binaries with brown dwarf companions that are found by analyzing binary microlensing ... with well-covered light curves increases with new-generation searches ...
Statistical properties of spectroscopic binary stars
Hogeveen, S.J.
1992-01-01
As part of a study of the mass-ratio distribution of spectroscopic binary stars, the statistical properties of the systems in the Eighth Catalogue of the Orbital Elements of Spectroscopic Binary Stars, compiled by Batten et al. (1989), are investigated. Histograms are presented of the
An Acidity Scale for Binary Oxides.
Smith, Derek W.
1987-01-01
Discusses the classification of binary oxides as acidic, basic, or amphoteric. Demonstrates how a numerical scale for acidity/basicity of binary oxides can be constructed using thermochemical data for oxoacid salts. Presents the calculations derived from the data that provide the numeric scale values. (TW)
Binary trees equipped with semivaluations | Pajoohesh ...
African Journals Online (AJOL)
Our interest in this lattice stems from its application to binary decision trees. Binary decision trees form a crucial tool for algorithmic time analysis. The lattice properties of Tn are studied and we show that every Tn has a sublattice isomorphic to Tn-1 and prove that Tn is generated by Tn-1. Also we show that the distance from ...
Adamson , Brian; Adjih , Cédric; Bilbao , Josu; Firoiu , Victor; Fitzek , Frank; Samah , Ghanem ,; Lochin , Emmanuel; Masucci , Antonia; Montpetit , Marie-Jose; Pedersen , Morten V.; Peralta , Goiuri; Roca , Vincent; Paresh , Saxena; Sivakumar , Senthil
2017-01-01
Internet Research Task Force - Working document of the Network Coding Research Group (NWCRG), draft-irtf-nwcrg-network-coding-taxonomy-05 (work in progress), https://datatracker.ietf.org/doc/draft-irtf-nwcrg-network-coding-taxonomy/; This document summarizes a recommended terminology for Network Coding concepts and constructs. It provides a comprehensive set of terms with unique names in order to avoid ambiguities in future Network Coding IRTF and IETF documents. This document is intended to ...
Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark
2012-01-01
A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…