The discrete dipole approximation code DDscat.C++: features, limitations and plans
Choliy, V. Ya.
2013-08-01
We present new freely available open-source C++ software for the numerical solution of electromagnetic wave absorption and scattering problems within the Discrete Dipole Approximation paradigm. The code is based upon the well-known, free Fortran-90 code DDSCAT by B. Draine and P. Flatau. Started as a teaching project, the presented code DDscat.C++ differs from the parent code DDSCAT in a number of features that are natural in C++ but quite rare in Fortran. This article introduces the new code, explains its features, presents timing information, and outlines plans for further development.
User Guide for the Discrete Dipole Approximation Code DDSCAT 7.1
Draine, B T
2010-01-01
DDSCAT 7.1 is an open-source Fortran-90 software package applying the discrete dipole approximation to calculate scattering and absorption of electromagnetic waves by targets with arbitrary geometries and complex refractive index. The targets may be isolated entities (e.g., dust particles), but may also be 1-d or 2-d periodic arrays of "target unit cells", allowing calculation of absorption, scattering, and electric fields around arrays of nanostructures. The theory of the DDA and its implementation in DDSCAT is presented in Draine (1988) and Draine & Flatau (1994), and its extension to periodic structures (and near-field calculations) in Draine & Flatau (2008). DDSCAT 7.1 includes support for MPI, OpenMP, and the Intel Math Kernel Library (MKL). DDSCAT supports calculations for a variety of target geometries. Target materials may be both inhomogeneous and anisotropic. It is straightforward for the user to "import" arbitrary target geometries into the code. DDSCAT automatically calculates total cross ...
User Guide for the Discrete Dipole Approximation Code DDSCAT 7.0
Draine, B T
2008-01-01
DDSCAT 7.0 is an open-source Fortran-90 software package applying the discrete dipole approximation to calculate scattering and absorption of electromagnetic waves by targets with arbitrary geometries and complex refractive index. The targets may be isolated entities (e.g., dust particles), but may also be 1-d or 2-d periodic arrays of "target unit cells", allowing calculation of absorption, scattering, and electric fields around arrays of nanostructures. The theory of the DDA and its implementation in DDSCAT is presented in Draine (1988) and Draine & Flatau (1994), and its extension to periodic structures (and near-field calculations) in Draine & Flatau (2009). DDSCAT 7.0 includes support for MPI, OpenMP, and the Intel Math Kernel Library (MKL). DDSCAT supports calculations for a variety of target geometries. Target materials may be both inhomogeneous and anisotropic. It is straightforward for the user to "import" arbitrary target geometries into the code. DDSCAT automatically calculates total cross ...
User Guide for the Discrete Dipole Approximation Code DDSCAT (Version 5a10)
Draine, B T; Flatau, Piotr J.
2000-01-01
DDSCAT.5a is a freely available software package which applies the "discrete dipole approximation" (DDA) to calculate scattering and absorption of electromagnetic waves by targets with arbitrary geometries and complex refractive index. The DDA approximates the target by an array of polarizable points. DDSCAT.5a requires that these polarizable points be located on a cubic lattice. DDSCAT.5a10 allows accurate calculations of electromagnetic scattering from targets with "size parameters" 2 pi a/lambda < 15 provided the refractive index m is not large compared to unity (|m-1| < 1). The DDSCAT package is written in Fortran and is highly portable. The program supports calculations for a variety of target geometries (e.g., ellipsoids, regular tetrahedra, rectangular solids, finite cylinders, hexagonal prisms, etc.). Target materials may be both inhomogeneous and anisotropic. It is straightforward for the user to import arbitrary target geometries into the code, and relatively straightforward to add new target ...
User Guide for the Discrete Dipole Approximation Code DDSCAT 7.3
Draine, B T
2013-01-01
DDSCAT 7.3 is an open-source Fortran-90 software package applying the discrete dipole approximation to calculate scattering and absorption of electromagnetic waves by targets with arbitrary geometries and complex refractive index. The targets may be isolated entities (e.g., dust particles), but may also be 1-d or 2-d periodic arrays of "target unit cells", allowing calculation of absorption, scattering, and electric fields around arrays of nanostructures. The theory of the DDA and its implementation in DDSCAT is presented in Draine (1988) and Draine & Flatau (1994), and its extension to periodic structures in Draine & Flatau (2008), and efficient near-field calculations in Flatau & Draine (2012). DDSCAT 7.3 includes support for MPI, OpenMP, and the Intel Math Kernel Library (MKL). DDSCAT supports calculations for a variety of target geometries. Target materials may be both inhomogeneous and anisotropic. It is straightforward for the user to "import" arbitrary target geometries into the code. DDSCA...
User Guide for the Discrete Dipole Approximation Code DDSCAT.6.0
Draine, B T
2003-01-01
DDSCAT.6.0 is a freely available software package (http://www.astro.princeton.edu/~draine/DDSCAT.6.0.html) which applies the "discrete dipole approximation" (DDA) to calculate scattering and absorption of electromagnetic waves by targets with arbitrary geometries and complex refractive index. DDSCAT.6.0 allows accurate calculations of electromagnetic scattering from targets with "size parameters" 2*pi*a/lambda < 15 provided the refractive index m is not large compared to unity (|m-1| < 1). DDSCAT.6.0 includes the option of using the FFTW (Fastest Fourier Transform in the West) package. DDSCAT.6.0 also includes MPI support, permitting parallel calculations on multiprocessor systems. The DDSCAT package is written in Fortran and is highly portable. The program supports calculations for a variety of target geometries (e.g., ellipsoids, regular tetrahedra, rectangular solids, finite cylinders, hexagonal prisms, etc.). Target materials may be both inhomogeneous and anisotropic. It is straightforward for the use...
User Guide for the Discrete Dipole Approximation Code DDSCAT 6.1
Draine, B T; Draine, Bruce T.; Flatau, Piotr J.
2004-01-01
DDSCAT 6.1 is a software package which applies the discrete dipole approximation (DDA) to calculate scattering and absorption of electromagnetic waves by targets with arbitrary geometries and complex refractive index. DDSCAT 6.1 allows accurate calculations of electromagnetic scattering from targets with size parameters 2 pi a_eff/lambda < 15 provided the refractive index m is not large compared to unity (|m-1| < 2). DDSCAT 6.1 includes support for MPI and FFTW. We also make available a "plain" distribution of DDSCAT 6.1 that does not include support for MPI, FFTW, or netCDF, but is much simpler to install than the full distribution. The DDSCAT package is written in Fortran and is highly portable. The program supports calculations for a variety of target geometries (e.g., ellipsoids, regular tetrahedra, rectangular solids, finite cylinders, hexagonal prisms, etc.). Target materials may be both inhomogeneous and anisotropic. It is straightforward for the user to import arbitrary target geometries into th...
User Guide for the Discrete Dipole Approximation Code DDSCAT 7.2
Draine, Bruce T
2012-01-01
DDSCAT 7.2 is a freely available open-source Fortran-90 software package applying the discrete dipole approximation (DDA) to calculate scattering and absorption of electromagnetic waves by targets with arbitrary geometries and complex refractive index. The targets may be isolated entities (e.g., dust particles), but may also be 1-d or 2-d periodic arrays of "target unit cells", which can be used to study absorption, scattering, and electric fields around arrays of nanostructures. The DDA approximates the target by an array of polarizable points. The theory of the DDA and its implementation in DDSCAT is presented in Draine (1988) and Draine & Flatau (1994), and its extension to periodic structures in Draine & Flatau (2008). Efficient near-field calculations are enabled following Flatau & Draine (2012). DDSCAT 7.2 allows accurate calculations of electromagnetic scattering from targets with size parameters 2*pi*aeff/lambda < 25 provided the refractive index m is not large compared to unity (|m-1| ...
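As an aside, the rule-of-thumb applicability criteria quoted in the DDSCAT abstracts above (size parameter 2*pi*a_eff/lambda below some bound, |m-1| not large compared to unity) can be sketched numerically. The thresholds below are illustrative parameters, not official limits of any particular code version:

```python
import math

def dda_validity(aeff_um, wavelength_um, m, x_max=25.0, dm_max=1.0):
    """Rule-of-thumb DDA applicability check, as quoted in the DDSCAT
    user guides: size parameter x = 2*pi*a_eff/lambda below x_max and
    |m - 1| below dm_max. Thresholds differ between code versions, so
    they are left as parameters here (illustrative defaults only)."""
    x = 2.0 * math.pi * aeff_um / wavelength_um
    applicable = (x < x_max) and (abs(m - 1.0) < dm_max)
    return x, applicable

# Example: a grain of 0.1 um effective radius at 0.55 um wavelength,
# with refractive index m = 1.55 + 0.01j
x, ok = dda_validity(0.1, 0.55, complex(1.55, 0.01))
```

For this example x is about 1.14, well inside the quoted regime, so the DDA rule of thumb is satisfied.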
DDscat.C++ User and programmer guide
Choliy, Vasyl
2014-01-01
DDscat.C++ 7.3.0 is a freely available open-source C++ software package applying the "discrete dipole approximation" (DDA) to calculate scattering and absorption of electromagnetic waves by targets with arbitrary geometries and complex refractive index. DDscat.C++ is a clone of the well-known DDSCAT Fortran-90 software, which we refer to as the parent code in this document. Versions 7.3.0 of both codes have identical functionality but quite different implementations. Started as a teaching project, the DDscat.C++ code differs from the parent code DDSCAT in programming techniques and features that are natural in C++ but quite rare in Fortran. As DDscat.C++ in its current version is just a clone, usage of DDscat.C++ for electromagnetic calculations is the same as for DDSCAT. Please refer to "User Guide for the Discrete Dipole Approximation Code DDSCAT 7.3" to start using the code(s). This document consists of two parts. In the first part we present a Quick start guide for users who want to begin to use the c...
Low rank approximations for the DEPOSIT computer code
Litsarev, Mikhail; Oseledets, Ivan
2014-01-01
We present an efficient technique based on low-rank separated approximations for the computation of three-dimensional integrals in the computer code DEPOSIT that describes ion-atomic collision processes. Implementation of this technique decreases the total computational time by a factor of 1000. The general concept can be applied to more complicated models.
Approximate Quantum Error-Correcting Codes and Secret Sharing Schemes
Crepeau, Claude; Gottesman, Daniel; Smith, Adam
2005-01-01
It is a standard result in the theory of quantum error-correcting codes that no code of length n can fix more than n/4 arbitrary errors, regardless of the dimension of the coding and encoded Hilbert spaces. However, this bound only applies to codes which recover the message exactly. Naively, one might expect that correcting errors to very high fidelity would only allow small violations of this bound. This intuition is incorrect: in this paper we describe quantum error-correcting codes capable...
Quantum universal coding protocols and universal approximation of multi-copy states
We have constructed universal codes for quantum lossless source coding and classical-quantum channel coding. In this construction, we essentially employ group representation theory. In order to treat quantum lossless source coding, universal approximation of multi-copy states is discussed in terms of the quantum relative entropy.
Jégou, Hervé; Douze, Matthijs; Schmid, Cordelia
2009-01-01
We propose an approximate nearest neighbor search method based on quantization. It uses, in particular, a product quantizer to produce short codes and corresponding distance estimators approximating the Euclidean distance between the original vectors. The method is advantageously used in an asymmetric manner, by computing the distance between a vector and a code, unlike competing techniques such as spectral hashing that only compare codes. Our approach approximates the Euclidean distance based on ...
The open-source beam-splitting code is described, which implements the geometric-optics approximation to light scattering by convex faceted particles. This code is written in C++ as a library which can be easily applied to a particular light scattering problem. The code uses only standard components, which makes it a cross-platform solution and provides compatibility with popular Integrated Development Environments (IDEs). The included example of solving light scattering by a randomly oriented ice crystal is written using Qt 5.1, and is consequently a cross-platform solution, too. Both physical and computational aspects of the beam-splitting algorithm are discussed. The computational speed of the beam-splitting code is considerably higher than that of conventional ray-tracing codes. A comparison of the phase matrix as computed by our code with the ray-tracing code by A. Macke shows excellent agreement. Highlights: • The beam-splitting code is presented as open-source software. • Both physical and computational aspects of the code are discussed. • Computational speed of the code is higher than that of ray-tracing codes. • A comparison with Macke's ray-tracing code shows excellent agreement.
Distributed Successive Approximation Coding using Broadcast Advantage: The Two-Encoder Case
Chen, Zichong; Vetterli, Martin
2010-01-01
Traditional distributed source coding rarely considers the possible link between separate encoders. However, the broadcast nature of wireless communication in sensor networks provides a free gossip mechanism which can be used to simplify encoding/decoding and reduce transmission power. Using this broadcast advantage, we present a new two-encoder scheme which imitates the ping-pong game and has a successive approximation structure. For the quadratic Gaussian case, we prove that this scheme is successively refinable on the {sum-rate, distortion pair} surface, which is characterized by the rate-distortion region of the distributed two-encoder source coding. A potential energy saving over conventional distributed coding is also illustrated. This ping-pong distributed coding idea can be extended to the multiple encoder case and provides the theoretical foundation for a new class of distributed image coding method in wireless scenarios.
DESIGN OF LDPC-CODED BICM USING A SEMI-GAUSSIAN APPROXIMATION
Huang Jie; Zhang Fan; Zhu Jinkang
2007-01-01
This paper investigates the analysis and design of Low-Density Parity-Check (LDPC) coded Bit-Interleaved Coded Modulation (BICM) over the Additive White Gaussian Noise (AWGN) channel. It focuses on Gray-labeled 8-ary Phase-Shift-Keying (8PSK) modulation and employs a Maximum A Posteriori (MAP) symbol-to-bit metric calculator at the receiver. An equivalent model of a BICM communication channel with ideal interleaving is presented. The probability distribution function of log-likelihood ratio messages from the MAP receiver can be approximated by a mixture of symmetric Gaussian densities. As a result, a semi-Gaussian approximation can be used to analyze the decoder. Extrinsic information transfer charts are employed to describe the convergence behavior of the LDPC decoder. The design of irregular LDPC codes reduces to a linear programming problem on a two-dimensional variable edge-degree distribution. This method allows irregular code design in a wider range of rates without any limit on the maximum node degree and can be used to design irregular codes having rates varying from 0.5275 to 0.9099. The designed convergence thresholds are only a few tenths, even a few hundredths, of a decibel from the capacity limits. It is shown by Monte Carlo simulations that, when the block length is 30,000, these codes operate about 0.62-0.75 dB from the capacity limit at a bit error rate of 10^-8.
New Density Evolution Approximation for LDPC and Multi-Edge Type LDPC Codes
Jayasooriya, Sachini; Shirvanimoghaddam, Mahyar; Ong, Lawrence; Lechner, Gottfried; Johnson, Sarah J.
2016-01-01
This paper considers density evolution for low-density parity-check (LDPC) and multi-edge type low-density parity-check (MET-LDPC) codes over the binary input additive white Gaussian noise channel. We first analyze three single-parameter Gaussian approximations for density evolution and discuss their accuracy under several conditions, namely at low rates, with punctured and degree-one variable nodes. We observe that the assumption of symmetric Gaussian distribution for the density-evolution mes...
BILAM: a composite laminate failure-analysis code using bilinear stress-strain approximations
McLaughlin, P.V. Jr.; Dasgupta, A.; Chun, Y.W.
1980-10-01
The BILAM code which uses constant strain laminate analysis to generate in-plane load/deformation or stress/strain history of composite laminates to the point of laminate failure is described. The program uses bilinear stress-strain curves to model layer stress-strain behavior. Composite laminates are used for flywheels. The use of this computer code will help to develop data on the behavior of fiber composite materials which can be used by flywheel designers. In this program the stress-strain curves are modelled by assuming linear response in axial tension while using bilinear approximations (2 linear segments) for stress-strain response to axial compressive, transverse tensile, transverse compressive and axial shear loadings. It should be noted that the program attempts to empirically simulate the effects of the phenomena which cause nonlinear stress-strain behavior, instead of mathematically modelling the micromechanics involved. This code, therefore, performs a bilinear laminate analysis, and, in conjunction with several user-defined failure interaction criteria, is designed to provide sequential information on all layer failures up to and including the first fiber failure. The modus operandi is described. Code BILAM can be used to: predict the load-deformation/stress-strain behavior of a composite laminate subjected to a given combination of in-plane loads, and make analytical predictions of laminate strength.
The neutron noise, induced by a rod manoeuvring experiment in a pressurized water reactor, has been calculated by the incore fuel management code SIMULATE. The space- and frequency-dependent noise in the thermal group was calculated through the adiabatic approximation in three dimensions and two-group theory, with the spatial resolution of the nodal model underlying the SIMULATE algorithm. The calculated spatial noise profiles were interpreted on physical terms. They were also compared with model calculations in a 2-D one-group model, where various approximations as well as the full space-dependent response could be calculated. The adiabatic results obtained with SIMULATE can be regarded as reliable for sub-plateau frequencies (below 0.1 Hz). (orig.)
The discrete-dipole-approximation code ADDA: Capabilities and known limitations
The open-source code ADDA is described, which implements the discrete dipole approximation (DDA), a method to simulate light scattering by finite 3D objects of arbitrary shape and composition. Besides standard sequential execution, ADDA can run on a multiprocessor distributed-memory system, parallelizing a single DDA calculation. Hence the size parameter of the scatterer is in principle limited only by total available memory and computational speed. ADDA is written in C99 and is highly portable. It provides full control over the scattering geometry (particle morphology and orientation, and incident beam) and allows one to calculate a wide variety of integral and angle-resolved scattering quantities (cross sections, the Mueller matrix, etc.). Moreover, ADDA incorporates a range of state-of-the-art DDA improvements, aimed at increasing the accuracy and computational speed of the method. We discuss both physical and computational aspects of the DDA simulations and provide a practical introduction into performing such simulations with the ADDA code. We also present several simulation results, in particular, for a sphere with size parameter 320 (100-wavelength diameter) and refractive index 1.05.
Approximate quantum error correction: Optimal codes for independent and correlated errors
The reversibility of open system dynamics in practice depends on a separation of probability regimes in which high-probability errors are corrected at the expense of leaving lower-probability errors uncorrected whenever these occur, i.e. correcting only errors on single qubits in a quantum code. However, several important quantum information processing scenarios are not describable by a neat separation of probability regimes, and we investigate codes for optimal information protection when this is the case. We use entanglement dynamics to compare and evaluate the performance of different codes and present optimal codes for full noisy quantum channels in terms of minimum deviation from perfect correctability. We present N-qubit inequalities governing optimal codes for different probability regimes of errors and give explicit examples of significant improvement for some standard cases.
Schmit, L. A.; Miura, H.
1975-01-01
The creation of an efficient automated capability for minimum weight design of structures is reported. The ACCESS 1 computer program combines finite element analysis techniques and mathematical programming algorithms using an innovative collection of approximation concepts. Design variable linking, constraint deletion techniques and approximate analysis methods are used to generate a sequence of small explicit mathematical programming problems which retain the essential features of the design problem. Organization of the finite element analysis is carefully matched to the design optimization task. The efficiency of the ACCESS 1 program is demonstrated by giving results for several example problems.
A spatially coupled transverse leakage approximation in 2-D and 3-D Cartesian geometry is developed. A fundamental spatially coupled expansion generates a new set of unknowns, the cross terms, which may be determined by continuity conditions on the flux at node vertices (2-D) or on the averaged flux at node edges (3-D). The fundamental expansion is compatible with a TIP Legendre approximation, and the transverse leakages are obtained as a by-product from the cross terms. Two benchmark problems show the continuity and accuracy of the solutions. (author)
Approximate information capacity of the perfect integrate-and-fire neuron using the temporal code
Košťál, Lubomír
2012-01-01
Roč. 1434, JAN 24 (2012), s. 136-141. ISSN 0006-8993. [International Workshop on Neural Coding. Limassol, 29.10.2010-03.11.2010] R&D Projects: GA MŠk(CZ) LC554; GA ČR(CZ) GAP103/11/0282 Institutional research plan: CEZ:AV0Z50110509 Keywords: integrate-and-fire neuron * information capacity Subject RIV: FH - Neurology Impact factor: 2.879, year: 2012
ACCESS-2: Approximation Concepts Code for Efficient Structural Synthesis, user's guide
Miura, H.; Schmit, L. A., Jr.
1978-01-01
A user's guide is presented for the ACCESS-2 computer program. ACCESS-2 is a research oriented program which implements a collection of approximation concepts to achieve excellent efficiency in structural synthesis. The finite element method is used for structural analysis and general mathematical programming algorithms are applied in the design optimization procedure.
Upgraded Approximation of Non-Binary Alphabets for Polar Code Construction
Ghayoori, Arash; Gulliver, T. Aaron
2013-01-01
An algorithm is presented for approximating a single-user channel with a prime input alphabet size. The result is an upgraded version of the channel with a reduced output alphabet size. It is shown that this algorithm can be used to reduce the output alphabet size to the input alphabet size in most cases.
ACCESS 3. Approximation concepts code for efficient structural synthesis: User's guide
Fleury, C.; Schmit, L. A., Jr.
1980-01-01
A user's guide is presented for ACCESS-3, a research oriented program which combines dual methods and a collection of approximation concepts to achieve excellent efficiency in structural synthesis. The finite element method is used for structural analysis and dual algorithms of mathematical programming are applied in the design optimization procedure. This program retains all of the ACCESS-2 capabilities and the data preparation formats are fully compatible. Four distinct optimizer options were added: interior point penalty function method (NEWSUMT); second order primal projection method (PRIMAL2); second order Newton-type dual method (DUAL2); and first order gradient projection-type dual method (DUAL1). A pure discrete and mixed continuous-discrete design variable capability, and zero order approximation of the stress constraints are also included.
Miura, H.; Schmit, L. A., Jr.
1976-01-01
The program documentation and user's guide for the ACCESS-1 computer program is presented. ACCESS-1 is a research oriented program which implements a collection of approximation concepts to achieve excellent efficiency in structural synthesis. The finite element method is used for structural analysis and general mathematical programming algorithms are applied in the design optimization procedure. Implementation of the computer program, preparation of input data and basic program structure are described, and three illustrative examples are given.
A computer code for beam optics calculation--third order approximation
LÜ Jianqin; LI Jinhai
2006-01-01
To calculate the beam transport in ion optical systems accurately, a beam dynamics computer program of third-order approximation has been developed. Many conventional optical elements are incorporated in the program. Particle distributions of uniform type or Gaussian type in the (x, y, z) 3D ellipses can be selected by the users. Optimization procedures are provided to make the calculations reasonable and fast. The calculated results can be graphically displayed on the computer monitor.
Irradiation Experimental Area of TechnoFusion will emulate the extreme irradiation fusion conditions in materials by means of three ion accelerators: one used for self-implanting heavy ions (Fe, Si, C,...) to emulate the displacement damage induced by fusion neutrons and the other two for light ions (H and He) to emulate the transmutation induced by fusion neutrons. This laboratory will play an essential role in the selection of functional materials for the DEMO reactor, since it will allow reproducing the effects of neutron radiation on fusion materials. Ion irradiation produces little or no residual radioactivity, allowing handling of samples without the need for special precautions. Currently, two different methods are used to calculate the primary displacement damage by neutron irradiation or by ion irradiation. On one hand, the displacement damage doses induced by neutrons are calculated considering the NRT model based on the electronic screening theory of Lindhard. This methodology has been commonly used since 1975. On the other hand, the experimental research community commonly uses the SRIM code to calculate the primary displacement damage dose induced by ion irradiation. Therefore, the two methodologies of primary displacement damage calculation have nothing in common. However, if we want to design ion irradiation experiments capable of emulating the neutron fusion effect in materials, it is necessary to develop comparable methodologies of damage calculation for both kinds of radiation. This would allow us to better define the ion irradiation parameters (ion species, current, ion energy, dose, etc.) required to emulate a specific neutron irradiation environment. Therefore, our main objective was to find a way to calculate the primary displacement damage induced by neutron irradiation and by ion irradiation starting from the same point, that is, the PKA spectrum.
In order to emulate the neutron irradiation that would prevail under fusion conditions, two approaches are contemplated: a) on
Recklessly Approximate Sparse Coding
Denil, Misha; De Freitas, Nando
2012-01-01
It has recently been observed that certain extremely simple feature encoding techniques are able to achieve state of the art performance on several standard image classification benchmarks including deep belief networks, convolutional nets, factored RBMs, mcRBMs, convolutional RBMs, sparse autoencoders and several others. Moreover, these "triangle" or "soft threshold" encodings are extremely efficient to compute. Several intuitive arguments have been put forward to explain this remarkable p...
B. Scarnato
2012-10-01
According to recent studies, internal mixing of black carbon (BC) with other aerosol materials in the atmosphere alters its aggregate shape, absorption of solar radiation, and radiative forcing. These mixing state effects are not yet fully understood. In this study, we characterize the morphology and mixing state of bare BC and BC internally mixed with sodium chloride (NaCl) using electron microscopy and examine the sensitivity of optical properties to BC mixing state and aggregate morphology using a discrete dipole approximation model (DDSCAT). DDSCAT predicts a higher mass absorption coefficient, lower single scattering albedo (SSA), and higher absorption Angstrom exponent (AAE) for bare BC aggregates that are lacy rather than compact. Predicted values of SSA at 550 nm range between 0.18 and 0.27 for lacy and compact aggregates, respectively, in agreement with reported experimental values of 0.25 ± 0.05. The variation in absorption with wavelength does not adhere precisely to a power law relationship over the 200 to 1000 nm range. Consequently, AAE values depend on the wavelength region over which they are computed. In the 300 to 550 nm range, AAE values ranged in this study from 0.70 for compact to 0.95 for lacy aggregates. The SSA of BC internally mixed with NaCl (100-300 nm in radius) is higher than for bare BC and increases with the embedding in the NaCl. Internally mixed BC SSA values decrease in the 200-400 nm wavelength range, a feature also common to the optical properties of dust and organics. Linear polarization features are also predicted in DDSCAT and are dependent on particle morphology. Bare BC (with a radius of 80 nm) presents a bell-shaped feature in the linear polarization, which is characteristic of the Rayleigh regime (for particles smaller than the wavelength of incident radiation). When BC is internally mixed with NaCl (100-300 nm in radius), strong depolarization features for near-VIS incident radiation are evident
B. V. Scarnato
2013-05-01
According to recent studies, internal mixing of black carbon (BC) with other aerosol materials in the atmosphere alters its aggregate shape, absorption of solar radiation, and radiative forcing. These mixing state effects are not yet fully understood. In this study, we characterize the morphology and mixing state of bare BC and BC internally mixed with sodium chloride (NaCl) using electron microscopy and examine the sensitivity of optical properties to BC mixing state and aggregate morphology using a discrete dipole approximation model (DDSCAT). DDSCAT is flexible in simulating the geometry and refractive index of particle aggregates. DDSCAT predicts a higher mass absorption coefficient (MAC), lower single scattering albedo (SSA), and higher absorption Angstrom exponent (AAE) for bare BC aggregates that are lacy rather than compact. Predicted values of SSA at 550 nm range between 0.16 and 0.27 for lacy and compact aggregates, respectively, in agreement with reported experimental values of 0.25 ± 0.05. The variation in absorption with wavelength does not adhere precisely to a power law relationship over the 200 to 1000 nm range. Consequently, AAE values depend on the wavelength region over which they are computed. The MAC of BC (averaged over the 200-1000 nm range) is amplified when internally mixed with NaCl (100-300 nm in radius) by factors ranging from 1.0 for lacy BC aggregates partially immersed in NaCl to 2.2 for compact BC aggregates fully immersed in NaCl. The SSA of BC internally mixed with NaCl is higher than for bare BC and increases with the embedding in the NaCl. Internally mixed BC SSA values decrease in the 200-400 nm wavelength range, a feature also common to the optical properties of dust and organics. Linear polarization features are also predicted in DDSCAT and are dependent on particle size and morphology. This study shows that DDSCAT predicts complex morphology and mixing state dependent aerosol optical properties that have
We compute the absorption efficiency (Qabs) of forsterite using the discrete dipole approximation in order to identify and describe what characteristics of crystal grain shape and size are important to the shape, peak location, and relative strength of spectral features in the 8-40 μm wavelength range. Using the DDSCAT code, we compute Qabs for non-spherical polyhedral grain shapes with aeff = 0.1 μm. The shape characteristics identified are (1) elongation/reduction along one of three crystallographic axes; (2) asymmetry, such that all three crystallographic axes are of different lengths; and (3) the presence of crystalline faces that are not parallel to a specific crystallographic axis, e.g., non-rectangular prisms and (di)pyramids. Elongation/reduction dominates the locations and shapes of spectral features near 10, 11, 16, 23.5, 27, and 33.5 μm, while asymmetry and tips are secondary shape effects. Increasing grain sizes (0.1-1.0 μm) shifts the 10 and 11 μm features systematically toward longer wavelengths and relative to the 11 μm feature increases the strengths and slightly broadens the longer wavelength features. Seven spectral shape classes are established for crystallographic a-, b-, and c-axes and include columnar and platelet shapes plus non-elongated or equant grain shapes. The spectral shape classes and the effects of grain size have practical application in identifying or excluding columnar, platelet, or equant forsterite grain shapes in astrophysical environs. Identification of the shape characteristics of forsterite from 8 to 40 μm spectra provides a potential means to probe the temperatures at which forsterite formed.
Aschwanden, Markus J
2016-01-01
In this work we provide an updated description of the Vertical Current Approximation Nonlinear Force-Free Field (VCA-NLFFF) code, which is designed to measure the evolution of the potential, nonpotential, free energies, and the dissipated magnetic energies during solar flares. This code provides a complementary and alternative method to existing traditional NLFFF codes. The chief advantages of the VCA-NLFFF code over traditional NLFFF codes are the circumvention of the unrealistic assumption of a force-free photosphere in the magnetic field extrapolation method, the capability to minimize the misalignment angles between observed coronal loops (or chromospheric fibril structures) and theoretical model field lines, as well as computational speed. In performance tests of the VCA-NLFFF code, by comparing with the NLFFF code of Wiegelmann (2004), we find agreement in the potential, nonpotential, and free energy within a factor of about 1.3, but the Wiegelmann code yields on average a factor of 2 lower flare en...
Daura-Oller, Elias; Cabré, Maria; Montero, Miguel A.; Paternáin, José L.; Romeu, Antoni
2009-01-01
In the present study, a positive training set of 30 known human imprinted gene coding regions are compared with a set of 72 randomly sampled human nonimprinted gene coding regions (negative training set) to identify genomic features common to human imprinted genes. The most important feature of the present work is its ability to use multivariate analysis to look at variation, at coding region DNA level, among imprinted and non-imprinted genes. There is a force affecting genomic parameters that appears through the use of the appropriate multivariate methods (principal components analysis (PCA) and quadratic discriminant analysis (QDA)) to analyse quantitative genomic data. We show that variables, such as CG content, [bp]% CpG islands, [bp]% Large Tandem Repeats, and [bp]% Simple Repeats, are able to distinguish coding regions of human imprinted genes. PMID:19360135
Barbier, Jean
2015-01-01
This thesis concerns the application of statistical physics methods and inference to sparse linear estimation problems. The main tools are graphical models and the approximate message-passing algorithm together with the cavity method. We also use the replica method from the statistical physics of disordered systems, which allows one to associate with the studied problems a cost function referred to as the free-entropy potential in physics. It allows one to predict the different phases of typical ...
The computer code block VENTURE, designed to solve multigroup neutronics problems with application of the finite-difference diffusion-theory approximation to neutron transport (or alternatively simple P1) in up to three-dimensional geometry is described. A variety of types of problems may be solved: the usual eigenvalue problem, a direct criticality search on the buckling, on a reciprocal velocity absorber (prompt mode), or on nuclide concentrations, or an indirect criticality search on nuclide concentrations, or on dimensions. First-order perturbation analysis capability is available at the macroscopic cross section level
Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.
1977-11-01
The report documents the computer code block VENTURE designed to solve multigroup neutronics problems with application of the finite-difference diffusion-theory approximation to neutron transport (or alternatively simple P1) in up to three-dimensional geometry. It uses and generates interface data files adopted in the cooperative effort sponsored by the Reactor Physics Branch of the Division of Reactor Research and Development of the Energy Research and Development Administration. Several different data handling procedures have been incorporated to provide considerable flexibility; it is possible to solve a wide variety of problems on a variety of computer configurations relatively efficiently.
Engel, D.; Klews, M.; Wunner, G.
2009-02-01
We have developed a new method for the fast computation of wavelengths and oscillator strengths for medium-Z atoms and ions, up to iron, at neutron star magnetic field strengths. The method is a parallelized Hartree-Fock approach in adiabatic approximation based on finite-element and B-spline techniques. It turns out that typically 15-20 finite elements are sufficient to calculate energies to within a relative accuracy of 10-5 in 4 or 5 iteration steps using B-splines of 6th order, with parallelization speed-ups of 20 on a 26-processor machine. Results have been obtained for the energies of the ground states and excited levels and for the transition strengths of astrophysically relevant atoms and ions in the range Z=2…26 in different ionization stages. Catalogue identifier: AECC_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3845 No. of bytes in distributed program, including test data, etc.: 27 989 Distribution format: tar.gz Programming language: MPI/Fortran 95 and Python Computer: Cluster of 1-26 HP Compaq dc5750 Operating system: Fedora 7 Has the code been vectorised or parallelized?: Yes RAM: 1 GByte Classification: 2.1 External routines: MPI/GFortran, LAPACK, PyLab/Matplotlib Nature of problem: Calculations of synthetic spectra [1] of strongly magnetized neutron stars are bedevilled by the lack of data for atoms in intense magnetic fields. While the behaviour of hydrogen and helium has been investigated in detail (see, e.g., [2]), complete and reliable data for heavier elements, in particular iron, are still missing. Since neutron stars are formed by the collapse of the iron cores of massive stars, it may be assumed that their atmospheres contain an iron plasma. Our objective is to fill the gap
Clinical coding. Code breakers.
Mathieson, Steve
2005-02-24
--The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification. --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships. PMID:15768716
Niven, Ivan
2008-01-01
This self-contained treatment originated as a series of lectures delivered to the Mathematical Association of America. It covers basic results on homogeneous approximation of real numbers; the analogue for complex numbers; basic results for nonhomogeneous approximation in the real case; the analogue for complex numbers; and fundamental properties of the multiples of an irrational number, for both the fractional and integral parts. The author refrains from the use of continued fractions and includes basic results in the complex case, a feature often neglected in favor of the real number discuss
Approximate Representations and Approximate Homomorphisms
Moore, Cristopher; Russell, Alexander
2010-01-01
Approximate algebraic structures play a defining role in arithmetic combinatorics and have found remarkable applications to basic questions in number theory and pseudorandomness. Here we study approximate representations of finite groups: functions f:G -> U_d such that Pr[f(xy) = f(x) f(y)] is large, or more generally Exp_{x,y} ||f(xy) - f(x)f(y)||^2 is small, where x and y are uniformly random elements of the group G and U_d denotes the unitary group of degree d. We bound these quantities i...
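The defect Exp_{x,y} ||f(xy) - f(x)f(y)||^2 can be evaluated directly for a small group. A toy sketch for the d = 1 case on the cyclic group Z_n (purely illustrative; the group, character, and perturbation are my choices, not from the paper):

```python
import cmath
import itertools

def defect(f, elements, mul):
    # E_{x,y} |f(xy) - f(x)f(y)|^2 over uniformly random x, y (d = 1 case)
    total = sum(abs(f(mul(x, y)) - f(x) * f(y)) ** 2
                for x, y in itertools.product(elements, repeat=2))
    return total / len(elements) ** 2

n = 12
elements = list(range(n))
mul = lambda x, y: (x + y) % n          # group law of Z_12

exact = lambda k: cmath.exp(2j * cmath.pi * k / n)                    # a true character
noisy = lambda k: cmath.exp(2j * cmath.pi * (k + 0.1 * (k % 3)) / n)  # perturbed version

print(defect(exact, elements, mul))   # essentially 0: an exact homomorphism
print(defect(noisy, elements, mul))   # small but nonzero: only approximately one
```

An exact one-dimensional representation gives zero defect (up to rounding), while the perturbed map is an "approximate representation" in the paper's sense.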
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated to systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
Schmidt, Wolfgang M
1980-01-01
"In 1970, at the U. of Colorado, the author delivered a course of lectures on his famous generalization, then just established, relating to Roth's theorem on rational approxi- mations to algebraic numbers. The present volume is an ex- panded and up-dated version of the original mimeographed notes on the course. As an introduction to the author's own remarkable achievements relating to the Thue-Siegel-Roth theory, the text can hardly be bettered and the tract can already be regarded as a classic in its field."(Bull.LMS) "Schmidt's work on approximations by algebraic numbers belongs to the deepest and most satisfactory parts of number theory. These notes give the best accessible way to learn the subject. ... this book is highly recommended." (Mededelingen van het Wiskundig Genootschap)
The computer code, POD, was developed to calculate angle-differential cross sections and analyzing powers for shape-elastic scattering in collisions of neutrons or light ions with a target nucleus. The cross sections are computed with the optical model. Angle-differential cross sections for neutron inelastic scattering can also be calculated with the distorted-wave Born approximation. The optical model potential parameters are the most essential inputs for those model computations. In this program, the cross sections and analyzing powers are obtained by using the existing local or global parameters. The parameters can also be inputted by users. In this report, the theoretical formulas, the computational methods, and the input parameters are explained. The sample inputs and outputs are also presented. (author)
A new approach to describing neutron spectra of deuteron-induced reactions in the Monte Carlo simulation for particle transport has been developed by combining the Intra-Nuclear Cascade of Liège (INCL) and the Distorted Wave Born Approximation (DWBA) calculation. We incorporated this combined method into the Particle and Heavy Ion Transport code System (PHITS) and applied it to estimate (d,xn) spectra on natLi, 9Be, and natC targets at incident energies ranging from 10 to 40 MeV. Double differential cross sections obtained by INCL and DWBA successfully reproduced broad peaks and discrete peaks, respectively, at the same energies as those observed in experimental data. Furthermore, an excellent agreement was observed between experimental data and PHITS-derived results using the combined method in thick target neutron yields over a wide range of neutron emission angles in the reactions. We also applied the new method to estimate (d,xp) spectra in the reactions, and discussed the validity for the proton emission spectra
Chalasani, P.; Saias, I. [Los Alamos National Lab., NM (United States); Jha, S. [Carnegie Mellon Univ., Pittsburgh, PA (United States)
1996-04-08
As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
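The binomial model the abstract refers to is easy to state in code. The sketch below prices a plain European put (a path-INdependent payoff, for which the 2^n paths collapse to n+1 terminal nodes); it is the standard Cox-Ross-Rubinstein construction, not the authors' algorithms for path-dependent or Asian options:

```python
import math

def binomial_european_put(S0, K, r, sigma, T, n):
    """Cox-Ross-Rubinstein binomial tree for a European put.

    The option value is the expected, time-discounted payoff over
    n-period stock price paths; because the payoff depends only on the
    terminal price, we can sum over the n+1 terminal nodes directly."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))       # up factor per period
    d = 1.0 / u                               # down factor per period
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    value = 0.0
    for k in range(n + 1):                    # k = number of up moves
        prob = math.comb(n, k) * p**k * (1 - p)**(n - k)
        ST = S0 * u**k * d**(n - k)
        value += prob * max(K - ST, 0.0)
    return math.exp(-r * T) * value

print(binomial_european_put(100, 100, 0.05, 0.2, 1.0, 200))  # near the Black-Scholes value of about 5.57
```

A path-dependent (e.g., Asian) payoff would instead require information from every step of each path, which is exactly why the paper's hardness and approximation results are needed.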
Diophantine approximation and badly approximable sets
Kristensen, S.; Thorn, R.; Velani, S.
2006-01-01
Let (X,d) be a metric space and (Omega, d) a compact subspace of X which supports a non-atomic finite measure m. We consider `natural' classes of badly approximable subsets of Omega. Loosely speaking, these consist of points in Omega which `stay clear' of some given set of points in X. The classical set Bad of `badly approximable' numbers in the theory of Diophantine approximation falls within our framework as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension...
A brief description of the AZIMUT code for calculating the neutron flux in a cluster cell is presented. The code takes into account the first and second azimuthal harmonics in the one-group P3 approximation and uses the heterogeneous approach. 2 refs
The Procions' code; Le code Procions
Deck, D.; Samba, G.
1994-12-19
This paper presents a new code to simulate plasmas generated by inertial confinement. This multi-species kinetic code makes no angular approximation for the ions and works in planar and spherical geometry. First, the physical model is presented, based on the Fokker-Planck equation. Then, the numerical model used to solve the Fokker-Planck operator in the Rosenbluth form is introduced. Finally, several numerical tests are presented. (TEC). 17 refs., 27 figs.
Bin Qin
2014-01-01
Relationships between fuzzy relations and fuzzy topologies are investigated in depth. The concept of fuzzy approximating spaces is introduced, and conditions under which a fuzzy topological space is a fuzzy approximating space are obtained.
Stochastic approximation: invited paper
Lai, Tze Leung
2003-01-01
Stochastic approximation, introduced by Robbins and Monro in 1951, has become an important and vibrant subject in optimization, control and signal processing. This paper reviews Robbins' contributions to stochastic approximation and gives an overview of several related developments.
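The Robbins-Monro scheme mentioned above can be stated in a few lines: to find a root of h(theta) when only noisy evaluations are available, iterate theta_{n+1} = theta_n - a_n H(theta_n, xi_n) with step sizes a_n = a/n. A toy sketch (the target function and noise model below are my own illustration):

```python
import random

def robbins_monro(noisy_h, theta0, steps=20000, a=1.0, seed=0):
    """Robbins-Monro (1951): locate theta* with h(theta*) = 0 from noisy
    evaluations H(theta, xi) of h, using diminishing steps a_n = a/n."""
    rng = random.Random(seed)
    theta = theta0
    for n in range(1, steps + 1):
        theta -= (a / n) * noisy_h(theta, rng)
    return theta

# Toy problem: h(theta) = theta - 2 observed with unit Gaussian noise.
noisy_h = lambda theta, rng: (theta - 2.0) + rng.gauss(0.0, 1.0)
print(robbins_monro(noisy_h, theta0=0.0))   # converges close to the root 2
```

The 1/n step sizes satisfy the classical conditions (sum diverges, sum of squares converges), which is what drives the averaging-out of the noise.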
Rasin, A
1994-01-01
We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.
Approximate iterative algorithms
Almudevar, Anthony Louis
2014-01-01
Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a
Approximation of distributed delays
Lu, Hao; Eberard, Damien; Simon, Jean-Pierre
2010-01-01
We address in this paper the approximation problem of distributed delays. Such elements are convolution operators with kernel having bounded support, and appear in the control of time-delay systems. From the rich literature on this topic, we propose a general methodology to achieve such an approximation. For this, we enclose the approximation problem in the graph topology, and work with the norm defined over the convolution Banach algebra. The class of rational approximates is described, and a constructive approximation is proposed. Analysis in time and frequency domains is provided. This methodology is illustrated on the stabilization control problem, for which simulations results show the effectiveness of the proposed methodology.
Sparse approximation with bases
2015-01-01
This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications. The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...
Ericson, Thomas
1993-01-01
Slepian's permutation codes are investigated in detail. In particular we optimize the initial vector and derive all dominating codes in dimension n ≤ 6. With the exception of the simplex and biorthogonal codes - which are always included as special cases of permutation codes - there are probably no further good codes in higher dimensions.
Yue Shihong; Zhang Kecun
2002-01-01
In a dot product space with a reproducing kernel (r.k.s.), a fuzzy system with estimates of the approximation errors is proposed, which overcomes the defect that existing fuzzy control systems find it difficult to estimate the error of approximation for a desired function, and keeps the characteristics of a fuzzy system as an inference approach. The structure of the new fuzzy approximator benefits from results obtained by other means.
Malvina Baica
1985-01-01
The author uses a new modification of the Jacobi-Perron Algorithm which holds for complex fields of any degree (abbr. ACF), and defines it as a Generalized Euclidean Algorithm (abbr. GEA) to approximate irrationals. This paper deals with approximation of irrationals of degree n=2,3,5. Though approximations of these irrationals in a variety of patterns are known, the results are new and practical, since an algorithmic method is used.
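For context, the classical one-dimensional ancestor of such algorithms is the continued-fraction (Euclidean) algorithm, whose convergents give best rational approximations to an irrational. This sketch shows that classical case only, not the paper's ACF/GEA:

```python
from fractions import Fraction
import math

def convergents(x, n):
    """First n continued-fraction convergents p/q of x via the classical
    Euclidean algorithm and the standard p, q recurrences."""
    out = []
    a = x
    p0, q0, p1, q1 = 0, 1, 1, 0          # seed values for the recurrences
    for _ in range(n):
        k = math.floor(a)                # next partial quotient
        p0, q0, p1, q1 = p1, q1, k * p1 + p0, k * q1 + q0
        out.append(Fraction(p1, q1))
        frac = a - k
        if frac == 0:
            break
        a = 1 / frac
    return out

for c in convergents(math.sqrt(2), 6):
    print(c)    # 1, 3/2, 7/5, 17/12, 41/29, 99/70
```

Each convergent of sqrt(2) roughly squares the accuracy of the previous denominator's scale, the behavior that generalized algorithms like the GEA aim to reproduce for higher-degree irrationals.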
Expectation Consistent Approximate Inference
Opper, Manfred; Winther, Ole
2005-01-01
We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability distributions which are made consistent on a set of moments and encode different features of the original intractable distribution. In this way we are able to use Gaussian approximations for models with ...
Approximation techniques for engineers
Komzsik, Louis
2006-01-01
Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data or continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.
Ordered cones and approximation
Keimel, Klaus
1992-01-01
This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.
Approximate Modified Policy Iteration
Scherrer, Bruno; Ghavamzadeh, Mohammad; Geist, Matthieu
2012-01-01
Modified policy iteration (MPI) is a dynamic programming (DP) algorithm that contains the two celebrated policy and value iteration methods. Despite its generality, MPI has not been thoroughly studied, especially its approximation form which is used when the state and/or action spaces are large or infinite. In this paper, we propose three approximate MPI (AMPI) algorithms that are extensions of the well-known approximate DP algorithms: fitted-value iteration, fitted-Q iteration, and classification-based policy iteration. We provide an error propagation analysis for AMPI that unifies those for approximate policy and value iteration. We also provide a finite-sample analysis for the classification-based implementation of AMPI (CBMPI), which is more general than (and in a sense contains) the analyses of the other AMPI algorithms presented. An interesting observation is that the MPI's parameter allows us to control the balance of errors (in value function approximation and in estimating the greedy policy) in the fina...
Approximations to toroidal harmonics
Toroidal harmonics P^1_{n-1/2}(cosh μ) and Q^1_{n-1/2}(cosh μ) are useful in solutions to Maxwell's equations in toroidal coordinates. In order to speed their computation, a set of approximations has been developed that is valid over the range 0 -10. The simple method used to determine the approximations is described. Relative error curves are also presented, obtained by comparing approximations to the more accurate values computed by direct summation of the hypergeometric series
Approximations in Inspection Planning
Engelund, S.; Sørensen, John Dalsgaard; Faber, M. H.; Bloch, Allan
2000-01-01
Planning of inspections of civil engineering structures may be performed within the framework of Bayesian decision analysis. The effort involved in a full Bayesian decision analysis is relatively large. Therefore, the actual inspection planning is usually performed using a number of approximations. One of the more important of these approximations is the assumption that all inspections will reveal no defects. Using this approximation the optimal inspection plan may be determined on the basis of conditional probabilities, i.e. the probability of failure given no defects have been found by the inspection. In this paper the quality of this approximation is investigated. The inspection planning is formulated both as a full Bayesian decision problem and on the basis of the assumption that the inspection will reveal no defects.
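The conditional probability at the heart of this approximation, the probability of failure given that an inspection found no defects, is a one-line Bayes computation. A toy sketch (all numbers below are invented for illustration, not from the paper):

```python
# Assumed toy inputs: prior defect probability, probability of detection
# (POD) for an existing defect, and failure probabilities with/without a defect.
p_defect = 0.05
pod = 0.9
p_fail_defect = 0.10
p_fail_no_defect = 0.001

# Bayes' rule: posterior defect probability given the inspection found nothing
p_no_find = p_defect * (1 - pod) + (1 - p_defect)
post_defect = p_defect * (1 - pod) / p_no_find

# Conditional probability of failure given no defects were found
p_fail = post_defect * p_fail_defect + (1 - post_defect) * p_fail_no_defect
print(post_defect, p_fail)
```

A "no findings" inspection sharply lowers the posterior defect probability, which is why conditioning on it changes the optimal inspection plan.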
The Karlqvist approximation revisited
Tannous, C.
2015-01-01
The Karlqvist approximation, signaling the historical beginning of magnetic recording head theory, is reviewed and compared to various approaches, progressing from Green's function and Fourier methods to conformal mapping, which obeys the Sommerfeld edge condition at angular points and leads to exact results.
Approximation Behooves Calibration
da Silva Ribeiro, André Manuel; Poulsen, Rolf
2013-01-01
Calibration based on an expansion approximation for option prices in the Heston stochastic volatility model gives stable, accurate, and fast results for S&P500-index option data over the period 2005–2009.
Gautschi, Walter; Rassias, Themistocles M
2011-01-01
Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg
Dutta, Soumitra
1988-01-01
Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise and reject the fuzziness of concepts in natural use and replace them with non-fuzzy scientific explicata by a process of precisiation. As an alternate to this approach, it has been suggested that rather than regard human reasoning processes as themselves approximating to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning is in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of our mental processes which enable us to carry out such intricate tasks in such apparently simple manner are not well understood. However, it is desirable to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain or unknown or dynamic domains.
Diophantine approximations on fractals
Einsiedler, Manfred; Shapira, Uri
2009-01-01
We exploit dynamical properties of diagonal actions to derive results in Diophantine approximations. In particular, we prove that the continued fraction expansion of almost any point on the middle third Cantor set (with respect to the natural measure) contains all finite patterns (hence is well approximable). Similarly, we show that for a variety of fractals in [0,1]^2, possessing some symmetry, almost any point is not Dirichlet improvable (hence is well approximable) and has property C (after Cassels). We then settle by similar methods a conjecture of M. Boshernitzan saying that there are no irrational numbers x in the unit interval such that the continued fraction expansions of {nx mod 1 : n is a natural number} are uniformly eventually bounded.
Covariant approximation averaging
Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2014-01-01
We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
Accuracy of Approximate Eigenstates
Lucha, Wolfgang
2000-01-01
Besides perturbation theory, which requires, of course, the knowledge of the exact unperturbed solution, variational techniques represent the main tool for any investigation of the eigenvalue problem of some semibounded operator H in quantum theory. For a reasonable choice of the employed trial subspace of the domain of H, the lowest eigenvalues of H usually can be located with acceptable precision whereas the trial-subspace vectors corresponding to these eigenvalues approximate, in general, the exact eigenstates of H with much less accuracy. Accordingly, various measures for the accuracy of the approximate eigenstates derived by variational techniques are scrutinized. In particular, the matrix elements of the commutator of the operator H and (suitably chosen) different operators, with respect to degenerate approximate eigenstates of H obtained by some variational method, are proposed here as new criteria for the accuracy of variational eigenstates. These considerations are applied to that Hamiltonian the eig...
Synthesis of approximation errors
Bareiss, E.H.; Michel, P.
1977-07-01
A method is developed for the synthesis of the error in approximations in the large of regular and irregular functions. The synthesis uses a small class of dimensionless elementary error functions which are weighted by the coefficients of the expansion of the regular part of the function. The question of whether a computer can determine the analytical nature of a solution by numerical methods is answered. It is shown that continuous least-squares approximations of irregular functions can be replaced by discrete least-squares approximations, and how to select the discrete points. The elementary error functions are used to show how the classical convergence criteria can be markedly improved. Eight numerical examples, 30 figures, and 74 tables are included.
Towards a Unified Framework for Approximate Quantum Error Correction
Mandayam, Prabha
2012-01-01
Operator quantum error correction extends the standard formalism of quantum error correction (QEC) to codes in which only a subsystem within a subspace of states is used to store information in a noise-resilient fashion. Motivated by recent work on approximate QEC, which has opened up the possibility of constructing subspace codes beyond the framework of perfect error correction, we investigate the problem of {\\it approximate} operator quantum error correction (AOQEC). We demonstrate easily checkable sufficient conditions for the existence of AOQEC codes. Furthermore, for certain classes of noise processes, we prove the efficacy of the transpose channel as a simple-to-construct recovery map that works nearly as well as the optimal recovery channel, with optimality defined in terms of worst-case fidelity over all code states. This work generalizes our earlier approach \\cite{aqecPRA} of using the transpose channel for approximate correction of subspace codes to the case of subsystem codes, and brings us closer ...
White, Martin
2014-01-01
This year marks the 100th anniversary of the birth of Yakov Zel'dovich. Amongst his many legacies is the Zel'dovich approximation for the growth of large-scale structure, which remains one of the most successful and insightful analytic models of structure formation. We use the Zel'dovich approximation to compute the two-point function of the matter and biased tracers, and compare to the results of N-body simulations and other Lagrangian perturbation theories. We show that Lagrangian perturbation theories converge well and that the Zel'dovich approximation provides a good fit to the N-body results except for the quadrupole moment of the halo correlation function. We extend the calculation of halo bias to 3rd order and also consider non-local biasing schemes, none of which remove the discrepancy. We argue that a part of the discrepancy owes to an incorrect prediction of inter-halo velocity correlations. We use the Zel'dovich approximation to compute the ingredients of the Gaussian streaming model and show that ...
Prestack wavefield approximations
Alkhalifah, Tariq
2013-09-01
The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point; in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations that are free of such singularities and are highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.
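The singularity in question is visible already in the constant-velocity form of the standard DSR dispersion relation (notation chosen here for orientation):

```latex
k_z \;=\; \sqrt{\frac{\omega^{2}}{v^{2}} - k_s^{2}}
     \;+\; \sqrt{\frac{\omega^{2}}{v^{2}} - k_r^{2}},
```

where k_s and k_r are the horizontal wavenumbers on the source and receiver sides. As either radicand approaches zero (horizontally propagating waves), the square roots lose analyticity, which is exactly what Padé forms with a lower-order denominator are designed to regularize.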
Madsen, Rasmus Elsborg
2005-01-01
The Dirichlet compound multinomial (DCM), which has recently been shown to be well suited for modeling word burstiness in documents, is investigated here. A number of conceptual explanations that account for these recent results are provided. An exponential family approximation of the DCM that...
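For context (the standard textbook form, not quoted from the truncated abstract), the DCM likelihood of a count vector x = (x_1, ..., x_W) with n = Σ_w x_w is

```latex
p(\mathbf{x}\mid\boldsymbol{\alpha})
  = \frac{n!}{\prod_w x_w!}\,
    \frac{\Gamma(\alpha)}{\Gamma(\alpha+n)}
    \prod_w \frac{\Gamma(\alpha_w + x_w)}{\Gamma(\alpha_w)},
  \qquad \alpha = \sum_w \alpha_w ,
```

obtained by integrating a multinomial over a Dirichlet prior; the heavy tails of this mixture are what capture the burstiness of repeated words.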
Gersho, Allen
1990-05-01
Recent advances in algorithms and techniques for speech coding now permit high-quality voice reproduction at remarkably low bit rates. The advent of powerful single-chip signal processors has made it cost effective to implement these new and sophisticated speech coding algorithms for many important applications in voice communication and storage. Some of the main ideas underlying the algorithms of major interest today are reviewed. The concept of removing redundancy by linear prediction is reviewed, first in the context of predictive quantization or DPCM. Then linear predictive coding, adaptive predictive coding, and vector quantization are discussed. The concepts of excitation coding via analysis-by-synthesis, vector-sum excitation codebooks, and adaptive postfiltering are explained. The main ideas of vector excitation coding (VXC), or code-excited linear prediction (CELP), are presented. Finally, low-delay VXC coding and phonetic segmentation for VXC are described.
Ishac Bertran
2012-08-01
"Exploring the potential of code to communicate at the level of poetry," the code {poems} project solicited submissions from codewriters in response to the notion of a poem, written in a software language which is semantically valid. These selections reveal the inner workings, constitutive elements, and styles of both a particular software and its authors.
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2011-01-01
Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal as polynomial expansions in terms of the reflection and dip angles in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.
Richtárik, Peter
2008-01-01
In this paper we propose and analyze a variant of the level method [4], which is an algorithm for minimizing nonsmooth convex functions. The main work per iteration is spent on 1) minimizing a piecewise-linear model of the objective function and on 2) projecting onto the intersection of the feasible region and a polyhedron arising as a level set of the model. We show that by replacing exact computations in both cases by approximate computations, in relative scale, the theoretical ...
Approximate Bayesian recursive estimation
Kárný, Miroslav
2014-01-01
Vol. 285, No. 1 (2014), pp. 100-111. ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords: Approximate parameter estimation * Bayesian recursive estimation * Kullback-Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf
Local approximate inference algorithms
Jung, Kyomin; Shah, Devavrat
2006-01-01
We present a new local approximation algorithm for computing Maximum a Posteriori (MAP) and log-partition function for arbitrary exponential family distribution represented by a finite-valued pair-wise Markov random field (MRF), say $G$. Our algorithm is based on decomposition of $G$ into {\\em appropriately} chosen small components; then computing estimates locally in each of these components and then producing a {\\em good} global solution. We show that if the underlying graph $G$ either excl...
Fragments of approximate counting
Buss, S. R.; Kolodziejczyk, L. A.; Thapen, Neil
2014-01-01
Vol. 79, No. 2 (2014), pp. 496-525. ISSN 0022-4812 R&D Projects: GA AV ČR IAA100190902 Institutional support: RVO:67985840 Keywords: approximate counting * bounded arithmetic * ordering principle Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=9287274&fileId=S0022481213000376
Highlights: • Development of optimization rules for S2 quadrature sets. • Studying the dependency of optimized S2 quadratures on composition and geometry. • Demonstrating S2 procedures preserving the features of higher approximations. - Abstract: The discrete ordinates method relies on approximating the integral term of the transport equation with the aid of quadrature summation rules. These quadratures are usually based on certain assumptions which assure specific symmetry rules and transport/diffusion limits. Generally, these assumptions are not problem-dependent, which results in inaccuracies in some instances. Here, various methods have been developed for more accurate estimation of the independent angle in the S2 approximation, as it is tightly related to valid estimation of the diffusion coefficient/length. We proposed and examined a method to reduce a complicated problem, usually consisting of many energy groups and discrete directions (SN), to an equivalent one-group S2 problem while mostly preserving the general features of the original model. Some numerical results are demonstrated to show the accuracy of the proposed method
A simple approach employing properties of solutions of differential equations is adopted to derive an appropriate extension of the WKBJ method. Some of the earlier techniques that are commonly in use are unified, whereby the general approximate solution to a second-order homogeneous linear differential equation is presented in a standard form that is valid for all orders. In comparison to other methods, the present one is shown to be leading in the order of iteration, and thus possibly has the ability of accelerating the convergence of the solution. The method is also extended for the solution of inhomogeneous equations. (author)
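For orientation, the leading-order WKBJ form being extended is the standard one: for y'' + Q(x) y = 0 with slowly varying Q,

```latex
y(x) \;\approx\; Q(x)^{-1/4}
  \exp\!\left(\pm i\int^{x}\!\sqrt{Q(t)}\;\mathrm{d}t\right).
```

Higher orders correct the exponent and amplitude iteratively; the author's unified expression packages these corrections so that one standard form remains valid at every order. (The display above is the textbook leading order, not the author's extension.)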
Approximation by Cylinder Surfaces
Randrup, Thomas
1997-01-01
We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal projection of the surface onto this plane, a reference curve is determined by use of methods for thinning of binary images. Finally, the cylinder surface is constructed as follows: the directrix of the cylinder surface is determined by a least squares method minimizing the distance to the points in the...
Finite elements and approximation
Zienkiewicz, O C
2006-01-01
A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises.Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o
Approximations to Euler's constant
We study the problem of finding good approximations to Euler's constant γ = lim_{n→∞} S_n, where S_n = Σ_{k=1}^{n} 1/k − log(n+1), by linear forms in logarithms and harmonic numbers. In 1995, C. Elsner showed that the slow convergence of the sequence S_n can be significantly improved if S_n is replaced by linear combinations of S_n with integer coefficients. In this paper, considering more general linear transformations of the sequence S_n, we establish new accelerating convergence formulae for γ. Our estimates sharpen and generalize recent results of Elsner, Rivoal, and the author. (author)
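A quick numerical sketch shows how slowly S_n converges and how much a correction term helps. (The 1/(2(n+1)) correction below is a standard first-order asymptotic fix, not Elsner's integer-coefficient combination.)

```python
import math

GAMMA = 0.5772156649015329  # Euler's constant, for reference

def S(n):
    """S_n = sum_{k=1}^{n} 1/k - log(n+1), which tends to gamma."""
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n + 1)

for n in (10, 100, 1000):
    raw = abs(S(n) - GAMMA)                            # error ~ 1/(2n): slow
    corrected = abs(S(n) + 1 / (2 * (n + 1)) - GAMMA)  # error ~ 1/(12 n^2)
    print(n, raw, corrected)
```

At n = 1000 the raw error is about 5e-4, while the corrected value is within 1e-6 of γ; accelerated combinations of the kind studied in the paper improve on this further.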
Chen, Dan
2012-01-01
We consider the problem of approximating the majority depth (Liu and Singh, 1993) of a point q with respect to an n-point set, S, by random sampling. At the heart of this problem is a data structures question: How can we preprocess a set of n lines so that we can quickly test whether a randomly selected vertex in the arrangement of these lines is above or below the median level. We describe a Monte-Carlo data structure for this problem that can be constructed in O(n log n) time, can answer queries in O((log n)^{4/3}) expected time, and answers correctly with high probability.
The Compact Approximation Property does not imply the Approximation Property
Willis, George A.
1992-01-01
It is shown how to construct, given a Banach space which does not have the approximation property, another Banach space which does not have the approximation property but which does have the compact approximation property.
Cox, Geoff
Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...
This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient conditions; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and water-moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flowrate in parallel channels, coupled or not by conduction across the plates, is computed for given conditions of pressure drop or flowrate, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety-rod behavior, is a one-dimensional, multi-channel code, and has as its complement (FLID) a one-channel, two-dimensional code. (authors)
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2012-05-01
Many of the explicit prestack traveltime relations used in practice are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multifocusing, based on the double square-root (DSR) equation, and the common reflection stack (CRS) approaches. Using the DSR equation, I constructed the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I recast the eikonal in terms of the reflection angle and, thus, derived expansion-based solutions of this eikonal in terms of the difference between the source and receiver velocities in a generally inhomogeneous background medium. The zero-order term solution, corresponding to ignoring the lateral velocity variation in estimating the prestack part, is free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. The higher-order terms include limitations for horizontally traveling waves; however, we can readily enforce stability constraints to avoid such singularities. In fact, another expansion over reflection angle can help us avoid these singularities by requiring the source and receiver velocities to be different. On the other hand, expansions in terms of reflection angles result in singularity-free equations. For a homogeneous background medium, as a test, the solutions are reasonably accurate to large reflection and dip angles. A Marmousi example demonstrated the usefulness and versatility of the formulation. © 2012 Society of Exploration Geophysicists.
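For a constant-velocity background (a sanity-check case only; the paper's expansions target inhomogeneous media), the DSR traveltime being analyzed reduces to

```latex
t \;=\; \sqrt{\tau_0^{2} + \frac{h_s^{2}}{v^{2}}}
  \;+\; \sqrt{\tau_0^{2} + \frac{h_r^{2}}{v^{2}}},
```

with τ0 the one-way vertical time to the image point and h_s, h_r the horizontal source and receiver distances from it; the two square roots are the source and receiver legs whose angle expansions the abstract describes.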
Interacting boson approximation
Lecture notes on the Interacting Boson Approximation are given. Topics include: angular momentum tensors; properties of T_i^{(n)} matrices; T_i^{(n)} matrices as Clebsch-Gordan coefficients; construction of higher-rank tensors; normalization: trace of products of two s-rank tensors; completeness relation; algebra of U(N); eigenvalue of the quadratic Casimir operator for U(3); general result for U(N); angular momentum content of the U(3) representation; p-boson model; Hamiltonian; quadrupole transitions; S,P boson model; expectation value of the dipole operator; S-D model: U(6); quadratic Casimir operator; an O(5) subgroup; an O(6) subgroup; properties of O(5) representations; quadratic Casimir operator; quadratic Casimir operator for U(6); decomposition via the SU(5) chain; a special O(3) decomposition of SU(3); useful identities; a useful property of D_{αβγ} (α,β,γ = 4-8) as coupling coefficients; explicit construction of T_x^{(2)} and d_{αβγ}; D-coefficients; eigenstates of T_3; and a summary of T = 2 states
Operators of Approximations and Approximate Power Set Spaces
ZHANG Xian-yong; MO Zhi-wen; SHU Lan
2004-01-01
Boundary inner and outer operators are introduced, and the union, intersection, and complement operators of approximations are redefined. The approximation operators preserve union, intersection, and complement, so rough set theory is enriched from both the operator-oriented and set-oriented views. Approximate power set spaces are defined, and it is proved that the approximation operators are epimorphisms from the power set space to the approximate power set spaces. Some basic properties of approximate power set spaces are obtained via these epimorphisms, in contrast to the power set space.
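The lower and upper approximation operators that this abstract builds on can be sketched in a few lines (the textbook rough-set definitions, not the paper's redefined operators):

```python
def approximations(blocks, X):
    """Lower/upper approximation of X under a partition of the universe
    into equivalence classes (blocks), in the classical rough-set sense."""
    X = set(X)
    lower = {e for b in blocks if set(b) <= X for e in b}  # classes inside X
    upper = {e for b in blocks if set(b) & X for e in b}   # classes meeting X
    return lower, upper

blocks = [{1, 2}, {3, 4}, {5}]
lower, upper = approximations(blocks, {1, 2, 3})
print(lower, upper)  # {1, 2} and {1, 2, 3, 4}
```

Everything in the lower approximation certainly belongs to X; everything outside the upper approximation certainly does not; the difference is the boundary region the paper's inner and outer operators act on.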
Approximation algorithms and hardness of approximation for knapsack problems
Buhrman, H.; Loff, B.; Torenvliet, L.
2012-01-01
We show various hardness-of-approximation results for knapsack and related problems; in particular we show that unless the Exponential-Time Hypothesis is false, subset-sum cannot be approximated any better than with an FPTAS. We also give a simple new algorithm for approximating knapsack
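To make the FPTAS reference concrete, here is the classical trim-based approximation scheme for subset-sum (the textbook algorithm, not the paper's construction; names are mine):

```python
def approx_subset_sum(items, target, eps):
    """Trim-based FPTAS for subset sum: returns an achievable sum <= target
    that is at least (1 - eps) times the optimal achievable sum."""
    L = [0]
    delta = eps / (2 * len(items))    # per-step trimming tolerance
    for x in items:
        merged = sorted(set(L) | {v + x for v in L if v + x <= target})
        trimmed, last = [], -1.0
        for v in merged:              # drop values within (1 + delta) of a kept one
            if v > last * (1 + delta):
                trimmed.append(v)
                last = v
        L = trimmed
    return max(L)

best = approx_subset_sum([104, 102, 201, 101], target=308, eps=0.2)
print(best)   # near-optimal: the optimum is 307 = 104 + 102 + 101
```

Trimming keeps the list length polynomial in 1/eps and the input size, which is what makes the scheme fully polynomial.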
Approximate nonlinear self-adjointness and approximate conservation laws
In this paper, approximate nonlinear self-adjointness for perturbed PDEs is introduced and its properties are studied. Consequently, approximate conservation laws which cannot be obtained by the approximate Noether theorem are constructed by means of the method. As an application, a class of perturbed nonlinear wave equations is considered to illustrate the effectiveness. (paper)
Optimal codes as Tanner codes with cyclic component codes
Høholdt, Tom; Pinero, Fernando; Zeng, Peng
2014-01-01
In this article we study a class of graph codes with cyclic component codes, realized as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...
Ravishankar, C., Hughes Network Systems, Germantown, MD
1998-05-08
Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk, and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk, and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely on the basis of a binary decision. Hence the end-to-end performance of a digital link is essentially independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible, and secure services that can carry a multitude of signal types (such as voice, data, and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
$\\sigma $ -Approximately Contractible Banach Algebras
Momeni, M; Yazdanpanah, T.; Mardanbeigi, M. R.
2012-01-01
We investigate $\\sigma $ -approximate contractibility and $\\sigma $ -approximate amenability of Banach algebras, which are extensions of usual notions of contractibility and amenability, respectively, where $\\sigma $ is a dense range or an idempotent bounded endomorphism of the corresponding Banach algebra.
Approximation by planar elastic curves
Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge
2015-01-01
We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.
Approximate sine-Gordon solitons
Stratopoulos, G.N. (Dept. of Mathematical Sciences, Durham Univ. (United Kingdom)); Zakrzewski, W.J. (Dept. of Mathematical Sciences, Durham Univ. (United Kingdom))
1993-08-01
We look at the recently proposed scheme of approximating a sine-Gordon soliton by an expression derived from two dimensional instantons. We point out that the scheme of Sutcliffe in which he uses two dimensional instantons can be generalised to higher dimensions and that these generalisations produce even better approximations than the original approximation. We also comment on generalisations to other models. (orig.)
NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault, their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's manual, describing simulation procedures, input data preparation, output and example test cases
Exact constants in approximation theory
Korneichuk, N
1991-01-01
This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are based ...
International Conference Approximation Theory XIV
Schumaker, Larry
2014-01-01
This volume developed from papers presented at the international conference Approximation Theory XIV, held April 7–10, 2013 in San Antonio, Texas. The proceedings contains surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.
Delbecq, J.M
1999-07-01
The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R&D division of Electricité de France (EDF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (material behaviour, large deformations, specific loads, unloading and loss-of-load-proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and the metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)
Beyond Stabilizer Codes II: Clifford Codes
Klappenecker, Andreas; Roetteler, Martin
2000-01-01
Knill introduced a generalization of stabilizer codes, in this note called Clifford codes. It remained unclear whether or not Clifford codes can be superior to stabilizer codes. We show that Clifford codes are stabilizer codes provided that the abstract error group has an abelian index group. In particular, if the errors are modelled by tensor products of Pauli matrices, then the associated Clifford codes are necessarily stabilizer codes.
The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States available in the public domain. The present code is the direct descendant of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross-section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids
This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code. ANIMAL's physical model also appears. Formulated are temporal and spatial finite-difference equations in a manner that facilitates implementation of the algorithm. Outlined are the functions of the algorithm's FORTRAN subroutines and variables
Legendre-tau approximations for functional differential equations
Ito, K.; Teglas, R.
1986-01-01
The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison between the latter and cubic spline approximation is made.
Wang, Jim Jing-Yan
2014-07-06
Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving for the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.
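As a minimal, unsupervised illustration of the sparse-coding step itself (plain ISTA against a fixed random codebook; the paper's semi-supervised objective, labels, and classifier are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random codebook D with unit-norm codewords (columns) and a 2-sparse sample.
D = rng.standard_normal((8, 16))
D /= np.linalg.norm(D, axis=0)
coef = np.zeros(16)
coef[0], coef[3] = 2.0, -1.5
x = D @ coef

def sparse_code(x, D, lam=0.1, iters=500):
    """ISTA for min_s 0.5*||x - D s||^2 + lam*||s||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
    s = np.zeros(D.shape[1])
    for _ in range(iters):
        s = s - step * (D.T @ (D @ s - x))                        # gradient step
        s = np.sign(s) * np.maximum(np.abs(s) - lam * step, 0.0)  # soft threshold
    return s

s = sparse_code(x, D)
print(np.count_nonzero(np.abs(s) > 1e-3), np.linalg.norm(x - D @ s))
```

The soft-thresholding step is what produces sparsity; the semi-supervised method of the paper adds label and classifier terms to this reconstruction objective.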
Approximate solutions for the skyrmion
Ponciano, J A; Fanchiotti, H; Canal-Garcia, C A
2001-01-01
We reconsider the Euler-Lagrange equation for the Skyrme model in the hedgehog ansatz and study the analytical properties of the solitonic solution. In view of the lack of a closed form solution to the problem, we work on approximate analytical solutions. We show that Pade approximants are well suited to continue analytically the asymptotic representation obtained in terms of a power series expansion near the origin, obtaining explicit approximate solutions for the Skyrme equations. We improve the approximations by applying the 2-point Pade approximant procedure whereby the exact behaviour at spatial infinity is incorporated. An even better convergence to the exact solution is obtained by introducing a modified form for the approximants. The new representations share the same analytical properties with the exact solution at both small and large values of the radial variable r.
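The Padé machinery used above can be illustrated with the generic one-point [L/M] construction from a power series (a minimal numpy sketch; the paper's 2-point and modified approximants are not reproduced, and the use of exp(x) here is purely illustrative):

```python
import numpy as np
from math import factorial

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c (len >= L+M+1).

    Returns (p, q): numerator and denominator coefficients, with q[0] = 1."""
    c = np.asarray(c, dtype=float)
    # Denominator: solve sum_{j=1..M} q[j] c[k-j] = -c[k] for k = L+1..L+M
    A = np.array([[c[k - j] if k - j >= 0 else 0.0
                   for j in range(1, M + 1)]
                  for k in range(L + 1, L + M + 1)])
    b = -c[L + 1:L + M + 1]
    q = np.concatenate(([1.0], np.linalg.solve(A, b)))
    # Numerator by convolution: p[k] = sum_{j=0..min(k,M)} q[j] c[k-j]
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return p, q

# Taylor coefficients of exp(x); the [2/2] approximant is
# (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12), accurate to O(x^5) at the origin.
c = [1.0 / factorial(k) for k in range(6)]
p, q = pade(c, 2, 2)
```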
The Smoothed Approximate Linear Program
Desai, V V; Moallemi, C C
2009-01-01
We present a novel linear program for the approximation of the dynamic programming cost-to-go function in high-dimensional stochastic control problems. LP approaches to approximate DP have typically relied on a natural `projection' of a well studied linear program for exact dynamic programming. Such programs restrict attention to approximations that are lower bounds to the optimal cost-to-go function. Our program--the `smoothed approximate linear program'--is distinct from such approaches and relaxes the restriction to lower bounding approximations in an appropriate fashion while remaining computationally tractable. Doing so appears to have several advantages: First, we demonstrate substantially superior bounds on the quality of approximation to the optimal cost-to-go function afforded by our approach. Second, experiments with our approach on a challenging problem (the game of Tetris) show that the approach outperforms the existing LP approach (which has previously been shown to be competitive with several AD...
Approximate Grammar for Information Extraction
Sriram, V; Reddy, B. Ravi Sekar; Sangal, R.
2003-01-01
In this paper, we present the concept of approximate grammar and how it can be used to extract information from a document. As the structure of informational strings cannot be well defined in a document, we cannot use conventional grammar rules to represent the information. Hence, the need arises to design an approximate grammar that can be used effectively to accomplish the task of information extraction. Approximate grammars are a novel step in this direction. The rules of an approximat...
BDD Minimization for Approximate Computing
Soeken, Mathias; Grosse, Daniel; Chandrasekharan, Arun; Drechsler, Rolf
2016-01-01
We present Approximate BDD Minimization (ABM) as a problem that has application in approximate computing. Given a BDD representation of a multi-output Boolean function, ABM asks whether there exists another function that has a smaller BDD representation but meets a threshold w.r.t. an error metric. We present operators to derive approximated functions and present algorithms to exactly compute the error metrics directly on the BDD representation. An experimental evaluation demonstrates the app...
The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils
Beyond the random phase approximation
Olsen, Thomas; Thygesen, Kristian S.
2013-01-01
We assess the performance of a recently proposed renormalized adiabatic local density approximation (rALDA) for ab initio calculations of electronic correlation energies in solids and molecules. The method is an extension of the random phase approximation (RPA) derived from time-dependent density...
Matrix-Free Approximate Equilibration
Bradley, Andrew M.; Murray, Walter
2011-01-01
The condition number of a diagonally scaled matrix, for appropriately chosen scaling matrices, is often less than that of the original. Equilibration scales a matrix so that the scaled matrix's row and column norms are equal. Scaling can be approximate. We develop approximate equilibration algorithms for nonsymmetric and symmetric matrices having signed elements that access a matrix only by matrix-vector products.
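The matrix-vector-product-only access pattern can be sketched with a stochastic estimator (an illustrative sketch with hypothetical function names, not the authors' algorithm): with independent ±1 probe vectors x, E[(Ax)_i²] equals the squared 2-norm of row i, so scaling factors can be estimated without ever reading the entries of A.

```python
import numpy as np

def estimate_row_norms_sq(matvec, n_cols, n_rows, samples=2000, seed=0):
    """Estimate squared row 2-norms of A using only x -> A @ x.

    With independent +/-1 entries in x, E[(A x)_i^2] = sum_j A_ij^2."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(n_rows)
    for _ in range(samples):
        x = rng.choice([-1.0, 1.0], size=n_cols)
        acc += matvec(x) ** 2
    return acc / samples

# Demo: row-equilibrate a small test matrix through its matvec only.
A = np.array([[3.0, 0.0, 4.0],
              [0.0, 1.0, 0.0],
              [2.0, 2.0, 1.0]])        # true squared row norms: 25, 1, 9
r = estimate_row_norms_sq(lambda x: A @ x, 3, 3)
D = np.diag(1.0 / np.sqrt(r))          # row scaling from the estimates
scaled_norms = np.sqrt((D @ A) ** 2 @ np.ones(3))   # all close to 1
```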
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-08-18
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
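A toy model of the voting scheme (illustrative Python, not the patented circuit; all names are ours): three approximate circuits, any one of which may deviate from the reference on a given input, feed a bitwise majority voter, and as long as at most one circuit disagrees per input the voted output matches the reference everywhere.

```python
def majority(*bits):
    """Majority vote over an odd number of 1-bit outputs."""
    assert len(bits) % 2 == 1
    return int(sum(bits) > len(bits) // 2)

# Reference circuit and three approximate variants; the third differs from
# the reference on inputs (0,1) and (1,0), but is always outvoted 2-to-1.
ref = lambda a, b: a & b
approx = [lambda a, b: a & b,
          lambda a, b: a & b,
          lambda a, b: a | b]

voted = {(a, b): majority(*(f(a, b) for f in approx))
         for a in (0, 1) for b in (0, 1)}
```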
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-12-22
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
N-variable rational approximants
"Desirable properties" of a two-variable generalization of Pade approximants are laid down. The "Chisholm approximants" are defined and are shown to obey nearly all of these properties; the alternative ways of completing a unique definition are discussed, and the "prong structure" of the defining equations is elucidated. Several generalizations and variants of Chisholm approximants are described: N-variable diagonal, 2-variable simple off-diagonal, N-variable simple and general off-diagonal, and rotationally covariant 2-variable approximants. All of the 2-variable approximants are capable of representing singularities of functions of two variables, and of analytically continuing beyond the polycylinder of convergence of the double series. 8 figures
NOVEL BIPHASE CODE -INTEGRATED SIDELOBE SUPPRESSION CODE
Wang Feixue; Ou Gang; Zhuang Zhaowen
2004-01-01
A kind of novel binary phase code named sidelobe suppression code is proposed in this paper. It is defined to be the code whose corresponding optimal sidelobe suppression filter outputs the minimum sidelobes. It is shown that there do exist sidelobe suppression codes better than the conventional optimal codes-Barker codes. For example, the sidelobe suppression code of length 11 with filter of length 39 has better sidelobe level up to 17dB than that of Barker code with the same code length and filter length.
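For context on the Barker baseline mentioned above: the aperiodic autocorrelation of the length-13 Barker code has peak 13 and peak sidelobe magnitude 1, a peak-to-sidelobe ratio of about 22.3 dB (a quick numpy check; the sidelobe suppression filter design itself is not reproduced here):

```python
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
acf = np.correlate(barker13, barker13, mode="full")  # aperiodic autocorrelation
peak = acf[len(barker13) - 1]                        # mainlobe at zero lag
sidelobes = np.delete(acf, len(barker13) - 1)        # everything off zero lag
psl_db = 20 * np.log10(peak / np.max(np.abs(sidelobes)))  # ~22.3 dB
```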
An approach to cylindrical approximation of toroidal geometry
Neutron transport processes in Tokamak fusion devices are described with the same mathematical apparatus as that used in fission reactor calculations. The aim of this paper is to show some of these methods in the toroidal geometry problem. A new approach to cylindrical approximation is described. All calculations are performed with the ANISN one-dimensional Sn code. To validate the present method, comparisons have been made with Monte Carlo results, as well as with calculations done on the previous geometry approximation (author)
From concatenated codes to graph codes
Justesen, Jørn; Høholdt, Tom
2004-01-01
We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing in...
Concatenated codes with convolutional inner codes
Justesen, Jørn; Thommesen, Christian; Zyablov, Viktor
1988-01-01
The minimum distance of concatenated codes with Reed-Solomon outer codes and convolutional inner codes is studied. For suitable combinations of parameters the minimum distance can be lower-bounded by the product of the minimum distances of the inner and outer codes. For a randomized ensemble of...... concatenated codes a lower bound of the Gilbert-Varshamov type is proved...
Chebyshev polynomial approximation to approximate partial differential equations
Caporale, Guglielmo Maria; Cerrato, Mario
2008-01-01
This paper suggests a simple method based on Chebyshev approximation at Chebyshev nodes to approximate partial differential equations. The methodology simply consists in determining the value function by using a set of nodes and basis functions. We provide two examples: pricing a European option and determining the optimal policy for shutting down machinery. The suggested method is flexible, easy to program and efficient. It is also applicable in other fields, providing efficient solutions t...
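The nodes-plus-basis-functions recipe can be sketched in a few lines of numpy (a generic Chebyshev interpolation demo with exp as a stand-in for the value function; the paper's option-pricing and machine-replacement examples are not reproduced):

```python
import numpy as np

n = 16
k = np.arange(n)
nodes = np.cos((2 * k + 1) * np.pi / (2 * n))   # Chebyshev nodes on [-1, 1]
f = np.exp(nodes)                                # function values at the nodes

# Degree n-1 fit through n Chebyshev nodes = interpolation in the Chebyshev basis
coeffs = np.polynomial.chebyshev.chebfit(nodes, f, n - 1)

# Maximum error over a fine grid; geometric convergence for analytic functions
x = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(np.polynomial.chebyshev.chebval(x, coeffs) - np.exp(x)))
```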
Safar, Anuar Mat; Aljunid, Syed Alwee; Arief, Amir Razif; Nordin, Junita; Saad, Naufal
2012-01-01
The use of minimal multiple access interference (MAI) in code design is investigated. Applying projection and mapping techniques, a code that has zero cross correlation (ZCC) between users in optical code division multiple access (OCDMA) is presented in this paper. The system is based on an incoherent light source (LED), spectral amplitude coding (SAC), and direct detection techniques at the receiver. Using the power spectral density (PSD) function and a Gaussian approximation, we obtain the signal-to-noise ratio (SNR) and the bit-error rate (BER) to measure the code performance. Comparing with other existing codes, e.g., Hadamard, MFH and MDW codes, we show that our code performs better at a BER of 10^-9 in terms of the number of simultaneous users. We also demonstrate the comparison between the theoretical and simulation analyses, where the results are close to one another.
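The SNR-to-BER step can be sketched with the Gaussian approximation (a generic sketch; the exponent convention BER = ½ erfc(√(SNR/8)) is one commonly used in SAC-OCDMA analyses and is an assumption here, not taken from this paper):

```python
from math import erfc, sqrt

def ber_from_snr(snr):
    """Gaussian-approximation bit-error rate from electrical SNR.

    Convention (assumed, not from the paper): BER = 0.5 * erfc(sqrt(SNR / 8))."""
    return 0.5 * erfc(sqrt(snr / 8.0))

# BER falls steeply with SNR; under this convention it reaches the 1e-9
# region around SNR ~ 144 (about 21.6 dB).
```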
Yoshida, Shin'ichirou
2012-01-01
We present a new numerical code to compute non-axisymmetric eigenmodes of rapidly rotating relativistic stars by adopting spatially conformally flat approximation of general relativity. The approximation suppresses the radiative degree of freedom of relativistic gravity and the field equations are cast into a set of elliptic equations. The code is tested against the low-order f- and p-modes of slowly rotating stars for which a good agreement is observed in frequencies computed by our new code...
The efficiency of Flory approximation
The Flory approximation for the self-avoiding chain problem is compared with a conventional perturbation theory expansion. While in perturbation theory each term is averaged over the unperturbed set of configurations, the Flory approximation is equivalent to the perturbation theory with the averaging over the stretched set of configurations. This imposes restrictions on the integration domain in higher order terms and they can be treated self-consistently. The accuracy δν/ν of Flory approximation for self-avoiding chain problems is estimated to be 2-5% for 1 < d < 4. (orig.)
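For reference, the Flory estimate discussed above has the closed form ν = 3/(d + 2), which is exact at d = 1, 2 and 4 and about 2% high at d = 3, consistent with the 2-5% accuracy quoted (a one-line check):

```python
def flory_nu(d):
    """Flory estimate of the self-avoiding-walk size exponent nu in d dimensions."""
    return 3.0 / (d + 2)

# d = 3: Flory gives 0.600 versus the accepted numerical value of about 0.588.
rel_err_3d = abs(flory_nu(3) - 0.588) / 0.588   # roughly 2%
```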
Approximate Reanalysis in Topology Optimization
Amir, Oded; Bendsøe, Martin P.; Sigmund, Ole
2009-01-01
In the nested approach to structural optimization, most of the computational effort is invested in the solution of the finite element analysis equations. In this study, the integration of an approximate reanalysis procedure into the framework of topology optimization of continuum structures is...... investigated. The nested optimization problem is re-formulated to accommodate the use of an approximate displacement vector and the design sensitivities are derived accordingly. It is shown that relatively rough approximations are acceptable since the errors are taken into account in the sensitivity analysis...
Dutta, Sagarmoy
2010-01-01
In this paper, we define and study \\emph{quantum cyclic codes}, a generalisation of cyclic codes to the quantum setting. Previously studied examples of quantum cyclic codes were all quantum codes obtained from classical cyclic codes via the CSS construction. However, the codes that we study are much more general. In particular, we construct cyclic stabiliser codes with parameters $[[5,1,3
Weighted approximation with varying weight
Totik, Vilmos
1994-01-01
A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form $w^n P_n$. The new technique settles several open problems, and it leads to a simple proof for the strong asymptotics on some $L_p$ extremal problems on the real line with exponential weights, which, for the case p=2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some $L_p$ extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Pade approximation. The approach is potential-theoretic, but the text is self-contained.
Approximate maximizers of intricacy functionals
Buzzi, Jerome
2009-01-01
G. Edelman, O. Sporns, and G. Tononi introduced in theoretical biology the neural complexity of a family of random variables. This functional is a special case of intricacy, i.e., an average of the mutual information of subsystems whose weights have good mathematical properties. Moreover, its maximum value grows at a definite speed with the size of the system. In this work, we compute exactly this speed of growth by building "approximate maximizers" subject to an entropy condition. These approximate maximizers work simultaneously for all intricacies. We also establish some properties of arbitrary approximate maximizers, in particular the existence of a threshold in the size of subsystems of approximate maximizers: most smaller subsystems are almost equidistributed, most larger subsystems determine the full system. The main ideas are a random construction of almost maximizers with a high statistical symmetry and the consideration of entropy profiles, i.e., the average entropies of sub-systems of a given size. ...
Metrical Diophantine approximation for quaternions
Dodson, Maurice
2011-01-01
The metrical theory of Diophantine approximation for quaternions is developed using recent results in the general theory. In particular, Quaternionic analogues of the classical theorems of Khintchine, Jarnik and Jarnik-Besicovitch are established.
Metrical Diophantine approximation for quaternions
Dodson, Maurice; Everitt, Brent
2014-11-01
Analogues of the classical theorems of Khintchine, Jarnik and Jarnik-Besicovitch in the metrical theory of Diophantine approximation are established for quaternions by applying results on the measure of general `lim sup' sets.
Reinforcement Learning via AIXI Approximation
Veness, Joel; Ng, Kee Siong; Hutter, Marcus; Silver, David
2010-01-01
This paper introduces a principled approach for the design of a scalable general reinforcement learning agent. This approach is based on a direct approximation of AIXI, a Bayesian optimality notion for general reinforcement learning agents. Previously, it has been unclear whether the theory of AIXI could motivate the design of practical algorithms. We answer this hitherto open question in the affirmative, by providing the first computationally feasible approximation to the AIXI agent. To deve...
Binary nucleation beyond capillarity approximation
Kalikmanov, V.I.
2010-01-01
Large discrepancies between binary classical nucleation theory (BCNT) and experiments result from adsorption effects and the inability of BCNT, based on the phenomenological capillarity approximation, to treat small clusters. We propose a model aimed at eliminating both of these deficiencies. Adsorption is taken into account within the Gibbsian approximation. Binary clusters are treated by means of statistical-mechanical considerations: tracing out the molecular degrees of freedom of the more volatil...
Approximate factorization with source terms
Shih, T. I.-P.; Chyu, W. J.
1991-01-01
A comparative evaluation is made of three methodologies to determine which offers the smallest approximate factorization error. While two of the methods lead to more efficient algorithms in cases where the factors that do not contain source terms can be diagonalized, the third method generates the smallest approximate factorization error. This method may be preferred when the norms of the source terms are large and transient solutions are of interest.
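The role of the source terms can be seen in the standard two-factor approximate factorization identity (a generic sketch, not the specific schemes compared in the paper):

$(I + \Delta t\, A_x)(I + \Delta t\, A_y) = I + \Delta t\,(A_x + A_y) + \Delta t^2 A_x A_y,$

so the factored operator differs from the unfactored one by the cross term $\Delta t^2 A_x A_y$. How a source term $S$ (and its Jacobian) is distributed among the factors changes this error term, which is the trade-off the three methodologies above address.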
Chebyshev approximation for multivariate functions
Sukhorukova, Nadezda; Ugon, Julien; Yost, David
2015-01-01
In this paper, we derive optimality conditions (Chebyshev approximation) for multivariate functions. The theory of Chebyshev (uniform) approximation for univariate functions is very elegant. The optimality conditions are based on the notion of alternance (maximal deviation points with alternating deviation signs). It is not straightforward, however, to extend the notion of alternance to the case of multivariate functions. There have been several attempts to extend the theory of Cheby...
Analytic Approximations for Spread Options
Carol Alexander; Aanand Venkatramanan
2007-01-01
Even in the simple case that two price processes follow correlated geometric Brownian motions with constant volatility, no analytic formula for the price of a standard European spread option has been derived, except when the strike is zero, in which case the option becomes an exchange option. This paper expresses the price of a spread option as the price of a compound exchange option and hence derives a new analytic approximation for its price and hedge ratios. This approximation has several ad...
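As a point of reference for the approximation discussed above, the classical Kirk approximation (a standard baseline from the option-pricing literature, with zero interest rate assumed for brevity; this is not the compound-exchange-option formula derived in the paper) prices the spread call by treating S2 + K as approximately lognormal:

```python
from math import erf, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def kirk_spread_call(s1, s2, k, vol1, vol2, rho, t):
    """Kirk-style approximation for a European call on S1 - S2 with strike k.

    Zero rates and carries assumed; an illustrative baseline only."""
    w = s2 / (s2 + k)                # weight of S2 in the composite asset S2 + k
    vol = sqrt(vol1**2 - 2.0 * rho * vol1 * vol2 * w + (vol2 * w) ** 2)
    d1 = (log(s1 / (s2 + k)) + 0.5 * vol**2 * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return s1 * norm_cdf(d1) - (s2 + k) * norm_cdf(d2)
```

With k = 0 the effective volatility reduces to that of Margrabe's exchange option, which is the exact zero-strike case mentioned in the abstract.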
Diffusion approximation for modeling of 3-D radiation distributions
A three-dimensional transport code DIF3D, based on the diffusion approximation, is used to model the spatial distribution of radiation energy arising from volumetric isotropic sources. Future work will be concerned with the determination of irradiances and modeling of realistic scenarios, relevant to the battlefield conditions. 8 refs., 4 figs
Using projections and correlations to approximate probability distributions
Karlen, D A
1998-01-01
A method to approximate continuous multi-dimensional probability density functions (PDFs) using their projections and correlations is described. The method is particularly useful for event classification when estimates of systematic uncertainties are required and for the application of an unbinned maximum likelihood analysis when an analytic model is not available. A simple goodness of fit test of the approximation can be used, and simulated event samples that follow the approximate PDFs can be efficiently generated. The source code for a FORTRAN-77 implementation of this method is available.
Implantation of the MC2 computer code
The implantation of the MC2 computer code on the CDC system is presented. The MC2 computer code calculates multigroup cross sections for typical compositions of fast reactors. The multigroup constants are calculated using solutions of the P1 or B1 approximations for a given buckling value as the weighting function. (M.C.K.)
Comparing numerical and analytic approximate gravitational waveforms
Afshari, Nousha; Lovelace, Geoffrey; SXS Collaboration
2016-03-01
A direct observation of gravitational waves will test Einstein's theory of general relativity under the most extreme conditions. The Laser Interferometer Gravitational-Wave Observatory, or LIGO, began searching for gravitational waves in September 2015 with three times the sensitivity of initial LIGO. To help Advanced LIGO detect as many gravitational waves as possible, a major research effort is underway to accurately predict the expected waves. In this poster, I will explore how the gravitational waveform produced by a long binary-black-hole inspiral, merger, and ringdown is affected by how fast the larger black hole spins. In particular, I will present results from simulations of merging black holes, completed using the Spectral Einstein Code (black-holes.org/SpEC.html), including some new, long simulations designed to mimic black hole-neutron star mergers. I will present comparisons of the numerical waveforms with analytic approximations.
Wavelet Sparse Approximate Inverse Preconditioners
Chan, Tony F.; Tang, W.-P.; Wan, W. L.
1996-01-01
There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless a drawback is that they require rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We shall justify theoretically and numerically that our approach is effective for matrices with smooth inverse. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.
Shearlets and Optimally Sparse Approximations
Kutyniok, Gitta; Lim, Wang-Q
2011-01-01
Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal both for the purpose of compression as well as for an efficient analysis is the provision of optimally sparse approximations of such functions. Recently, cartoon-like images were introduced in 2D and 3D as a suitable model class, and approximation properties were measured by considering the decay rate of the $L^2$ error of the best $N$-term approximation. Shearlet systems are to date the only representation system, which provide optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an introduction to and a survey about sparse approximations of cartoon-like images by band-limited and also compactly supported sh...
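The best N-term benchmark used above can be made concrete in any orthonormal basis (a generic numpy illustration with a random orthonormal basis standing in for a shearlet or wavelet system): keeping the N largest coefficients is optimal, and by Parseval the squared L2 error equals the energy of the discarded coefficients, so fast coefficient decay translates directly into fast error decay in N.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))   # orthonormal basis columns
f = rng.standard_normal(64)                          # signal to approximate
c = Q.T @ f                                          # expansion coefficients

N = 8
idx = np.argsort(np.abs(c))[::-1][:N]                # keep the N largest
cN = np.zeros_like(c)
cN[idx] = c[idx]
fN = Q @ cN                                          # best N-term approximation

# Parseval: squared error = energy of the discarded coefficients
err2 = np.sum((f - fN) ** 2)
```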
Relativistic regular approximations revisited: An infinite-order relativistic approximation
The concept of the regular approximation is presented as the neglect of the energy dependence of the exact Foldy-Wouthuysen transformation of the Dirac Hamiltonian. Expansion of the normalization terms leads immediately to the zeroth-order regular approximation (ZORA) and first-order regular approximation (FORA) Hamiltonians as the zeroth- and first-order terms of the expansion. The expansion may be taken to infinite order by using an un-normalized Foldy-Wouthuysen transformation, which results in the ZORA Hamiltonian and a non-unit metric. This infinite-order regular approximation, IORA, has eigenvalues which differ from the Dirac eigenvalues by order $E^3/c^4$ for a hydrogen-like system, which is a considerable improvement over the ZORA eigenvalues, and similar to the non-variational FORA energies. A further perturbation analysis yields a third-order correction to the IORA energies, TIORA. Results are presented for several systems including the neutral U atom. The IORA eigenvalues for all but the 1s spinor of the neutral system are superior even to the scaled ZORA energies, which are exact for the hydrogenic system. The third-order correction reduces the IORA error for the inner orbitals to a very small fraction of the Dirac eigenvalue. copyright 1999 American Institute of Physics
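For orientation, the zeroth-order regular approximation mentioned above is commonly written (standard form from the literature, not quoted from this abstract) as

$H^{\mathrm{ZORA}} = V + \boldsymbol{\sigma} \cdot \mathbf{p}\, \frac{c^2}{2c^2 - V}\, \boldsymbol{\sigma} \cdot \mathbf{p},$

i.e., the energy dependence of the exact Foldy-Wouthuysen transformation is dropped. IORA instead retains the un-normalized transformation to all orders, which leaves this Hamiltonian intact but introduces the non-unit metric described in the abstract.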
Quantum Convolutional BCH Codes
Aly, S A; Klappenecker, A; Roetteler, M; Sarvepalli, P K; Aly, Salah A.; Grassl, Markus; Klappenecker, Andreas; Roetteler, Martin; Sarvepalli, Pradeep Kiran
2007-01-01
Quantum convolutional codes can be used to protect a sequence of qubits of arbitrary length against decoherence. We introduce two new families of quantum convolutional codes. Our construction is based on an algebraic method which allows to construct classical convolutional codes from block codes, in particular convolutional BCH codes. These codes have the property that they contain their Euclidean, respectively Hermitian, dual codes. Hence, they can be used to define quantum convolutional codes by the stabilizer code construction. We compute BCH-like bounds on the free distances which can be controlled as in the case of block codes, and establish that the codes have non-catastrophic encoders.
Approximation methods in probability theory
Čekanavičius, Vydas
2016-01-01
This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.
Concept Approximation between Fuzzy Ontologies
Anonymous
2006-01-01
Fuzzy ontologies are efficient tools to handle fuzzy and uncertain knowledge on the semantic web; but there are heterogeneity problems when gaining interoperability among different fuzzy ontologies. This paper uses concept approximation between fuzzy ontologies based on instances to solve the heterogeneity problems. It firstly proposes an instance selection technology based on instance clustering and weighting to unify the fuzzy interpretation of different ontologies and reduce the number of instances to increase the efficiency. Then the paper resolves the problem of computing the approximations of concepts into the problem of computing the least upper approximations of atom concepts. It optimizes the search strategies by extending atom concept sets and defining the least upper bounds of concepts to reduce the searching space of the problem. An efficient algorithm for searching the least upper bounds of concept is given.
An Approximation Ratio for Biclustering
Puolamäki, Kai; Hanhijärvi, Sami; Garriga, Gemma C
2007-01-01
The problem of biclustering consists of the simultaneous clustering of rows and columns of a matrix such that each of the submatrices induced by a pair of row and column clusters is as uniform as possible. In this paper we approximate the optimal biclustering by applying one-way clustering algorithms independently on the rows and on the columns of the input matrix. We show that such a solution yields a worst-case approximation ratio of 1+sqrt(2) under L1-norm for 0-1 valued matrices, and of 2...
An Approximation Ratio for Biclustering
Puolamäki, Kai; Garriga, Gemma C
2007-01-01
The problem of biclustering consists of the simultaneous clustering of rows and columns of a matrix such that each of the submatrices induced by a pair of row and column clusters is as uniform as possible. In this paper we approximate the optimal biclustering by applying one-way clustering algorithms independently on the rows and on the columns of the input matrix. We show that such a solution yields a worst-case approximation ratio of 1+sqrt(2) under L1-norm for 0-1 valued matrices, and of 2 under L2-norm for real valued matrices.
Shearlets and Optimally Sparse Approximations
Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q
Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal both for the purpose of compression as well as for an efficient analysis is the provision of optimally sparse approximations of...... provide optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an...
Hamada, M
2006-01-01
A conjugate code pair is defined as a pair of linear codes either of which contains the dual of the other. A conjugate code pair represents the essential structure of the corresponding Calderbank-Shor-Steane (CSS) quantum code. It is known that conjugate code pairs are applicable to (quantum) cryptography. We give a construction method for efficiently decodable conjugate code pairs.
Fundamentals of convolutional coding
Johannesson, Rolf
2015-01-01
Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field. * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual
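A minimal encoder for the classic rate-1/2, constraint-length-3 convolutional code with generators (7, 5) in octal (an illustrative sketch, not code from the book): each input bit is shifted into a 3-bit state, and each generator taps the state to produce one output bit.

```python
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Rate-1/2 convolutional encoder; generators given as binary tap masks
    (defaults are the (7, 5) octal pair, constraint length K = 3)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)   # shift the bit in
        out.append(bin(g1 & state).count("1") % 2)    # parity of tapped bits
        out.append(bin(g2 & state).count("1") % 2)
    return out

# The impulse response 11 10 11 (then zeros) is the interleaved pair of
# generator polynomials, as expected for this encoder.
impulse = conv_encode([1, 0, 0, 0])
```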
Approximate Reasoning with Fuzzy Booleans
Broek, van den P.M.; Noppen, J.A.R.
2004-01-01
This paper introduces, in analogy to the concept of fuzzy numbers, the concept of fuzzy booleans, and examines approximate reasoning with the compositional rule of inference using fuzzy booleans. It is shown that each set of fuzzy rules is equivalent to a set of fuzzy rules with singleton crisp ante
Truthful approximations to range voting
Filos-Ratsika, Aris; Miltersen, Peter Bro
We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare maximi...
Analytical Approximations to Galaxy Clustering
Mo, H. J.
1997-01-01
We discuss some recent progress in constructing analytic approximations to the galaxy clustering. We show that successful models can be constructed for the clustering of both dark matter and dark matter haloes. Our understanding of galaxy clustering and galaxy biasing can be greatly enhanced by these models.
Ultrafast Approximation for Phylogenetic Bootstrap
Bui Quang Minh; Nguyen, Thi; von Haeseler, Arndt
2013-01-01
Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and
Approximation by Penultimate Stable Laws
L.F.M. de Haan (Laurens); L. Peng (Liang); H. Iglesias Pereira
1997-01-01
In certain cases partial sums of i.i.d. random variables with finite variance are better approximated by a sequence of stable distributions with indices $\alpha_n \to 2$ than by a normal distribution. We discuss when this happens and how much the convergence rate can be improved by using
Approximation properties of haplotype tagging
Dreiseitl Stephan
2006-01-01
Background: Single nucleotide polymorphisms (SNPs) are locations at which the genomic sequences of population members differ. Since these differences are known to follow patterns, disease association studies are facilitated by identifying SNPs that allow the unique identification of such patterns. This process, known as haplotype tagging, is formulated as a combinatorial optimization problem and analyzed in terms of complexity and approximation properties. Results: It is shown that the tagging problem is NP-hard but approximable within 1 + ln((n^2 - n)/2) for n haplotypes, but not approximable within (1 - ε) ln(n/2) for any ε > 0 unless NP ⊂ DTIME(n^{log log n}). A simple, very easily implementable algorithm that exhibits the above upper bound on solution quality is presented. This algorithm has running time O((2m - p + 1)(n^2 - n)/2) ≤ O(m(n^2 - n)/2), where p ≤ min(n, m), for n haplotypes of size m. As we show that the approximation bound is asymptotically tight, the algorithm presented is optimal with respect to this asymptotic bound. Conclusion: The haplotype tagging problem is hard, but approachable with a fast, practical, and surprisingly simple algorithm that cannot be significantly improved upon on a single-processor machine. Hence, significant improvement in the computational effort expended can only be expected if the computational effort is distributed and done in parallel.
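A greedy heuristic of the kind described above can be sketched as a set-cover-style selection: repeatedly pick the SNP that separates the most still-indistinguishable haplotype pairs. This is an illustrative reconstruction assuming binary haplotype strings; the paper's algorithm and its guarantee are stated more precisely:

```python
from itertools import combinations

def greedy_tag_snps(haplotypes):
    """Greedy tagging sketch: choose SNP positions until every pair of
    distinct haplotypes differs at some chosen SNP."""
    n, m = len(haplotypes), len(haplotypes[0])
    # Pairs of haplotypes that still need to be distinguished.
    uncovered = {(i, j) for i, j in combinations(range(n), 2)
                 if haplotypes[i] != haplotypes[j]}
    chosen = []
    while uncovered:
        # SNP that separates the most still-indistinguishable pairs.
        best = max(range(m), key=lambda s: sum(
            haplotypes[i][s] != haplotypes[j][s] for i, j in uncovered))
        covered = {(i, j) for i, j in uncovered
                   if haplotypes[i][best] != haplotypes[j][best]}
        if not covered:
            break
        chosen.append(best)
        uncovered -= covered
    return chosen

haps = ["0011", "0101", "0110", "1001"]
print(greedy_tag_snps(haps))  # → [1, 2]: columns 1 and 2 distinguish all four
```

Projecting the four haplotypes onto columns 1 and 2 gives (0,1), (1,0), (1,1), (0,0) — all distinct, so two tag SNPs suffice here.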
Impact of inflow transport approximation on light water reactor analysis
Choi, Sooyoung; Smith, Kord; Lee, Hyun Chul; Lee, Deokjung
2015-10-01
The impact of the inflow transport approximation on light water reactor analysis is investigated, and it is verified that the inflow transport approximation significantly improves the accuracy of the transport and transport/diffusion solutions. A methodology for an inflow transport approximation is implemented in order to generate an accurate transport cross section. The inflow transport approximation is compared to the conventional methods, which are the consistent-PN and the outflow transport approximations. The three transport approximations are implemented in the lattice physics code STREAM, and verification is performed for various verification problems in order to investigate their effects and accuracy. From the verification, it is noted that the consistent-PN and the outflow transport approximations cause significant error in calculating the eigenvalue and the power distribution. The inflow transport approximation shows very accurate and precise results for the verification problems. The inflow transport approximation shows significant improvements not only for the high leakage problem but also for practical large core problem analyses.
Low Rank Approximation in $G_0W_0$ Approximation
Shao, Meiyue; Yang, Chao; Liu, Fang; da Jornada, Felipe H; Deslippe, Jack; Louie, Steven G
2016-01-01
The single particle energies obtained in a Kohn--Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self energy term that properly accounts for dynamic screening of electrons is approximated. The $G_0W_0$ approximation is a widely used technique in which the self energy is expressed as the convolution of a non-interacting Green's function ($G_0$) and a screened Coulomb interaction ($W_0$) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating $W_0$ at multiple frequencies. In this paper, we discuss how the cos...
Gillespie, Neil I.; Praeger, Cheryl E.; Spiga, Pablo
2014-01-01
We introduce twisted permutation codes, which are frequency permutation arrays analogous to repetition permutation codes, namely, codes obtained from the repetition construction applied to a permutation code. In particular, we show that a lower bound for the minimum distance of a twisted permutation code is the minimum distance of a repetition permutation code. We give examples where this bound is tight, but more importantly, we give examples of twisted permutation codes with minimum distance...
Transitive nonpropelinear perfect codes
Mogilnykh, I. Yu.; Solov'eva, F. I.
2014-01-01
A code is called transitive if its automorphism group (the isometry group) acts transitively on its codewords. If there is a subgroup of the automorphism group acting regularly on the code, the code is called propelinear. Using the Magma software package we establish that among the 201 equivalence classes of transitive perfect codes of length 15 from \cite{ost} there is a unique nonpropelinear code. We solve the existence problem for transitive nonpropelinear perfect codes for any admissi...
Approximate Matching of Hierarchical Data
Augsten, Nikolaus
The goal of this thesis is to design, develop, and evaluate new methods for the approximate matching of hierarchical data represented as labeled trees. In approximate matching scenarios two items should be matched if they are similar. Computing the similarity between labeled trees is hard as in... We formally prove that the pq-gram index can be incrementally updated based on the log of edit operations without reconstructing intermediate tree versions. The incremental update is independent of the data size and scales to a large number of changes in the data. We introduce windowed pq-grams for the... pq-gram based distance between streets, introduces a global greedy matching that guarantees stable pairs, and links addresses that are stored with different granularity. The connector has been successfully tested with public administration databases. Our extensive experiments on both synthetic and real world...
Approximate Privacy: Foundations and Quantification
Feigenbaum, Joan; Schapira, Michael
2009-01-01
Increasing use of computers and networks in business, government, recreation, and almost all aspects of daily life has led to a proliferation of online sensitive data about individuals and organizations. Consequently, concern about the privacy of these data has become a top priority, particularly those data that are created and used in electronic commerce. There have been many formulations of privacy and, unfortunately, many negative results about the feasibility of maintaining privacy of sensitive data in realistic networked environments. We formulate communication-complexity-based definitions, both worst-case and average-case, of a problem's privacy-approximation ratio. We use our definitions to investigate the extent to which approximate privacy is achievable in two standard problems: the second-price Vickrey auction and the millionaires problem of Yao. For both the second-price Vickrey auction and the millionaires problem, we show that not only is perfect privacy impossible or infeasibly costly to achieve...
Hydrogen: Beyond the Classic Approximation
The classical nucleus approximation is the most frequently used approach for the resolution of problems in condensed matter physics. However, there are systems in nature where it is necessary to introduce the nuclear degrees of freedom to obtain a correct description of the properties. Examples of this are systems containing hydrogen. In this work, we have studied the resolution of the quantum nuclear problem for the particular case of the water molecule. The Hartree approximation has been used, i.e. we have considered that the nuclei are distinguishable particles. In addition, we have proposed a model to solve the tunneling process, which involves the resolution of the nuclear problem for configurations of the system away from its equilibrium position
Concentration Bounds for Stochastic Approximations
Frikha, Noufel
2012-01-01
We obtain non asymptotic concentration bounds for two kinds of stochastic approximations. We first consider the deviations between the expectation of a given function of the Euler scheme of some diffusion process at a fixed deterministic time and its empirical mean obtained by the Monte-Carlo procedure. We then give some estimates concerning the deviation between the value at a given time-step of a stochastic approximation algorithm and its target. Under suitable assumptions both concentration bounds turn out to be Gaussian. The key tool consists in exploiting accurately the concentration properties of the increments of the schemes. For the first case, as opposed to the previous work of Lemaire and Menozzi (EJP, 2010), we do not have any systematic bias in our estimates. Also, no specific non-degeneracy conditions are assumed.
Waveless Approximation Theories of Gravity
Isenberg, J A
2007-01-01
The analysis of a general multibody physical system governed by Einstein's equations is quite difficult, even if numerical methods (on a computer) are used. Some of the difficulties -- many coupled degrees of freedom, dynamic instability -- are associated with the presence of gravitational waves. We have developed a number of "waveless approximation theories" (WAT) which repress the gravitational radiation and thereby simplify the analysis. The matter, according to these theories, evolves dynamically. The gravitational field, however, is determined at each time step by a set of elliptic equations with matter sources. There is reason to believe that for many physical systems, the WAT-generated system evolution is a very accurate approximation to that generated by the full Einstein theory.
On Approximability of Block Sorting
Narayanaswamy, N S
2011-01-01
Block Sorting is a well studied problem, motivated by its applications in Optical Character Recognition (OCR), and Computational Biology. Block Sorting has been shown to be NP-Hard, and two separate polynomial time 2-approximation algorithms have been designed for the problem. But questions like whether a better approximation algorithm can be designed, and whether the problem is APX-Hard have been open for quite a while now. In this work we answer the latter question by proving Block Sorting to be Max-SNP-Hard (APX-Hard). The APX-Hardness result is based on a linear reduction of Max-3SAT to Block Sorting. We also provide a new lower bound for the problem via a new parametrized problem k-Block Merging.
Variance approximation under balanced sampling
Deville, Jean-Claude; Tillé, Yves
2016-01-01
A balanced sampling design has the interesting property that Horvitz–Thompson estimators of totals for a set of balancing variables are equal to the totals we want to estimate; therefore, the variance of Horvitz–Thompson estimators of the variables of interest is reduced as a function of their correlations with the balancing variables. Since it is hard to derive an analytic expression for the joint inclusion probabilities, we derive a general approximation of variance based on a residual technique....
Approximating Metal-Insulator Transitions
Danieli, C.; Rayanov, K.; Pavlov, B.; Martin, G.; Flach, S
2014-01-01
We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate metal-insulator transitions (MIT) at the finite iteration steps. We also report evidence on mobility ed...
Saddlepoint approximations to option prices
Rogers, L. C. G.; Zane, O.
1999-01-01
The use of saddlepoint approximations in statistics is a well-established technique for computing the distribution of a random variable whose moment generating function is known. In this paper, we apply the methodology to computing the prices of various European-style options, whose returns processes are not the Brownian motion with drift assumed in the Black-Scholes paradigm. Through a number of examples, we show that the methodology is generally accurate and fast.
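The saddlepoint density approximation for a variable with known cumulant generating function K can be sketched for a Gamma law, where the saddlepoint equation K'(ŝ) = x has a closed-form solution (an illustrative sketch, not the paper's option-pricing application):

```python
import math

def saddlepoint_density(x, K, K2, s_hat):
    """Saddlepoint density: f(x) ≈ exp(K(ŝ) - ŝx) / sqrt(2π K''(ŝ)),
    where ŝ solves K'(ŝ) = x (supplied here in closed form)."""
    s = s_hat(x)
    return math.exp(K(s) - s * x) / math.sqrt(2 * math.pi * K2(s))

# Gamma(alpha, 1): K(s) = -alpha * log(1 - s), saddlepoint ŝ = 1 - alpha/x.
alpha = 3.0
K = lambda s: -alpha * math.log(1 - s)
K2 = lambda s: alpha / (1 - s) ** 2
s_hat = lambda x: 1 - alpha / x

x = 2.5
approx = saddlepoint_density(x, K, K2, s_hat)
exact = x ** (alpha - 1) * math.exp(-x) / math.gamma(alpha)
print(approx, exact)  # the two agree to within a few percent
```

For the Gamma law the saddlepoint approximation reproduces the exact density up to a Stirling-formula correction factor, which is why the relative error here is only about 3%.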
Approximate maximizers of intricacy functionals
Buzzi, Jerome; Zambotti, Lorenzo
2009-01-01
G. Edelman, O. Sporns, and G. Tononi introduced in theoretical biology the neural complexity of a family of random variables. This functional is a special case of intricacy, i.e., an average of the mutual information of subsystems whose weights have good mathematical properties. Moreover, its maximum value grows at a definite speed with the size of the system. In this work, we compute exactly this speed of growth by building "approximate maximizers" subject to an entropy condition. These appr...
Stochastic approximation algorithms and applications
Kushner, Harold J
1997-01-01
In recent years algorithms of the stochastic approximation type have found applications in new and diverse areas, and new techniques have been developed for proofs of convergence and rate of convergence. The actual and potential applications in signal processing have exploded. New challenges have arisen in applications to adaptive control. This book presents a thorough coverage of the ODE method used to analyze these algorithms.
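The recursions analyzed by the ODE method follow the classical Robbins-Monro scheme; a minimal sketch (a generic root-finding example, not code from the book):

```python
import random

def robbins_monro(g, theta0, steps=20000, seed=0):
    """Robbins-Monro stochastic approximation:
    theta_{k+1} = theta_k - a_k * G_k, where G_k is a noisy observation
    of g(theta_k) and a_k = 1/(k+1) satisfies the classical step-size
    conditions (sum a_k = inf, sum a_k^2 < inf)."""
    rng = random.Random(seed)
    theta = theta0
    for k in range(steps):
        noisy = g(theta) + rng.gauss(0.0, 1.0)  # observation corrupted by noise
        theta -= noisy / (k + 1)
    return theta

# Find the root of g(theta) = theta - 2 from noisy evaluations only.
root = robbins_monro(lambda t: t - 2.0, theta0=0.0)
print(root)  # close to 2
```

The ODE method interprets this recursion as a noisy Euler discretization of dθ/dt = -g(θ), whose stable equilibrium is the sought root.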
Quantum Tunneling Beyond Semiclassical Approximation
Banerjee, Rabin; Majhi, Bibhas Ranjan
2008-01-01
Hawking radiation as tunneling by the Hamilton-Jacobi method beyond the semiclassical approximation is analysed. We compute all quantum corrections in the single particle action, revealing that these are proportional to the usual semiclassical contribution. We show that a simple choice of the proportionality constants reproduces the one loop back reaction effect in the spacetime, found by conformal field theory methods, which modifies the Hawking temperature of the black hole. Using the law of black hole mechanics we give the corrections to the Bekenstein-Hawking area law following from the modified Hawking temperature. Some examples are explicitly worked out.
Approximate quantum and acoustic cloaking
Greenleaf, Allan; Lassas, Matti; Uhlmann, Gunther
2008-01-01
At any energy E > 0, we construct a sequence of bounded potentials $V^E_{n}$, $n\in\mathbb{N}$, supported in an annular region $B_{out}\setminus B_{inn}$ in three-space, which act as approximate cloaks for solutions of Schrödinger's equation: For any potential $V_0\in L^\infty(B_{inn})$ such that E is not a Neumann eigenvalue of $-\Delta+V_0$ in $B_{inn}$, the scattering amplitudes $a_{V_0+V_n^E}(E,\theta,\omega)\to 0$ as $n\to\infty$. The $V^E_{n}$ thus not only form a family of approximately transparent potentials, but also function as approximate invisibility cloaks in quantum mechanics. On the other hand, for $E$ close to interior eigenvalues, resonances develop and there exist almost trapped states concentrated in $B_{inn}$. We derive the $V_n^E$ from singular, anisotropic transformation optics-based cloaks by a de-anisotropization procedure, which we call isotropic transformation optics. This technique uses truncation, inverse homogenization and spectral theory to produce nonsingular, isotropic app...
Computer Experiments for Function Approximations
Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C
2007-10-15
This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
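Of the sampling methods compared, the Latin hypercube design is easy to sketch: each of the n equal-width strata of [0, 1) is hit exactly once in every dimension. This minimal version omits the maximin and orthogonal-array refinements mentioned in the study:

```python
import random

def latin_hypercube(n, dims, rng):
    """Basic Latin hypercube sample of n points in [0, 1)^dims: per
    dimension, one point falls in each of the n equal-width strata."""
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)                  # random assignment of strata
        cols.append([(p + rng.random()) / n for p in perm])  # jitter in stratum
    return list(zip(*cols))                # n points as tuples

rng = random.Random(42)
pts = latin_hypercube(10, 2, rng)
# Stratification check: each dimension has exactly one point per decile.
for d in range(2):
    strata = sorted(int(p[d] * 10) for p in pts)
    assert strata == list(range(10))
print(pts[0])
```

Compared to plain Monte Carlo, the stratification guarantees even one-dimensional coverage with the same number of function evaluations.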
In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev’s toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev’s toric code or to the topological color codes. Highlights:
* We show that Kitaev’s toric codes are equivalent to homological stabilizer codes on 4-valent graphs.
* We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs.
* We find and classify all 2D homological stabilizer codes.
* We find optimal codes among the homological stabilizer codes.
Radiative transfer in disc galaxies $-$ V. The accuracy of the KB approximation
Lee, Dukhang; Seon, Kwang-Il; Camps, Peter; Verstocken, Sam; Han, Wonyong
2016-01-01
We investigate the accuracy of an approximate radiative transfer technique that was first proposed by Kylafis & Bahcall (hereafter the KB approximation) and has been popular in modelling dusty late-type galaxies. We compare realistic galaxy models calculated with the KB approximation with those of the three-dimensional Monte Carlo radiative transfer code SKIRT. The SKIRT code fully takes into account the contribution of multiple scattering, whereas the KB approximation calculates only the singly scattered intensity and approximates the multiple scattering components. We find that the KB approximation gives fairly accurate results if optically thin, face-on galaxies are considered. However, for highly inclined ($i \gtrsim 85^{\circ}$) and/or optically thick (central face-on optical depth $\gtrsim 1$) galaxy models, the approximation can give rise to substantial errors, sometimes up to $\gtrsim 40\%$. Moreover, it is also found that the KB approximation is not always physical, sometimes producing infinite inten...
Product Approximation of Grade and Precision
ZHANG Xian-yong; MO Zhi-wen
2005-01-01
The normal graded approximation and variable precision approximation are defined in an approximation space. The relationship between graded approximation and variable precision approximation is studied, and an important conversion formula between them is obtained. The product approximation of grade and precision is defined and its basic properties are studied.
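The two kinds of lower approximation can be sketched on a partitioned universe. This is an illustrative formulation under common rough-set definitions; the paper's exact definitions and the conversion formula may differ:

```python
def vp_lower(blocks, X, beta):
    """Variable-precision lower approximation: union of equivalence
    blocks whose inclusion degree |B ∩ X| / |B| is at least beta."""
    X = set(X)
    return set().union(*([set(B) for B in blocks
                          if len(set(B) & X) / len(B) >= beta] or [set()]))

def graded_lower(blocks, X, k):
    """Graded lower approximation (one common formulation): blocks with
    at most k elements outside X, i.e. a grade-k tolerance."""
    X = set(X)
    return set().union(*([set(B) for B in blocks
                          if len(set(B) - X) <= k] or [set()]))

blocks = [{1, 2}, {3, 4, 5}, {6}]   # a partition of the universe
X = {1, 3, 4}                        # the concept to approximate
print(sorted(vp_lower(blocks, X, 0.6)))    # → [3, 4, 5]: 2/3 ≥ 0.6
print(sorted(graded_lower(blocks, X, 1)))  # → [1, 2, 3, 4, 5, 6]: one exception allowed
```

With grade k = 0 the graded lower approximation reduces to the classical lower approximation, which is empty here since no block lies entirely inside X.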
Usage of burnt fuel isotopic compositions from engineering codes in Monte-Carlo code calculations
Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I. [Nuclear Research Centre "Kurchatov Institute", Moscow (Russian Federation)
2015-09-15
A burn-up calculation of VVER cores by a Monte-Carlo code is a complex process that requires large computational costs. This fact makes the use of Monte-Carlo codes complicated for design and operating calculations. Previously prepared isotopic compositions are proposed for use in Monte-Carlo code (MCU) calculations of different states of a VVER core with burnt fuel. The isotopic compositions are calculated by an approximation method based on a spectral functional and on reference isotopic compositions calculated by engineering codes (TVS-M, PERMAK-A). The multiplication factors and power distributions of FAs and of a VVER with infinite height are calculated in this work by the Monte-Carlo code MCU using the previously prepared isotopic compositions. The MCU calculation data were compared with the data obtained by the engineering codes.
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
Turbo Codes Extended with Outer BCH Code
Andersen, Jakob Dahl
1996-01-01
The "error floor" observed in several simulations with turbo codes is verified by calculation of an upper bound on the bit error rate for the ensemble of all interleavers. An easy way to calculate the weight enumerator used in this bound is also presented. An extended coding scheme is proposed, including an outer BCH code correcting a few bit errors.
Generalized gradient approximation made simple
Generalized gradient approximations E_xc = ∫ d³r f(n↑, n↓, ∇n↑, ∇n↓) for the exchange-correlation energy typically surpass the accuracy of the local spin density approximation and compete with standard quantum-chemical methods in electronic-structure calculations. But the derivation and analytic expression for the integrand f tend to be complicated and over-parametrized. We present a simple derivation of a simple but accurate expression for f, involving no parameter other than fundamental constants. The derivation invokes only general ideas (not details) of the real-space cutoff construction, and agrees closely with the result of this construction. Besides its greater simplicity, this PBE96 functional has other advantages over PW91: (1) The correct behavior of the correlation energy is recovered under uniform scaling to the high-density limit. (2) The linear response of the uniform electron gas agrees with the accurate local spin density prediction.
Many-body effects are hidden in the universal density functional. The interaction of degenerate states via two-body operators, such as the electron-electron repulsion (for describing multiplets or the interaction of molecular fragments at large separations), is thus not explicitly considered in the Kohn-Sham scheme. In practice the density functionals have to be approximated, and there is a fundamental difficulty which arises in the case of degeneracy. While density functionals should be universal, the effect of degeneracy is linked to the potential characteristic of the atom, molecule, or crystal. There are, however, several possibilities to treat degeneracy effects within density functional theory, a few of which will be discussed. These take profit of the use of two-body operators, which can be, but need not be, the physical electron-electron interaction.
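For exchange, the simple parameter-free form this derivation produces can be written compactly as an enhancement factor over the local approximation (the published PBE form; the constants are quoted from the literature, not from this abstract):

```latex
E_x^{\mathrm{GGA}}[n] = \int d^3r\; n\,\epsilon_x^{\mathrm{unif}}(n)\, F_x(s),
\qquad s = \frac{|\nabla n|}{2 k_F n},
\\[4pt]
F_x^{\mathrm{PBE}}(s) = 1 + \kappa - \frac{\kappa}{1 + \mu s^2/\kappa},
\qquad \kappa = 0.804, \quad \mu \approx 0.21951,
```

where $\kappa$ is fixed by the Lieb-Oxford bound and $\mu$ by the linear response of the uniform electron gas, the two exact conditions highlighted in the abstract.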
Compressive Hyperspectral Imaging via Approximate Message Passing
Tan, Jin; Ma, Yanting; Rueda, Hoover; Baron, Dror; Arce, Gonzalo R.
2016-03-01
We consider a compressive hyperspectral imaging reconstruction problem, where three-dimensional spatio-spectral information about a scene is sensed by a coded aperture snapshot spectral imager (CASSI). The CASSI imaging process can be modeled as suppressing three-dimensional coded and shifted voxels and projecting these onto a two-dimensional plane, such that the number of acquired measurements is greatly reduced. On the other hand, because the measurements are highly compressive, the reconstruction process becomes challenging. We previously proposed a compressive imaging reconstruction algorithm that is applied to two-dimensional images based on the approximate message passing (AMP) framework. AMP is an iterative algorithm that can be used in signal and image reconstruction by performing denoising at each iteration. We employed an adaptive Wiener filter as the image denoiser, and called our algorithm "AMP-Wiener." In this paper, we extend AMP-Wiener to three-dimensional hyperspectral image reconstruction, and call it "AMP-3D-Wiener." Applying the AMP framework to the CASSI system is challenging, because the matrix that models the CASSI system is highly sparse, and such a matrix is not suitable for AMP and makes it difficult for AMP to converge. Therefore, we modify the adaptive Wiener filter and employ a technique called damping to resolve the divergence issue of AMP. Our approach is adaptive in nature, and the numerical experiments show that AMP-3D-Wiener outperforms existing widely-used algorithms such as gradient projection for sparse reconstruction (GPSR) and two-step iterative shrinkage/thresholding (TwIST) given a similar amount of runtime. Moreover, in contrast to GPSR and TwIST, AMP-3D-Wiener need not tune any parameters, which simplifies the reconstruction process.
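A generic AMP iteration with damping can be sketched for sparse recovery. This uses a soft-threshold denoiser and a random Gaussian matrix as stand-ins; the paper's AMP-3D-Wiener uses an adaptive Wiener-filter denoiser and the highly sparse CASSI matrix:

```python
import numpy as np

def soft(v, t):
    """Soft-threshold denoiser (stand-in for the adaptive Wiener filter)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(A, y, iters=50, damp=0.7):
    """Generic damped AMP for y = A x + noise (a sketch of the framework,
    not the paper's AMP-3D-Wiener)."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        tau = np.sqrt(np.mean(z ** 2))               # effective noise level
        x_new = soft(x + A.T @ z, tau)               # denoise the pseudo-data
        onsager = z * (np.count_nonzero(x_new) / m)  # Onsager correction term
        z_new = y - A @ x_new + onsager
        x = damp * x_new + (1 - damp) * x            # damping: blend with the
        z = damp * z_new + (1 - damp) * z            # previous iterate
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 100, 10                               # 10-sparse signal, 2x undersampled
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                       # noiseless measurements
x_hat = amp(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative error
```

The damping step is exactly the convex blending with the previous iterate that the paper uses to keep AMP from diverging on ill-suited measurement matrices.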
Fermion Tunneling Beyond Semiclassical Approximation
Majhi, Bibhas Ranjan
2008-01-01
Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed by Banerjee and Majhi (J. High Energy Phys. 06 (2008) 095) for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analysed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related with the trace anomaly.
Rollout Sampling Approximate Policy Iteration
Dimitrakakis, Christos
2008-01-01
Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a supervised learning problem. This paper proposes variants of an improved policy iteration scheme which addresses the core sampling problem in evaluating a policy through simulation as a multi-armed bandit machine. The resulting algorithm offers performance comparable to that of the previous algorithm, achieved, however, with significantly less computational effort. An order of magnitude improvement is demonstrated experimentally in two standard reinforcement learning domains: inverted pendulum and mountain-car.
The distorted wave Glauber approximation
A solution of the Pauli equation with non-zero potentials defines quantum scalar and vector potentials and magnetic fields and quantum trajectories. If a line integral of perturbing potentials and fields along these quantum trajectories is added to the phase of this solution, an approximate solution of the perturbed equation is found. Glauber theory is a special case and the conditions of applicability are similar. Applications given start from the harmonic oscillator and from a homogeneous magnetic field and add a perturbation. (author)
The structural physical approximation conjecture
Shultz, Fred
2016-01-01
It was conjectured that the structural physical approximation (SPA) of an optimal entanglement witness is separable (or equivalently, that the SPA of an optimal positive map is entanglement breaking). This conjecture was disproved, first for indecomposable maps and more recently for decomposable maps. The arguments in both cases are sketched along with important related results. This review includes background material on topics including entanglement witnesses, optimality, duality of cones, decomposability, and the statement and motivation for the SPA conjecture so that it should be accessible for a broad audience.
Rotating wave approximation and entropy
This Letter studies composite quantum systems, like atom-cavity systems and coupled optical resonators, in the absence of external driving by resorting to methods from quantum field theory. Going beyond the rotating wave approximation, it is shown that the usually neglected counter-rotating part of the Hamiltonian relates to the entropy operator and generates an irreversible time evolution. The vacuum state of the system is shown to evolve into a generalized coherent state exhibiting entanglement of the modes in which the counter-rotating terms are expressed. Possible consequences at observational level in quantum optics experiments are currently under study.
Approximation of Surfaces by Cylinders
Randrup, Thomas
1998-01-01
We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal projection of the surface onto this plane, a reference curve is determined by use of methods for thinning of binary images. Finally, the cylinder surface is constructed as follows: the directrix of the cylinder surface is determined by a least squares method minimizing the distance to the points in the...
Park, Brooke Anderson; Wright, Henry
2012-01-01
PatCon code was developed to help mission designers run trade studies on launch and arrival times for any given planet. Initially developed in Fortran, the required inputs included launch date, arrival date, and other orbital parameters of the launch planet and arrival planets at the given dates. These parameters include the position of the planets, the eccentricity, semi-major axes, argument of periapsis, ascending node, and inclination of the planets. With these inputs, a patched conic approximation is used to determine the trajectory. The patched conic approximation divides the planetary mission into three parts: (1) the departure phase, in which the two relevant bodies are Earth and the spacecraft, and where the trajectory is a departure hyperbola with Earth at the focus; (2) the cruise phase, in which the two bodies are the Sun and the spacecraft, and where the trajectory is a transfer ellipse with the Sun at the focus; and (3) the arrival phase, in which the two bodies are the target planet and the spacecraft, where the trajectory is an arrival hyperbola with the planet as the focus.
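The cruise phase described above can be sketched with a minimal vis-viva computation for a transfer ellipse tangent to two circular heliocentric orbits (a Hohmann-style transfer). The function name and the circular, coplanar orbit assumption are mine, not PatCon's; real trajectories use the actual orbital elements listed above.

```python
import math

MU_SUN = 1.32712440018e11  # km^3/s^2, gravitational parameter of the Sun
AU = 1.495978707e8         # km

def hohmann_vinf(r_dep_km, r_arr_km):
    """Cruise phase of a patched conic: a transfer ellipse about the Sun
    tangent to both circular planetary orbits. Returns the hyperbolic
    excess speeds (departure, arrival) in km/s."""
    a = 0.5 * (r_dep_km + r_arr_km)            # semi-major axis of ellipse
    v_dep_circ = math.sqrt(MU_SUN / r_dep_km)  # planet's circular speed
    v_arr_circ = math.sqrt(MU_SUN / r_arr_km)
    # Vis-viva speed on the transfer ellipse at each patch point.
    v_peri = math.sqrt(MU_SUN * (2.0 / r_dep_km - 1.0 / a))
    v_apo = math.sqrt(MU_SUN * (2.0 / r_arr_km - 1.0 / a))
    return v_peri - v_dep_circ, v_arr_circ - v_apo

# Earth (1 AU) to a Mars-like orbit (1.524 AU), circular coplanar assumed.
vinf_dep, vinf_arr = hohmann_vinf(1.0 * AU, 1.524 * AU)
```

The excess speeds feed the departure and arrival hyperbolas of the patched conic, where the planet replaces the Sun as the focus.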
Wavelet Approximation in Data Assimilation
Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved, and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
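The truncation idea can be sketched with a hand-rolled orthonormal Haar transform: transform, keep only the largest few percent of coefficients, and measure how much energy survives. The signal, the 6% keep fraction, and all names below are illustrative, not the assimilation system's.

```python
import math

def haar(signal):
    """Full multilevel orthonormal Haar transform; length must be a
    power of two. Energy (sum of squares) is preserved exactly."""
    out = list(signal)
    n = len(out)
    while n > 1:
        half = n // 2
        s = [(out[2 * i] + out[2 * i + 1]) / 2 ** 0.5 for i in range(half)]
        d = [(out[2 * i] - out[2 * i + 1]) / 2 ** 0.5 for i in range(half)]
        out[:n] = s + d  # smooth part first, details after
        n = half
    return out

def compress(coeffs, keep_frac):
    """Zero all but the largest-magnitude coefficients."""
    k = max(1, int(len(coeffs) * keep_frac))
    thresh = sorted((abs(c) for c in coeffs), reverse=True)[k - 1]
    return [c if abs(c) >= thresh else 0.0 for c in coeffs]

# A smooth, localized "correlation-like" field: a Gaussian bump.
sig = [math.exp(-((i - 64) / 10.0) ** 2) for i in range(128)]
c = haar(sig)
c6 = compress(c, 0.06)  # keep ~6% of the coefficients
retained = sum(x * x for x in c6) / sum(x * x for x in c)
```

For a smooth localized field most of the energy sits in a handful of coarse-scale coefficients, which is why aggressive truncation still represents localized features well.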
Høholdt, Tom; Beelen, Peter; Ghorpade, Sudhir Ramakant
2010-01-01
We consider a new class of linear codes, called affine Grassmann codes. These can be viewed as a variant of generalized Reed-Muller codes and are closely related to Grassmann codes. We determine the length, dimension, and minimum distance of any affine Grassmann code. Moreover, we show that affine Grassmann codes have a large automorphism group and determine the number of minimum weight codewords.
Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip
2012-01-01
This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback into LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
Gao, Wen
2015-01-01
This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV
M. Markkanen
2008-08-01
Full Text Available We present a new class of alternating codes. Instead of the customary binary phase codes, the new codes utilize either p or p–1 phases, where p is a prime number. The first class of codes has code length p^{m}, where m is a positive integer, the second class has code length p–1. We give an actual construction algorithm, and explain the principles behind it. We handle a few specific examples in detail. The new codes offer an enlarged collection of code lengths for radar experiments.
Abraham, Nikhil
2015-01-01
Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code; this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill
Approximation Error Based Suitable Domain Search for Fractal Image Compression
Vijayshri Chaurasia
2010-02-01
Fractal image compression is a very advantageous technique in the field of image compression. The coding phase of this technique is very time consuming because of the computational expense of the suitable-domain search. In this paper we propose an approximation-error-based speed-up technique that uses feature extraction. The proposed scheme reduces the number of range-domain comparisons by a significant amount and gives improved time performance.
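A minimal 1-D sketch of feature-based pruning of the domain search: compute a cheap feature for each block (variance here) and run the expensive least-squares affine match only on domains whose feature is close to the range block's. The feature choice, tolerance, and block shapes are illustrative assumptions; real codecs work on 2-D blocks with decimated, transformed domains.

```python
def variance(b):
    m = sum(b) / len(b)
    return sum((x - m) ** 2 for x in b) / len(b)

def match_error(r, d):
    """Least-squares error of approximating range block r by s*d + o."""
    n = len(r)
    md, mr = sum(d) / n, sum(r) / n
    cov = sum((x - md) * (y - mr) for x, y in zip(d, r))
    var = sum((x - md) ** 2 for x in d)
    s = cov / var if var else 0.0
    o = mr - s * md
    return sum((s * x + o - y) ** 2 for x, y in zip(d, r))

def best_domain(r, domains, tol):
    """Search only domains whose variance is within tol of the range
    block's, pruning hopeless range-domain comparisons."""
    vr = variance(r)
    best, best_err, tried = None, float("inf"), 0
    for i, d in enumerate(domains):
        if abs(variance(d) - vr) > tol:  # cheap feature test
            continue
        tried += 1                        # expensive match only here
        e = match_error(r, d)
        if e < best_err:
            best, best_err = i, e
    return best, best_err, tried

r = [1.0, 2.0, 3.0, 4.0]
domains = [[9.0, 9.0, 9.0, 9.0],   # flat: variance far from r's, pruned
           [5.0, 6.0, 7.0, 8.0],   # exact affine match (s=1, o=-4)
           [0.0, 0.0, 0.0, 10.0]]  # variance far from r's, pruned
best, err, tried = best_domain(r, domains, tol=0.5)
```

Only one of the three domains is actually compared, which is where the time saving comes from.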
Coupling CFD code with system code and neutron kinetic code
Vyskocil, Ladislav, E-mail: Ladislav.Vyskocil@ujv.cz; Macek, Jiri
2014-11-15
Highlights: • Coupling interface between CFD code Fluent and system code Athlet was created. • Athlet code is internally coupled with neutron kinetic code Dyn3D. • Explicit coupling of overlapped computational domains was used. • A coupled system of Athlet/Dyn3D+Fluent codes was successfully tested on a real case. - Abstract: The aim of this work was to develop the coupling interface between CFD code Fluent and system code Athlet internally coupled with neutron kinetic code Dyn3D. The coupling interface is intended for simulation of complex transients such as Main Steam Line Break scenarios, which cannot be modeled separately first by system and neutron kinetic code and then by CFD code, because of the feedback between the codes. In the first part of this article, the coupling method is described. Explicit coupling of overlapped computational domains is used in this work. The second part of the article presents a demonstration simulation performed by the coupled system of Athlet/Dyn3D and Fluent. The “Opening a Steam Dump to the Atmosphere” test carried out at the Temelin NPP (VVER-1000) was simulated by the coupled system. In this simulation, the primary and secondary circuits were modeled by Athlet, mixing in downcomer and lower plenum was simulated by Fluent and heat generation in the core was calculated by Dyn3D. The results of the simulation with Athlet/Dyn3D+Fluent were compared with the experimental data and the results from a calculation performed with Athlet/Dyn3D without Fluent.
Simple approximations for condensational growth
Kostinski, A B [Michigan Technological University, 1400 Townsend Drive, Houghton, MI 49931-1200 (United States)], E-mail: alex.kostinski@mtu.edu
2009-01-15
A simple geometric argument relating to the liquid water content of clouds is given. The phase relaxation time and the nature of the quasi-steady approximation for the diffusional growth of cloud drops are elucidated directly in terms of water vapor concentration. Spatial gradients of vapor concentration, inherent in the notion of quasi-steady growth, are discussed and we argue for an occasional reversal of the traditional point of view: rather than a drop growing in response to a given supersaturation, the observed values of the supersaturation in clouds are the result of a vapor field adjusting to droplet growth. Our perspective is illustrated by comparing the exponential decay of condensation trails with a quasi-steady regime of cirrus clouds. The role of aerosol loading in decreasing relaxation times and increasing the rate of growth of the liquid water content is also discussed.
Strong shock implosion, approximate solution
Fujimoto, Y.; Mishkin, E. A.; Alejaldre, C.
1983-01-01
The self-similar, center-bound motion of a strong spherical, or cylindrical, shock wave moving through an ideal gas with a constant γ = c_p/c_v is considered, and a linearized, approximate solution is derived. An X, Y phase plane of the self-similar solution is defined, and the representative curve of the system behind the shock front is replaced by a straight line connecting the mapping of the shock front with that of its tail. The reduced pressure P(ξ), density R(ξ) and velocity U1(ξ) are found in closed, quite accurate, form. Comparison with numerically obtained results, for γ = 5/3 and γ = 7/5, is shown.
Stochastic Approximation with Averaging Innovation
Laruelle, Sophie
2010-01-01
The aim of the paper is to establish a convergence theorem for multi-dimensional stochastic approximation in a setting with innovations satisfying some averaging properties, and to study some applications. The averaging assumptions allow us to unify the framework where the innovations are generated (to solve problems from numerical probability) and the one with exogenous innovations (market data, output of a "device", e.g. an Euler scheme) with stationary or ergodic properties. We propose several fields of application with random innovations or quasi-random numbers. In particular, we provide in both settings a rule to tune the step of the algorithm. Finally, we illustrate our results on five examples, notably in finance.
Benchmarking Declarative Approximate Selection Predicates
Hassanzadeh, Oktie
2009-01-01
Declarative data quality has been an active research topic. The fundamental principle behind a declarative approach to data quality is the use of declarative statements to realize data quality primitives on top of any relational data source. A primary advantage of such an approach is the ease of use and integration with existing applications. Several similarity predicates have been proposed in the past for common quality primitives (approximate selections, joins, etc.) and have been fully expressed using declarative SQL statements. In this thesis, new similarity predicates are proposed along with their declarative realization, based on notions of probabilistic information retrieval. Then, full declarative specifications of previously proposed similarity predicates in the literature are presented, grouped into classes according to their primary characteristics. Finally, a thorough performance and accuracy study comparing a large number of similarity predicates for data cleaning operations is performed.
Narrow-width approximation accuracy
A study of general properties of the narrow-width approximation (NWA) with polarization/spin decorrelation is presented. We prove, for sufficiently inclusive differential rates of arbitrary resonant decay or scattering processes with an on-shell intermediate state decaying via a cubic or quartic vertex, that decorrelation effects vanish and the NWA is of order Γ. Its accuracy is then determined numerically for all resonant 3-body decays involving scalars, spin-1/2 fermions or vector bosons. We specialize the general results to MSSM benchmark scenarios. Significant off-shell corrections can occur, similar in size to QCD corrections. We qualify the configurations in which a combined consideration is advisable. For this purpose, we also investigate process-independent methods to improve the NWA.
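The order-Γ accuracy claim can be checked numerically for a single resonance: integrate the squared Breit-Wigner propagator over a finite window and compare with the narrow-width limit π/(MΓ), which follows from replacing the propagator by a delta function. The mass, width, and integration window below are illustrative (Z-boson-like), not taken from the study.

```python
import math

def bw_integral(M, Gamma, lo, hi, n=200000):
    """Trapezoidal integral of 1/((s - M^2)^2 + M^2 Gamma^2)
    over the invariant-mass-squared window [lo, hi]."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        s = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w / ((s - M * M) ** 2 + (M * Gamma) ** 2)
    return total * h

M, Gamma = 91.19, 2.49                     # Z-like resonance (GeV)
exact = bw_integral(M, Gamma, (M - 30.0) ** 2, (M + 30.0) ** 2)
nwa = math.pi / (M * Gamma)                # narrow-width (delta-function) limit
rel_err = abs(exact - nwa) / nwa
```

For Γ/M of a few percent the relative error of the NWA is also at the percent level, consistent with the order-Γ statement above.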
Reconstruction within the Zeldovich approximation
White, Martin
2015-01-01
The Zeldovich approximation, first-order Lagrangian perturbation theory, provides a good description of the clustering of matter and galaxies on large scales. The acoustic feature in the large-scale correlation function of galaxies imprinted by sound waves in the early Universe has been successfully used as a `standard ruler' to constrain the expansion history of the Universe. The standard ruler can be improved if a process known as density field reconstruction is employed. In this paper we develop the Zeldovich formalism to compute the correlation function of biased tracers in both real- and redshift-space using the simplest reconstruction algorithm with a Gaussian kernel and compare to N-body simulations. The model qualitatively describes the effects of reconstruction on the simulations, though its quantitative success depends upon how redshift-space distortions are handled in the reconstruction algorithm.
Approximating metal-insulator transitions
Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej
2015-12-01
We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate metal-insulator transitions (MIT) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.
Diophantine approximations and Diophantine equations
Schmidt, Wolfgang M
1991-01-01
"This book by a leading researcher and masterly expositor of the subject studies diophantine approximations to algebraic numbers and their applications to diophantine equations. The methods are classical, and the results stressed can be obtained without much background in algebraic geometry. In particular, Thue equations, norm form equations and S-unit equations, with emphasis on recent explicit bounds on the number of solutions, are included. The book will be useful for graduate students and researchers." (L'Enseignement Mathematique) "The rich Bibliography includes more than hundred references. The book is easy to read, it may be a useful piece of reading not only for experts but for students as well." Acta Scientiarum Mathematicarum
Dodgson's Rule Approximations and Absurdity
McCabe-Dansted, John C
2010-01-01
With the Dodgson rule, cloning the electorate can change the winner, which Young (1977) considers an "absurdity". Removing this absurdity results in a new rule (Fishburn, 1977) for which we can compute the winner in polynomial time (Rothe et al., 2003), unlike the traditional Dodgson rule. We call this rule DC and introduce two new related rules (DR and D&). Dodgson did not explicitly propose the "Dodgson rule" (Tideman, 1987); we argue that DC and DR are better realizations of the principle behind the Dodgson rule than the traditional Dodgson rule. These rules, especially D&, are also effective approximations to the traditional Dodgson's rule. We show that, unlike the rules we have considered previously, the DC, DR and D& scores differ from the Dodgson score by no more than a fixed amount given a fixed number of alternatives, and thus these new rules converge to Dodgson under any reasonable assumption on voter behaviour, including the Impartial Anonymous Culture assumption.
Approximate analytic solutions to the NPDD: Short exposure approximations
Close, Ciara E.; Sheridan, John T.
2014-04-01
There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short low intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.
In this report some validation tests for the TESEO code are described. The TESEO code was developed at the ENEA Clementel Center in the framework of the C2RV code sequence. This code sequence produces multigroup resonance cross sections for fast reactor analysis. It consists of the codes TESEO, MC2-II, GERES, ANISN and MEDIL. The TESEO code processes basic nuclear data in ENDF-B format and produces an ultrafine-group (2082 groups) cross section library for the MC2-II code. To validate the TESEO algorithms, the data produced by the TESEO code were compared with the data produced by other well-tested codes which use different algorithms. No substantial differences were found between these data and the data produced by the TESEO code. The TESEO algorithms showed high reliability. A detailed study of TESEO calculation options was carried out; their use and functions are described to inform the user of the code.
Locally Orderless Registration Code
2012-01-01
This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks installed and is provided for 64 bit on Mac, Linux and Windows.
Fragouli, C.; Soljanin, E.
2004-01-01
This paper proposes deterministic algorithms for decentralized network coding. Decentralized coding allows the coding operations at network nodes to be specified locally, without knowledge of the overall network topology, and accommodates future changes in the network such as the addition of receivers. To the best of our knowledge, these are the first deterministic decentralized algorithms proposed for network coding.
Multidimensional stochastic approximation Monte Carlo.
Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present g(E_{1},E_{2}). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_{1}+E_{2}) from g(E_{1},E_{2}). PMID:27415383
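A toy sketch of the SAMC update on a discrete system: a random walk over microstates, accepted with probability min(1, g(E_x)/g(E_y)), with the log density of states of the visited energy bin raised by a decaying gain at every step. The gain sequence, move set, and toy energies are my assumptions; the paper's applications are polymer and block-spin models.

```python
import math
import random

def samc(energies, n_steps, t0=1000.0, seed=1):
    """Stochastic Approximation Monte Carlo on a toy ring of microstates,
    estimating log g(E) over the discrete energy bins (flat histogram)."""
    rng = random.Random(seed)
    bins = sorted(set(energies))
    idx = {e: i for i, e in enumerate(bins)}
    log_g = [0.0] * len(bins)
    x = 0
    for t in range(1, n_steps + 1):
        y = (x + rng.choice([-1, 1])) % len(energies)
        # Accept with min(1, g(E_x)/g(E_y)): biases the walk toward
        # energies whose current g-estimate is small (rare energies).
        if math.log(rng.random() + 1e-300) < log_g[idx[energies[x]]] - log_g[idx[energies[y]]]:
            x = y
        gamma = t0 / max(t0, t)          # SAMC gain sequence, -> 0
        log_g[idx[energies[x]]] += gamma  # raise the visited bin
    base = log_g[0]
    return [lg - base for lg in log_g]   # normalize to the first bin

# Toy system: 8 microstates, energy 0 occurs 6 times, energy 1 twice,
# so the true log g(1) - log g(0) = log(2/6) ~ -1.1.
energies = [0, 0, 0, 1, 0, 1, 0, 0]
est = samc(energies, 200000)
```

The estimated log g of the rarer energy bin comes out below that of the common one, reproducing the known ratio of state counts up to stochastic error.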
Decision analysis with approximate probabilities
Whalen, Thomas
1992-01-01
This paper concerns decisions under uncertainty in which the probabilities of the states of nature are only approximately known. Decision problems involving three states of nature are studied, because some key issues do not arise in two-state problems, while probability spaces with more than three states of nature are essentially impossible to graph. The primary focus is on two levels of probabilistic information. In one level, the three probabilities are separately rounded to the nearest tenth. This can lead to sets of rounded probabilities which add up to 0.9, 1.0, or 1.1. In the other level, probabilities are rounded to the nearest tenth in such a way that the rounded probabilities are forced to sum to 1.0. For comparison, six additional levels of probabilistic information, previously analyzed, were also included in the present analysis. A simulation experiment compared four criteria for decision making using linearly constrained probabilities (Maximin, Midpoint, Standard Laplace, and Extended Laplace) under the eight different levels of information about probability. The Extended Laplace criterion, which uses a second-order maximum entropy principle, performed best overall.
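The second information level (rounding to tenths while forcing the rounded values to sum to 1.0) can be realized with a largest-remainder rule; the function below is an illustrative sketch, not the paper's procedure. Note that rounding each probability separately can sum to 1.1 here, which the constrained rule avoids.

```python
def round_to_sum(probs, step=0.1):
    """Round probabilities to multiples of `step` while forcing the
    rounded values to sum to exactly 1.0 (largest-remainder method)."""
    n = round(1.0 / step)                      # total units to distribute
    scaled = [p / step for p in probs]
    floors = [int(s) for s in scaled]
    remainder = n - sum(floors)
    # Give the leftover units to the entries with the largest fractions.
    order = sorted(range(len(probs)),
                   key=lambda i: scaled[i] - floors[i], reverse=True)
    for i in order[:remainder]:
        floors[i] += 1
    return [f * step for f in floors]

# Naive rounding of [0.16, 0.17, 0.67] gives 0.2 + 0.2 + 0.7 = 1.1;
# the constrained rounding sums to exactly 1.0.
rounded = round_to_sum([0.16, 0.17, 0.67])
```

The same idea extends to any grid (e.g. hundredths) by changing `step`.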
Function approximation in inhibitory networks.
Tripp, Bryan; Eliasmith, Chris
2016-05-01
In performance-optimized artificial neural networks, such as convolutional networks, each neuron makes excitatory connections with some of its targets and inhibitory connections with others. In contrast, physiological neurons are typically either excitatory or inhibitory, not both. This is a puzzle, because it seems to constrain computation, and because there are several counter-examples that suggest that it may not be a physiological necessity. Parisien et al. (2008) showed that any mixture of excitatory and inhibitory functional connections could be realized by a purely excitatory projection in parallel with a two-synapse projection through an inhibitory population. They showed that this works well with ratios of excitatory and inhibitory neurons that are realistic for the neocortex, suggesting that perhaps the cortex efficiently works around this apparent computational constraint. Extending this work, we show here that mixed excitatory and inhibitory functional connections can also be realized in networks that are dominated by inhibition, such as those of the basal ganglia. Further, we show that the function-approximation capacity of such connections is comparable to that of idealized mixed-weight connections. We also study whether such connections are viable in recurrent networks, and find that such recurrent networks can flexibly exhibit a wide range of dynamics. These results offer a new perspective on computation in the basal ganglia, and also perhaps on inhibitory networks within the cortex. PMID:26963256
Andersen, Christian Ulrik
2007-01-01
code, etc.). The presentation relates this artistic fascination of code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation to the computer's materiality. Cramer is thus the voice of a new 'code... discusses code as the artist's material and, further, formulates a critique of Cramer. The seductive magic in computer-generated art does not lie in the magical expression, but nor does it lie in the code/material/text itself. It lies in the nature of code to do something, as if it was magic: in the...
Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark
2012-01-01
A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…
Markkanen, Markku
2007-01-01
This work introduces a method for constructing polyphase alternating codes in which the length of a code transmission cycle can be $p^m$ or $p-1$, where $p$ is a prime number and $m$ is a positive integer. The relevant properties leading to the construction of alternating codes and the algorithm for generating them are described. Examples of all practical and some less practical polyphase code lengths are given.
2008-01-01
Quantum error correcting codes are indispensable for quantum information processing and quantum computation. In 1995 and 1996, Shor and Steane gave the first several examples of quantum codes from classical error correcting codes. The construction of efficient quantum codes is now an active multi-discipline research field. In this paper we review several known constructions of quantum codes and present some examples.
Fuzzy Set Approximations in Fuzzy Formal Contexts
Mingwen Shao; Shiqing Fan
2006-01-01
In this paper, a kind of multi-level formal concept is introduced. Based on the proposed multi-level formal concept, we present a pair of rough fuzzy set approximations within fuzzy formal contexts. By the proposed rough fuzzy set approximations, we can approximate a fuzzy set at different precision levels. We discuss the properties of the proposed approximation operators in detail.
Rateless Coding for MIMO Block Fading Channels
Fan, Yijia; Erkip, Elza; Poor, H Vincent
2008-01-01
In this paper the performance limits and design principles of rateless codes over fading channels are studied. The diversity-multiplexing tradeoff (DMT) is used to analyze the system performance for all possible transmission rates. It is revealed from the analysis that the design of such rateless codes follows the design principle of approximately universal codes for parallel multiple-input multiple-output (MIMO) channels, in which each sub-channel is a MIMO channel. More specifically, it is shown that for a single-input single-output (SISO) channel, the previously developed permutation codes of unit length for parallel channels having rate LR can be transformed directly into rateless codes of length L having multiple rate levels (R, 2R, . . ., LR), to achieve the DMT performance limit.
Topological Code Architectures for Quantum Computation
Cesare, Christopher Anthony
This dissertation is concerned with quantum computation using many-body quantum systems encoded in topological codes. The interest in these topological systems has increased in recent years as devices in the lab begin to reach the fidelities required for performing arbitrarily long quantum algorithms. The most well-studied system, Kitaev's toric code, provides both a physical substrate for performing universal fault-tolerant quantum computations and a useful pedagogical tool for explaining the way other topological codes work. In this dissertation, I first review the necessary formalism for quantum information and quantum stabilizer codes, and then I introduce two families of topological codes: Kitaev's toric code and Bombin's color codes. I then present three chapters of original work. First, I explore the distinctness of encoding schemes in the color codes. Second, I introduce a model of quantum computation based on the toric code that uses adiabatic interpolations between static Hamiltonians with gaps constant in the system size. Lastly, I describe novel state distillation protocols that are naturally suited for topological architectures and show that they provide resource savings in terms of the number of required ancilla states when compared to more traditional approaches to quantum gate approximation.
M. Markkanen; Vierinen, J.; Markkanen, J.
2007-01-01
We present a new class of alternating codes. Instead of the customary binary phase codes, the new codes utilize either p or p–1 phases, where p is a prime number. The first class of codes has code length p^{m}, where m is a positive integer, the second class has code length p–1. We give an actual construction algorithm, and explain the principles behind it. We ...
Construction of Codes for Network Coding
Elsenhans, Andreas-Stephan; Wassermann, Alfred
2010-01-01
Based on ideas of Kötter and Kschischang we use constant dimension subspaces as codewords in a network. We show a connection to the theory of q-analogues of combinatorial designs, which has been studied by Braun, Kerber and Laue as a purely combinatorial object. For the construction of network codes we successfully modified methods (construction with prescribed automorphisms) originally developed for the q-analogues of combinatorial designs. We then give a special case of that method which allows the construction of network codes with a very large ambient space, and we also show how to decode such codes with a very small number of operations.
National Oceanic and Atmospheric Administration, Department of Commerce — Coded items are entered in the tiponline data entry program. The codes and their explanations are necessary in order to use the data
Gabrys, Ryan; Milenkovic, Olgica
2016-01-01
Motivated by charge balancing constraints for rank modulation schemes, we introduce the notion of balanced permutations and derive the capacity of balanced permutation codes. We also describe simple interleaving methods for permutation code constructions and show that they approach capacity.
This user's manual contains all the necessary information concerning the use of the SEVERO code. This computer code deals with the statistics of extremes: extreme winds, extreme precipitation, and flooding hazard risk analysis. (A.C.A.S.)
Benchmarking the starting points of the GW approximation for molecules
The GW approximation is nowadays being used to obtain accurate quasiparticle energies of atoms and molecules. In practice, the GW approximation is generally evaluated perturbatively, based on a prior self-consistent calculation within a simpler approximation. The final result thus depends on the choice of the self-consistent mean field chosen as a starting point. Using a recently developed GW code based on Gaussian basis functions, we benchmark a wide range of starting points for perturbative GW, including Hartree-Fock, LDA, PBE, PBE0, B3LYP, HSE06, BHandHLYP, CAM-B3LYP, and tuned CAM-B3LYP. In the evaluation of the ionization energy, the hybrid functionals are clearly superior starting points when compared to Hartree-Fock, LDA, or the semilocal approximations. Furthermore, among the hybrid functionals, the ones with the highest proportion of exact exchange usually perform best. Finally, the reliability of the frozen-core approximation, which allows for a considerable speedup of the calculations, is demonstrated. (authors)
On the TTB approximation for photon transport in MCNP
The three-dimensional, continuous-energy Monte Carlo code system MCNP 4 handles electron transport in addition to neutron and gamma-ray transport. Benchmark experiments involving bremsstrahlung from secondary electrons are analyzed with MCNP 4 in three cases: (1) without approximation for electron pair production, (2) with the TTB (thick-target bremsstrahlung) approximation for electron pair production, and (3) with full secondary electron transport. Bishop et al. measured the photon spectrum of 6.1 MeV gamma rays emitted from N-16 in reactor coolant and penetrating iron and lead. Johnson et al. measured scattered photon spectra and doses of capture gamma rays (∼8 MeV) emitted from titanium and nickel and penetrating iron, concrete, and lead. MCNP 4 calculations with secondary electron transport agree well with the measured values of these two benchmark experiments, whereas the TTB approximation overestimates in the penetration problem and underestimates in the backscattering problem. (M. Suetake)
Kubica, Aleksander; Yoshida, Beni; Pastawski, Fernando
2015-01-01
The topological color code and the toric code are two leading candidates for realizing fault-tolerant quantum computation. Here we show that the color code on a $d$-dimensional closed manifold is equivalent to multiple decoupled copies of the $d$-dimensional toric code up to local unitary transformations and adding or removing ancilla qubits. Our result not only generalizes the proven equivalence for $d=2$, but also provides an explicit recipe of how to decouple independent components of the ...
Bergstra, J. A.
2010-01-01
General definitions as well as rules of reasoning regarding control code production, distribution, deployment, and usage are described. The role of testing, trust, confidence and risk analysis is considered. A rationale for control code testing is sought and found for the case of safety critical embedded control code.
Bombin Palomo, Hector
2015-01-01
Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow the...
ARC Code TI: CODE Software Framework
National Aeronautics and Space Administration — CODE is a software framework for control and observation in distributed environments. The basic functionality of the framework allows a user to observe a...
ARC Code TI: ROC Curve Code Augmentation
National Aeronautics and Space Administration — ROC (Receiver Operating Characteristic) curve Code Augmentation was written by Rodney Martin and John Stutz at NASA Ames Research Center and is a modification of...
XIONG ChengYi; TIAN JinWen; LIU Jian
2008-01-01
This paper introduces a novel high-performance algorithm and VLSI architectures for bit-plane coding (BPC) in word-level sequential and parallel modes. The proposed BPC algorithm adopts coding-pass prediction together with parallel and pipeline techniques to reduce the number of memory accesses and to increase the system's capacity for concurrent processing, so that all the coefficient bits of a code block can be coded in a single scan. A new parallel bit-plane architecture (PA) is proposed to achieve word-level sequential coding. Moreover, an efficient high-speed architecture (HA) is presented to achieve multi-word parallel coding. Compared to the state of the art, the proposed PA reduces hardware cost more efficiently while its throughput remains one coefficient coded per clock cycle. The proposed HA codes 4 coefficients belonging to a stripe column in one intra-clock cycle, so that coding an N×N code block completes in approximately N²/4 intra-clock cycles. Theoretical analysis and experimental results demonstrate that the proposed designs achieve a high throughput rate with a good speedup-to-cost ratio, making them good alternatives for low-power applications.
Spike Code Flow in Cultured Neuronal Networks.
Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime; Kamimura, Takuya; Yagi, Yasushi; Mizuno-Matsumoto, Yuko; Chen, Yen-Wei
2016-01-01
We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted short codes from the spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies over the electrode array to observe the flow of the codes "1101" and "1011," typical pseudorandom sequences that we often encountered in the literature and in our experiments. They seemed to flow from one electrode to a neighboring one while maintaining their shape to some extent. To quantify the flow, we calculated the "maximum cross-correlations" among neighboring electrodes to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, if the spike trains were shuffled in interval order or across electrodes, the correlations became significantly small. The analysis thus suggests that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network. PMID:27217825
Generic programming for deterministic neutron transport codes
This paper discusses the implementation of neutron transport codes via generic programming techniques. Two different Boltzmann equation approximations have been implemented, namely the Sn and SPn methods. This implementation experiment shows that generic programming allows us to improve the maintainability and readability of source code with no performance penalty compared to classical approaches. In the present implementation, matrices and vectors as well as linear algebra algorithms are treated separately from the rest of the source code and gathered in a tool library called 'Generic Linear Algebra Solver System' (GLASS). Such a code architecture, based on a linear algebra library, allows us to separate the three different scientific fields involved in transport code design: numerical analysis, reactor physics and computer science. Our library handles matrices with optional storage policies and thus applies both to the Sn code, where the matrix elements are computed on the fly, and to the SPn code, where stored matrices are used. Thus, using GLASS allows us to share a large fraction of source code between the Sn and SPn implementations. Moreover, the GLASS high level of abstraction allows the writing of numerical algorithms in a form very close to their textbook descriptions. Hence the GLASS algorithms collection, disconnected from computer science considerations (e.g. storage policy), is very easy to read, to maintain and to extend. (authors)