WorldWideScience

Sample records for block truncation algorithm

  1. Adaptive bit plane quadtree-based block truncation coding for image compression

    Science.gov (United States)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower decoded-image quality, especially for images with rich texture. To solve this problem, this paper proposes a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is used to optimize the BTC, depending on the MSE loss when the block is encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
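
    The AMBTC building block referenced above is easy to sketch. The following is a minimal illustration of plain AMBTC on a single block (the paper's quadtree segmentation and adaptive bit plane selection are not reproduced; names are illustrative):

```python
import numpy as np

def ambtc_encode(block):
    """Absolute moment BTC: keep a bit plane plus two reconstruction means."""
    mean = block.mean()
    bitplane = block >= mean            # 1 bit per pixel
    high = block[bitplane].mean() if bitplane.any() else mean
    low = block[~bitplane].mean() if (~bitplane).any() else mean
    return bitplane, low, high

def ambtc_decode(bitplane, low, high):
    # Pixels at or above the block mean reconstruct to the high mean,
    # the rest to the low mean; the block mean is preserved exactly.
    return np.where(bitplane, high, low)
```

    Each 4x4 block thus costs only the 16-bit plane plus two means, which is what makes BTC attractive for fast, low-complexity compression.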

  2. Error bounds for augmented truncations of discrete-time block-monotone Markov chains under subgeometric drift conditions

    OpenAIRE

    Masuyama, Hiroyuki

    2015-01-01

    This paper studies the last-column-block-augmented northwest-corner truncation (LC-block-augmented truncation, for short) of discrete-time block-monotone Markov chains under subgeometric drift conditions. The main result of this paper is to present an upper bound for the total variation distance between the stationary probability vectors of a block-monotone Markov chain and its LC-block-augmented truncation. The main result is extended to Markov chains that themselves may not be block monoton...

  3. Phase retrieval via incremental truncated amplitude flow algorithm

    Science.gov (United States)

    Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao

    2017-10-01

    This paper considers the phase retrieval problem of recovering an unknown signal from given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF), which combines the ITWF and TAF algorithms, is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF, and improves the gradient stage by applying the incremental method of ITWF to the loop stage of TAF. Moreover, the original sampling vectors and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verify the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it obtains a higher success rate and faster convergence than other algorithms. In particular, for noiseless random Gaussian signals, ITAF can recover any real-valued signal accurately from a number of magnitude measurements about 2.5 times the signal length, which is close to the theoretical limit (about 2 times the signal length), and it usually converges to the optimal solution within 20 iterations, far fewer than state-of-the-art algorithms.

  4. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation

    International Nuclear Information System (INIS)

    Chen, Ming; Yu, Hengyong

    2015-01-01

    Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. With the proposed method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.

  5. A Line Search Multilevel Truncated Newton Algorithm for Computing the Optical Flow

    Directory of Open Access Journals (Sweden)

    Lluís Garrido

    2015-06-01

    We describe the implementation details and give experimental results of three optimization algorithms for dense optical flow computation. In particular, using a line search strategy, we evaluate the performance of the unilevel truncated Newton method (LSTN), a multiresolution truncated Newton method (MR/LSTN) and a full multigrid truncated Newton method (FMG/LSTN). We use three image sequences and four optical flow models for performance evaluation. The FMG/LSTN algorithm is shown to yield better optical flow estimates with less computational work than both the LSTN and MR/LSTN algorithms.
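
    As a rough illustration of the truncated Newton idea underlying all three methods (not the authors' multiresolution or multigrid implementations), the sketch below solves the Newton system only approximately with a few conjugate gradient iterations and then applies a backtracking line search; names, tolerances and iteration counts are illustrative:

```python
import numpy as np

def truncated_newton(f, grad, hess_vec, x0, cg_iters=10, tol=1e-8, max_iter=50):
    """Line-search truncated Newton: the Newton system H p = -g is solved
    only approximately by a capped number of conjugate-gradient iterations."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Truncated CG for H p = -g, starting from p = 0
        p = np.zeros_like(x); r = -g.copy(); d = r.copy()
        for _ in range(cg_iters):
            Hd = hess_vec(x, d)
            dHd = d @ Hd
            if dHd <= 0:               # negative curvature: stop early
                break
            alpha = (r @ r) / dHd
            p += alpha * d
            r_new = r - alpha * Hd
            if np.linalg.norm(r_new) < 1e-12:
                break
            beta = (r_new @ r_new) / (r @ r)
            d = r_new + beta * d
            r = r_new
        if not p.any():
            p = -g                     # fall back to steepest descent
        # Backtracking (Armijo) line search
        t = 1.0
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
            if t < 1e-12:
                break
        x = x + t * p
    return x
```

    The "truncation" is the cap on inner CG iterations: far from the solution a crude Newton direction suffices, which is what keeps the cost per outer iteration low.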

  6. Testing block subdivision algorithms on block designs

    Science.gov (United States)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types, which makes it difficult to compare the algorithms' strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding the algorithm best suited to subdividing each block type. The proposed hypothesis is that, given the different approaches the block subdivision algorithms take, different algorithms are likely better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability that it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites; it also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites, and produces more similar parcel shapes and patterns.

  7. Performance Comparison of Assorted Color Spaces for Multilevel Block Truncation Coding based Face Recognition

    OpenAIRE

    H.B. Kekre; Sudeep Thepade; Karan Dhamejani; Sanchit Khandelwal; Adnan Azmi

    2012-01-01

    The paper presents a performance analysis of Multilevel Block Truncation Coding based Face Recognition among widely used color spaces. In [1], Multilevel Block Truncation Coding was applied on the RGB color space up to four levels for face recognition. Better results were obtained when the proposed technique was implemented using Kekre’s LUV (K’LUV) color space [25]. This was the motivation to test the proposed technique using assorted color spaces. For experimental analysis, two face databas...

  8. Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions

    OpenAIRE

    Masuyama, Hiroyuki

    2014-01-01

    In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally,...

  9. Fitting Social Network Models Using Varying Truncation Stochastic Approximation MCMC Algorithm

    KAUST Repository

    Jin, Ick Hoon

    2013-10-01

    The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and to model degeneracy. The existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail for this problem in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimates, whereas SAMCMC can; for nondegenerate ERGMs, SAMCMC works as well as or better than MCMLE and stochastic approximation. The data and source codes used for this article are available online as supplementary materials. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.

  10. Exact fan-beam image reconstruction algorithm for truncated projection data acquired from an asymmetric half-size detector

    International Nuclear Information System (INIS)

    Leng Shuai; Zhuang Tingliang; Nett, Brian E; Chen Guanghong

    2005-01-01

    In this paper, we present a new algorithm designed for a specific data truncation problem in fan-beam CT. We consider a scanning configuration in which the fan-beam projection data are acquired from an asymmetrically positioned half-sized detector. Namely, the asymmetric detector covers only one half of the scanning field of view, so the acquired fan-beam projection data are truncated at every view angle. If an explicit data rebinning process is not invoked, this data acquisition configuration will wreak havoc on many known fan-beam image reconstruction schemes, including the standard filtered backprojection (FBP) algorithm and the super-short-scan FBP reconstruction algorithms. However, we demonstrate that a recently developed fan-beam image reconstruction algorithm, which reconstructs an image via filtering a backprojection image of differentiated projection data (FBPD), survives the above fan-beam data truncation problem. Namely, we may exactly reconstruct the whole image object using the truncated data acquired in a full-scan mode (2π angular range). We may also exactly reconstruct a small region of interest (ROI) using the truncated projection data acquired in a short-scan mode (less than 2π angular range). The most important characteristic of the proposed reconstruction scheme is that an explicit data rebinning process is not introduced. Numerical simulations were conducted to validate the new reconstruction algorithm.

  11. Minimum decoding trellis length and truncation depth of wrap-around Viterbi algorithm for TBCC in mobile WiMAX

    Directory of Open Access Journals (Sweden)

    Liu Yu-Sun

    2011-01-01

    The performance of the wrap-around Viterbi decoding algorithm with finite truncation depth and fixed decoding trellis length is investigated for tail-biting convolutional codes in the mobile WiMAX standard. Upper bounds on the error probabilities induced by finite truncation depth and by the uncertainty of the initial state are derived for the AWGN channel. The truncation depth and the decoding trellis length that yield negligible performance loss are obtained for all transmission rates over the Rayleigh channel using computer simulations. The results show that the circular decoding algorithm, with an appropriately chosen truncation depth and a decoding trellis just a fraction longer than the original received codeword, can achieve almost the same performance as optimal maximum likelihood decoding in mobile WiMAX. A rule of thumb for the values of the truncation depth and the trellis tail length is also proposed.

  12. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    Science.gov (United States)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization-based algorithms for large-scale linear discrete ill-posed problems with general-form regularization: min ||Lx|| subject to x ∈ {x : ||Ax - b|| = min}, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We show how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as those by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
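
    The TRSVD building block can be sketched in a few lines: compute a rank-(k + q) randomized SVD with a Gaussian test matrix and truncate it to rank k. This is only the basic range-finder RSVD (the standard Halko-Martinsson-Tropp scheme), not the full MTRSVD method with its LSQR inner solves:

```python
import numpy as np

def trsvd(A, k, q=10, rng=None):
    """Rank-k truncated randomized SVD: compute a rank-(k+q) RSVD with
    oversampling q, then truncate it to rank k."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + q))   # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal range approximation
    B = Q.T @ A                               # (k+q) x n projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]         # truncate to rank k
```

    For a matrix of exact rank k, the range finder captures the column space almost surely and the truncation is exact; for ill-posed problems the oversampling q controls how much of the decaying spectrum is retained before truncation.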

  13. Algorithmic impediments filtration using the α-truncated mean method in resolver-to-digital converter

    Directory of Open Access Journals (Sweden)

    Gordiyenko V. I.

    2009-02-01

    A test diagram of a microcontroller-based resolver-to-digital converter and algorithms for interference filtering within it are developed. Experimental verification of the α-truncated mean algorithm, intended for the suppression of impulse and noise interference, is conducted. The test results are given.
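
    The α-truncated (α-trimmed) mean itself is simple: sort the window, drop the extreme samples, and average the remainder. A toy sketch with an illustrative window and trim fraction:

```python
def alpha_trimmed_mean(samples, alpha=0.2):
    """α-truncated (trimmed) mean: sort the window, discard the α fraction
    of smallest and largest samples, and average the rest."""
    s = sorted(samples)
    k = int(alpha * len(s))          # samples trimmed from each tail
    kept = s[k:len(s) - k] if k else s
    return sum(kept) / len(kept)
```

    An impulse spike lands in a discarded tail and never reaches the output, while zero-mean noise is still averaged down; this is why the filter suits mixed impulse-plus-noise interference.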

  14. An efficient, block-by-block algorithm for inverting a block tridiagonal, nearly block Toeplitz matrix

    International Nuclear Information System (INIS)

    Reuter, Matthew G; Hill, Judith C

    2012-01-01

    We present an algorithm for computing any block of the inverse of a block tridiagonal, nearly block Toeplitz matrix (defined as a block tridiagonal matrix with a small number of deviations from the purely block Toeplitz structure). By exploiting both the block tridiagonal and the nearly block Toeplitz structures, this method scales independently of the total number of blocks in the matrix and linearly with the number of deviations. Numerical studies demonstrate this scaling and the advantages of our method over alternatives.
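
    For the diagonal blocks of such an inverse, the standard left/right recursions can be sketched as follows; note this generic version exploits only the block tridiagonal structure, not the nearly-Toeplitz structure that gives the paper its block-count-independent scaling (names are illustrative):

```python
import numpy as np

def diag_blocks_of_inverse(A_diag, A_sub, A_sup):
    """Diagonal blocks of the inverse of a block tridiagonal matrix via the
    standard left/right recursions.  A_diag[i] is block (i,i), A_sub[i] is
    block (i+1,i), and A_sup[i] is block (i,i+1)."""
    n = len(A_diag)
    b = A_diag[0].shape[0]
    sL = [np.zeros((b, b)) for _ in range(n)]   # left sums
    sR = [np.zeros((b, b)) for _ in range(n)]   # right sums
    for i in range(1, n):                        # sweep down
        sL[i] = A_sub[i-1] @ np.linalg.inv(A_diag[i-1] - sL[i-1]) @ A_sup[i-1]
    for i in range(n - 2, -1, -1):               # sweep up
        sR[i] = A_sup[i] @ np.linalg.inv(A_diag[i+1] - sR[i+1]) @ A_sub[i]
    # G_ii = (A_ii - sL_i - sR_i)^{-1}
    return [np.linalg.inv(A_diag[i] - sL[i] - sR[i]) for i in range(n)]
```

    Each sweep touches every block once, so the cost is linear in the number of blocks; the paper's contribution is to shortcut these sweeps when most blocks repeat.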

  15. Zero-block mode decision algorithm for H.264/AVC.

    Science.gov (United States)

    Lee, Yu-Ming; Lin, Yinyi

    2009-03-01

    In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 × 4 DCT coefficients between the current macroblock and the co-located macroblock. The proposed algorithm achieves significant computational savings, but the savings are limited for high bit-rate coding. To improve computation efficiency, in this paper we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation, and which incorporates two decision methods adequate for the semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to intramode prediction in P frames. The enhanced zero-block decision algorithm reduces total encoding time by 27% on average compared to the original zero-block decision algorithm.

  16. Applications of Fast Truncated Multiplication in Cryptography

    Directory of Open Access Journals (Sweden)

    Laszlo Hars

    2006-12-01

    Truncated multiplications compute truncated products: contiguous subsequences of the digits of integer products. For an n-digit multiplication algorithm of time complexity O(n^α), with 1 < α ≤ 2, there is a truncated multiplication algorithm which is a constant factor faster when computing a short enough truncated product. Applying these fast truncated multiplications, several cryptographic long-integer arithmetic algorithms are improved, including integer reciprocals, divisions, Barrett and Montgomery multiplications, and 2n-digit modular multiplication on hardware for n-digit half products. For example, Montgomery multiplication is performed in 2.6 Karatsuba multiplication time.
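
    To make "truncated product" concrete, here is a toy low truncated multiplication that computes only the n least-significant digits of a product, skipping the digit pairs that cannot affect them. It is the quadratic grade-school version, not the fast O(n^α) variant the paper analyzes; digit order and base are illustrative:

```python
def truncated_mul_low(a_digits, b_digits, n, base=10):
    """Low truncated product: only the n least-significant digits of a*b.
    Digit i of the result needs only pairs (j, k) with j + k <= i, so
    roughly half of the n*n digit products are skipped."""
    out = [0] * n
    for j, aj in enumerate(a_digits[:n]):
        if aj:
            for k, bk in enumerate(b_digits[:n - j]):
                out[j + k] += aj * bk
    carry = 0
    for i in range(n):               # carry propagation; the final carry
        carry, out[i] = divmod(out[i] + carry, base)   # is truncated away
    return out                       # least-significant digit first
```

    The result equals a*b mod base^n, which is exactly the quantity needed in, e.g., the Montgomery reduction step.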

  17. Truncation correction for oblique filtering lines

    International Nuclear Information System (INIS)

    Hoppe, Stefan; Hornegger, Joachim; Lauritsch, Guenter; Dennerlein, Frank; Noo, Frederic

    2008-01-01

    State-of-the-art filtered backprojection (FBP) algorithms often define the filtering operation to be performed along oblique filtering lines in the detector. A limited scan field of view leads to the truncation of those filtering lines, which causes artifacts in the final reconstructed volume. In contrast to the case where filtering is performed solely along the detector rows, no methods are available for the case of oblique filtering lines. In this work, the authors present two novel truncation correction methods which effectively handle data truncation in this case. Method 1 (basic approach) handles data truncation in two successive preprocessing steps by applying a hybrid data extrapolation method, which is a combination of a water cylinder extrapolation and a Gaussian extrapolation. It is independent of any specific reconstruction algorithm. Method 2 (kink approach) uses similar concepts for data extrapolation as the basic approach but needs to be integrated into the reconstruction algorithm. Experiments are presented from simulated data of the FORBILD head phantom, acquired along a partial-circle-plus-arc trajectory. The theoretically exact M-line algorithm is used for reconstruction. Although the discussion is focused on theoretically exact algorithms, the proposed truncation correction methods can be applied to any FBP algorithm that exposes oblique filtering lines.

  18. Experimental scheme and restoration algorithm of block compression sensing

    Science.gov (United States)

    Zhang, Linxia; Zhou, Qun; Ke, Jun

    2018-01-01

    Compressed sensing (CS) exploits the sparseness of a target to obtain its image from much less data than required by the Nyquist sampling theorem. In this paper, we study the hardware implementation of a block compressed sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, orthogonal matching pursuit (OMP) and total variation minimization (TV), are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
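
    A minimal OMP sketch for the reconstruction step, with an illustrative Gaussian sensing matrix standing in for the paper's block measurement hardware:

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        # most correlated column with the current residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```

    In block CS the same routine is simply run per block with a smaller sensing matrix, which is what makes small block sizes cheap to reconstruct.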

  19. Cross-layer designed adaptive modulation algorithm with packet combining and truncated ARQ over MIMO Nakagami fading channels

    KAUST Repository

    Aniba, Ghassane

    2011-04-01

    This paper presents an optimal adaptive modulation (AM) algorithm designed using a cross-layer approach which combines the truncated automatic repeat request (ARQ) protocol with packet combining. Transmissions are performed over multiple-input multiple-output (MIMO) Nakagami fading channels, and retransmitted packets are not necessarily modulated using the same modulation format as in the initial transmission. Compared to the traditional approach, cross-layer design based on coupling across the physical and link layers has proven to yield better performance in wireless communications. However, the performance of such designs has not been analyzed and evaluated when the ARQ protocol is used in conjunction with packet combining. Indeed, previous works addressed the link layer performance of AM with truncated ARQ but without packet combining. In addition, previously proposed AM algorithms are not optimal and can provide poor performance when packet combining is implemented. Herein, we first show that the packet loss rate (PLR) resulting from the combining of packets modulated with different constellations can be well approximated by an exponential function. This model is then used in the design of an optimal AM algorithm for systems employing packet combining, truncated ARQ and MIMO antenna configurations, considering transmission over Nakagami fading channels. Numerical results are provided for operation with and without packet combining, and show the enhanced performance and efficiency of the proposed algorithm in comparison with existing ones. © 2011 IEEE.

  20. Block Least Mean Squares Algorithm over Distributed Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    T. Panigrahi

    2012-01-01

    In a distributed parameter estimation problem, during each sampling instant a typical sensor node communicates its estimate either by the diffusion algorithm or by the incremental algorithm. Both of these conventional distributed algorithms involve significant communication overhead and consequently defeat the basic purpose of wireless sensor networks. In the present paper, we therefore propose two new distributed algorithms, namely block diffusion least mean square (BDLMS) and block incremental least mean square (BILMS), obtained by extending the concept of block adaptive filtering to the distributed adaptation scenario. The performance analysis of the proposed BDLMS and BILMS algorithms shows performance similar to that of the conventional diffusion LMS and incremental LMS algorithms, respectively. The convergence analyses of the proposed algorithms obtained from the simulation study are also found to be in agreement with the theoretical analysis. The remarkable and interesting aspect of the proposed block-based algorithms is that their communication overheads per node and latencies are lower than those of the conventional algorithms by a factor as high as the block size used in the algorithms.
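
    The block adaptive filtering idea the paper extends can be sketched in a plain system-identification setting: gradients are accumulated over a whole block and the weights are updated once per block, cutting the number of updates (and, in the distributed setting, estimate exchanges) by the block size. Names, step size, and block size below are illustrative; this is not the distributed BDLMS/BILMS protocol itself:

```python
import numpy as np

def block_lms(x, d, num_taps, block_size, mu):
    """Block LMS: the filter weights are updated once per block using the
    error gradient accumulated over all samples in the block."""
    w = np.zeros(num_taps)
    x_pad = np.concatenate([np.zeros(num_taps - 1), x])
    for start in range(0, len(x) - block_size + 1, block_size):
        grad = np.zeros(num_taps)
        for i in range(start, start + block_size):
            u = x_pad[i:i + num_taps][::-1]      # current tap-input vector
            e = d[i] - w @ u                     # a priori error
            grad += e * u
        w += (mu / block_size) * grad            # one update per block
    return w
```

    With a block size of B, a node would share its estimate once per B samples instead of every sample, which is the communication saving the abstract highlights.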

  1. A study of block algorithms for fermion matrix inversion

    International Nuclear Information System (INIS)

    Henty, D.

    1990-01-01

    We compare the convergence properties of Lanczos and Conjugate Gradient algorithms applied to the calculation of columns of the inverse fermion matrix for Kogut-Susskind and Wilson fermions in lattice QCD. When several columns of the inverse are required simultaneously, a block version of the Lanczos algorithm is most efficient at small mass, being over 5 times faster than the single-vector algorithms. The block algorithm is also less susceptible to critical slowing down. (orig.)

  2. The Combination of RSA and Block Cipher Algorithms To Maintain Message Authentication

    Science.gov (United States)

    Yanti Tarigan, Sepri; Sartika Ginting, Dewi; Lumban Gaol, Melva; Lorensi Sitompul, Kristin

    2017-12-01

    The RSA algorithm is a public-key algorithm based on prime numbers and is still in use today. Its strength lies in the exponentiation process and in the difficulty of factoring the modulus into its two prime factors, which remains hard to do. The RSA scheme itself adopts a block cipher scheme: prior to encryption, the plaintext is divided into several blocks of the same length, where the plaintext and ciphertext are integers between 1 and n, n is typically 1024 bits, and the block length itself is smaller than or equal to log2(n) + 1. With the combination of the RSA algorithm and a block cipher, it is expected that the authentication of the plaintext is secure. The message is first encrypted with the RSA algorithm and then encrypted again using the block cipher; conversely, the ciphertext is first decrypted with the block cipher and then decrypted again with the RSA algorithm. This paper suggests a combination of the RSA algorithm and a block cipher to secure data.
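
    A toy sketch of the blockwise RSA step (textbook 61/53 example primes, single-byte blocks; the additional block cipher layer described in the paper is omitted, and nothing here is secure):

```python
def rsa_keys():
    """Toy RSA key pair with small primes (illustration only, not secure)."""
    p, q = 61, 53
    n = p * q                      # modulus, 3233
    phi = (p - 1) * (q - 1)        # 3120
    e = 17                         # public exponent, coprime with phi
    d = pow(e, -1, phi)            # private exponent (modular inverse)
    return (e, n), (d, n)

def encrypt_blocks(message, key):
    e, n = key
    # block cipher style: split the message into single-byte blocks (< n)
    return [pow(b, e, n) for b in message.encode()]

def decrypt_blocks(blocks, key):
    d, n = key
    return bytes(pow(c, d, n) for c in blocks).decode()
```

    Real systems use much larger moduli and padding; the sketch only shows the split-into-blocks-then-exponentiate structure the abstract describes.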

  3. FBCOT: a fast block coding option for JPEG 2000

    Science.gov (United States)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

    Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, termed FBCOT (Fast Block Coding with Optimized Truncation), achieving much higher encoding and decoding throughputs with only modest loss in coding efficiency.

  4. Truncated Groebner fans and lattice ideals

    OpenAIRE

    Lauritzen, Niels

    2005-01-01

    We outline a generalization of the Groebner fan of a homogeneous ideal with maximal cells parametrizing truncated Groebner bases. This "truncated" Groebner fan is usually much smaller than the full Groebner fan and offers the natural framework for conversion between truncated Groebner bases. The generic Groebner walk generalizes naturally to this setting by using the Buchberger algorithm with truncation on facets. We specialize to the setting of lattice ideals. Here facets along the generic w...

  5. Cross-layer designed adaptive modulation algorithm with packet combining and truncated ARQ over MIMO Nakagami fading channels

    KAUST Repository

    Aniba, Ghassane; Aissa, Sonia

    2011-01-01

    works addressed the link layer performance of AM with truncated ARQ but without packet combining. In addition, previously proposed AM algorithms are not optimal and can provide poor performance when packet combining is implemented. Herein, we first show

  6. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    Science.gov (United States)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and

  7. New results to BDD truncation method for efficient top event probability calculation

    International Nuclear Information System (INIS)

    Mo, Yuchang; Zhong, Farong; Zhao, Xiangfu; Yang, Quansheng; Cui, Gang

    2012-01-01

    A Binary Decision Diagram (BDD) is a graph-based data structure that enables exact top event probability (TEP) calculation. It has been a very difficult task to develop an efficient BDD algorithm that can solve a large problem, since BDD memory consumption is very high. Recently, in order to solve large reliability problems within limited computational resources, Jung presented an efficient method to maintain a small BDD size through truncation during the BDD calculation. In this paper, it is first identified that Jung's BDD truncation algorithm can be improved for more practical use. Then, a more efficient truncation algorithm is proposed, which generates truncated BDDs with smaller size and approximate TEPs with smaller truncation error. Empirical results show that this new algorithm uses slightly less running time and slightly more storage than Jung's algorithm. It was also found that designing a truncation algorithm with ideal features for every possible fault tree is very difficult, if not impossible. The ideal features in question would be that, as the truncation limits decrease, the size of the truncated BDD converges to the size of the exact BDD but never exceeds it.

  8. Ship Block Transportation Scheduling Problem Based on Greedy Algorithm

    Directory of Open Access Journals (Sweden)

    Chong Wang

    2016-05-01

    Full Text Available Ship block transportation problems are crucial issues to address in reducing construction cost and improving the productivity of shipyards. Shipyards aim to maximize the workload balance of transporters under the time constraint that all blocks must be transported within the planning horizon. This process leads to three types of penalty time: empty transporter travel time, delay time, and tardy time. This study aims to minimize the sum of the penalty time. First, the ship block transportation problem is formulated with a generalized block transportation restriction on multi-type transporters. Second, the problem is transformed into the classical traveling salesman problem and assignment problem through a reasonable model simplification and by adding a virtual node to the proposed directed graph. Then, a greedy heuristic algorithm is proposed to assign blocks to available transporters and to sequence the blocks for each transporter simultaneously. Finally, numerical experiments validate the model; the results show that the proposed algorithm is effective in improving the utilization of transporters and in reducing the cost of ship block logistics for shipyards.
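
A greedy assignment step in the spirit described above (not the authors' exact algorithm) might look like this, with 1-D positions standing in for shipyard coordinates and empty-travel distance as the cost:

```python
# Greedy block-to-transporter assignment: each block goes to the transporter
# whose current position minimizes the empty (unloaded) travel to the pickup.

def greedy_assign(blocks, transporters):
    """blocks: list of (pickup, dropoff) coordinates (1-D for simplicity).
    transporters: list of starting coordinates.
    Returns (assignment list, total empty travel)."""
    pos = list(transporters)
    assignment = []
    empty = 0.0
    for pick, drop in blocks:
        t = min(range(len(pos)), key=lambda i: abs(pos[i] - pick))
        empty += abs(pos[t] - pick)   # unloaded travel to the pickup point
        pos[t] = drop                 # transporter ends up at the dropoff
        assignment.append(t)
    return assignment, empty

blocks = [(0, 5), (6, 2), (1, 9)]
assignment, empty = greedy_assign(blocks, [0, 10])
# transporter 0 handles all three blocks; total empty travel = 2.0
```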

  9. Prevalence of E/A wave fusion and A wave truncation in DDD pacemaker patients with complete AV block under nominal AV intervals.

    Directory of Open Access Journals (Sweden)

    Wolfram C Poller

    Full Text Available Optimization of the AV interval (AVI) in DDD pacemakers improves cardiac hemodynamics and reduces pacemaker syndromes. Manual optimization is typically not performed in clinical routine. In the present study we analyze the prevalence of E/A wave fusion and A wave truncation under resting conditions in 160 patients with complete AV block (AVB) under the pre-programmed AVI, and manually optimize sub-optimal AVIs. We analyzed 160 pacemaker patients with complete AVB, both in sinus rhythm (AV-sense; n = 129) and under atrial pacing (AV-pace; n = 31). Using Doppler analyses of the transmitral inflow, we classified the nominal AVI as (a) normal, (b) too long (E/A wave fusion), or (c) too short (A wave truncation). In patients with a sub-optimal AVI, we performed manual optimization according to the recommendations of the American Society of Echocardiography. All AVB patients with atrial pacing exhibited a normal transmitral inflow under the nominal AV-pace intervals (100%). In contrast, 25 AVB patients in sinus rhythm showed E/A wave fusion under the pre-programmed AV-sense intervals (19.4%; 95% confidence interval (CI): 12.6-26.2%). A wave truncation was not observed in any patient. All patients with complete E/A wave fusion achieved a normal transmitral inflow after AV-sense interval reduction (mean optimized AVI: 79.4 ± 13.6 ms). Given the rate of 19.4% (CI: 12.6-26.2%) of patients with a too long nominal AV-sense interval, automatic algorithms may prove useful in improving cardiac hemodynamics, especially in the subgroup of atrially triggered pacemaker patients with AV node disease.

  10. MULTISTAGE BITRATE REDUCTION IN ABSOLUTE MOMENT BLOCK TRUNCATION CODING FOR IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    S. Vimala

    2012-05-01

    Full Text Available Absolute Moment Block Truncation Coding (AMBTC) is one of the lossy image compression techniques. Its computational complexity is low and the quality of the reconstructed images is appreciable. The standard AMBTC method requires 2 bits per pixel (bpp). In this paper, two novel ideas are incorporated into the AMBTC method to improve coding efficiency. Generally, quality degrades as the bit rate is reduced, but in the proposed method the quality of the reconstructed image increases as the bit rate decreases. The proposed method has been tested with standard images such as Lena, Barbara, Bridge, Boats and Cameraman. The results obtained are better than those of the existing AMBTC method in terms of bit rate and the quality of the reconstructed images.
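
The baseline 2 bpp AMBTC encoder (two reconstruction levels plus a one-bit-per-pixel bitmap) can be sketched as follows for a single 4x4 block; the pixel values are illustrative:

```python
# AMBTC for one 4x4 block: pixels at or above the block mean are replaced by
# the mean of the "high" group, the rest by the mean of the "low" group.

def ambtc_encode(block):
    """block: flat list of 16 pixel values -> (low level, high level, bitmap)."""
    mean = sum(block) / len(block)
    hi_px = [v for v in block if v >= mean]
    lo_px = [v for v in block if v < mean]
    high = round(sum(hi_px) / len(hi_px))
    low = round(sum(lo_px) / len(lo_px)) if lo_px else high  # flat block
    bitmap = [1 if v >= mean else 0 for v in block]
    return low, high, bitmap

def ambtc_decode(low, high, bitmap):
    return [high if b else low for b in bitmap]

block = [12, 14, 200, 202, 13, 15, 198, 205,
         11, 16, 201, 199, 12, 14, 203, 200]
low, high, bm = ambtc_encode(block)   # (low, high) == (13, 201)
rec = ambtc_decode(low, high, bm)
```

The 2 bpp figure follows from storing two level values per 16-pixel block plus the 16-bit bitmap.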

  11. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    Science.gov (United States)

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
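
The stated equivalence can be checked numerically for a two-component zero-truncated Poisson mixture, using the explicit re-weighting w_j proportional to q_j (1 - p_j(0)); the parameter values below are arbitrary:

```python
# Numerical check: truncated mixture of Poissons == re-weighted mixture of
# zero-truncated Poissons (arbitrary example parameters).
import math

def pois(x, lam):
    return math.exp(-lam) * lam ** x / math.factorial(x)

lams, q = [1.0, 3.0], [0.4, 0.6]    # component means and mixing weights

def truncated_mixture(x):
    """Mix first, then condition on x >= 1."""
    num = sum(qi * pois(x, li) for qi, li in zip(q, lams))
    return num / (1 - sum(qi * math.exp(-li) for qi, li in zip(q, lams)))

# The mapped mixing distribution: w_j proportional to q_j * (1 - e^{-lambda_j}).
w_raw = [qi * (1 - math.exp(-li)) for qi, li in zip(q, lams)]
w = [wi / sum(w_raw) for wi in w_raw]

def mixture_of_truncated(x):
    """Truncate each component first, then mix with the mapped weights."""
    return sum(wi * pois(x, li) / (1 - math.exp(-li)) for wi, li in zip(w, lams))

for x in range(1, 10):
    assert abs(truncated_mixture(x) - mixture_of_truncated(x)) < 1e-12
```

This is exactly the mapping the article refers to: solving the estimation problem in the (easier) mixture-of-truncated class and transforming back.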

  12. Quantum Image Steganography and Steganalysis Based On LSQu-Blocks Image Information Concealing Algorithm

    Science.gov (United States)

    A. AL-Salhi, Yahya E.; Lu, Songfeng

    2016-08-01

    Quantum steganography can solve some problems that are considered inefficient in image information concealing. Research on quantum image information concealing has been widely exploited in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR) and clustering of uniform image blocks, a least significant Qu-block (LSQB) information concealing algorithm for quantum image steganography is presented. Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Thirdly, the Con-Steg algorithm is used to conceal the clustered image blocks. Since information concealing in the Fourier domain of an image can achieve security of the image information, we further discuss a Fourier-domain LSQu-block information concealing algorithm for quantum images based on the quantum Fourier transform. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.
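
The classical LSB embedding that the LSQu-block scheme mirrors on NEQR-encoded images can be sketched as follows (a classical analogue for illustration, not the quantum algorithm itself):

```python
# Classical LSB steganography: hide one secret bit in the least significant
# bit of each cover pixel; pixel values change by at most 1.

def embed(cover, bits):
    return [(p & ~1) | b for p, b in zip(cover, bits)]

def extract(stego, n):
    return [p & 1 for p in stego[:n]]

cover = [52, 55, 61, 66, 70, 61, 64, 73]
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, secret)
assert extract(stego, 8) == secret
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))  # imperceptible change
```

In the quantum setting, the analogous operation is a unitary acting on the least significant qubit of the NEQR color register, conditioned on the position register.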

  13. Approximate truncation robust computed tomography—ATRACT

    International Nuclear Information System (INIS)

    Dennerlein, Frank; Maier, Andreas

    2013-01-01

    We present an approximate truncation robust algorithm to compute tomographic images (ATRACT). This algorithm targets the reconstruction of volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of x-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and that involves, as one component, a novel formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed, and reconstruction results from both simulated projections and first clinical data sets are presented. (paper)

  14. Balanced truncation for linear switched systems

    DEFF Research Database (Denmark)

    Petreczky, Mihaly; Wisniewski, Rafal; Leth, John-Josef

    2013-01-01

    In this paper, we present a theoretical analysis of the model reduction algorithm for linear switched systems from Shaker and Wisniewski (2011, 2009). This algorithm is reminiscent of the balanced truncation method for linear parameter varying systems (Wood et al., 1996) [3]. Specifically...

  15. Conditional truncated plurigaussian simulation; Simulacao plurigaussiana truncada com condicionamento

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Vitor Hugo

    1997-12-01

    The goal of this work was the development of an algorithm for conditional truncated plurigaussian stochastic simulation and its validation on a complex geologic model. The reservoir data come from the Aux Vases zone at Rural Hill Field in Illinois, USA; from the 2D geological interpretation described by WEIMER et al. (1982), three sets of samples with different grid densities were taken. These sets were used to condition the simulation and to refine the estimates of the non-stationary matrix of facies proportions used to truncate the Gaussian random functions (RFs). The truncated plurigaussian model is an extension of the truncated Gaussian model (TG). In this new model it is possible to use several facies with different spatial structures while retaining the simplicity of the TG. The geological interpretation used as a validation model was chosen because it shows a set of NW/SE elongated tidal channels cutting NE/SW shoreline deposits interleaved with impermeable facies. These characteristics of the spatial structures of sedimentary facies served to evaluate the simulation model. Two independent Gaussian RFs were used, with an 'erosive model' as the truncation strategy. Non-conditional simulations were also carried out using linearly combined Gaussian RFs with varying correlation coefficients. The influence of parameters such as the number of Gaussian RFs, the correlation coefficient and the truncation strategy on the simulation outcome was analyzed, together with the physical meaning of these parameters from a geological point of view. The theoretical model and the construction of an algorithm for simulation with the truncated plurigaussian model are presented step by step with an example. The conclusion of this work is that even with a plain conditional truncated plurigaussian algorithm and a complex geological model it is possible to obtain a useful product. (author)
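
The truncation idea itself is simple to sketch: two independent Gaussian values are mapped to facies by thresholds. The thresholds and the "erosive" rule below (facies 1 wherever the first Gaussian exceeds its threshold, regardless of the second) are hypothetical choices for illustration; a real implementation would use spatially correlated fields and locally varying proportions.

```python
# Truncation step of a (pluri)Gaussian simulation: two independent Gaussian
# values per cell are converted to one of three facies by fixed thresholds.
import random

random.seed(0)

def facies(z1, z2, t1=0.0, t2=0.5):
    # "Erosive" rule (assumed): facies 1 (e.g. channel) erodes everything
    # wherever z1 > t1; otherwise the second Gaussian splits facies 2 and 3.
    if z1 > t1:
        return 1
    return 2 if z2 > t2 else 3

# Uncorrelated stand-in for two Gaussian random fields over 1000 cells.
field = [facies(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(1000)]
# With t1 = 0, roughly half the cells come out as facies 1.
```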

  16. A New Block Processing Algorithm of LLL for Fast High-dimension Ambiguity Resolution

    Directory of Open Access Journals (Sweden)

    LIU Wanke

    2016-02-01

    Full Text Available Because the ambiguity vector under multi-frequency, multi-system GNSS observations has high dimension and precision, a major limit on the computational efficiency of ambiguity resolution is the long reduction time of the conventional LLL algorithm. To address this problem, a new block processing LLL algorithm is proposed, based on an analysis of the relationship between the reduction time and the dimension and precision of the ambiguity. The new algorithm shortens the reduction time, and thereby improves the computational efficiency of ambiguity resolution, by block processing the ambiguity variance-covariance matrix, which decreases the dimension of each single reduction matrix. The new algorithm is validated with two groups of measured data. The results show that its computational efficiency increased by 65.2% and 60.2%, respectively, compared with the LLL algorithm when a reasonable number of blocks is chosen.
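
For context, the reduction step being accelerated is lattice basis reduction. A textbook LLL implementation (not the proposed block-processing variant) can be written as follows, using exact rational arithmetic for clarity:

```python
# Textbook LLL lattice basis reduction with delta = 3/4, exact arithmetic.
# Gram-Schmidt is recomputed from scratch after every update: simple but slow,
# which is precisely the cost the block-processing variant targets.
from fractions import Fraction

def lll_reduce(basis, delta=Fraction(3, 4)):
    b = [list(map(Fraction, v)) for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            bi = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                bi = [x - mu[i][j] * y for x, y in zip(bi, bstar[j])]
            bstar.append(bi)
        return bstar, mu

    bstar, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):            # size reduction
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt()
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1                                # Lovasz condition holds
        else:
            b[k], b[k - 1] = b[k - 1], b[k]       # swap and step back
            bstar, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

reduced = lll_reduce([[4, 1], [7, 2]])   # -> [[-1, 0], [0, 1]]
```

Block processing in the paper's sense partitions the ambiguity variance-covariance matrix so that each reduction runs on a smaller matrix like this one.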

  17. Algorithmic detectability threshold of the stochastic block model

    Science.gov (United States)

    Kawamoto, Tatsuro

    2018-03-01

    The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.

  18. Heuristic algorithms for feature selection under Bayesian models with block-diagonal covariance structure.

    Science.gov (United States)

    Foroughi Pour, Ali; Dalton, Lori A

    2018-03-21

    Many bioinformatics studies aim to identify markers, or features, that can be used to discriminate between distinct groups. In problems where strong individual markers are not available, or where interactions between gene products are of primary interest, it may be necessary to consider combinations of features as a marker family. To this end, recent work proposes a hierarchical Bayesian framework for feature selection that places a prior on the set of features we wish to select and on the label-conditioned feature distribution. While an analytical posterior under Gaussian models with block covariance structures is available, the optimal feature selection algorithm for this model remains intractable since it requires evaluating the posterior over the space of all possible covariance block structures and feature-block assignments. To address this computational barrier, in prior work we proposed a simple suboptimal algorithm, 2MNC-Robust, with robust performance across the space of block structures. Here, we present three new heuristic feature selection algorithms. The proposed algorithms outperform 2MNC-Robust and many other popular feature selection algorithms on synthetic data. In addition, enrichment analysis on real breast cancer, colon cancer, and leukemia data indicates they also output many of the genes and pathways linked to the cancers under study. Bayesian feature selection is a promising framework for small-sample high-dimensional data, in particular biomarker discovery applications. When applied to cancer data, these algorithms output many genes already shown to be involved in cancer, as well as potentially new biomarkers. Furthermore, one of the proposed algorithms, SPM, outputs blocks of heavily correlated genes, which is particularly useful for studying gene interactions and gene networks.

  19. Modified BTC Algorithm for Audio Signal Coding

    Directory of Open Access Journals (Sweden)

    TOMIC, S.

    2016-11-01

    Full Text Available This paper describes a modification of a well-known image coding algorithm, Block Truncation Coding (BTC), and its application to audio signal coding. The BTC algorithm was originally designed for black-and-white image coding. Since black-and-white images and audio signals have different statistical characteristics, the application of this image coding algorithm to audio signals presents a novelty and a challenge. Several implementation modifications are described in this paper, while the original idea of the algorithm is preserved. The main modifications are performed in the area of signal quantization, by designing quantizers more adequate for audio signal processing. The result is a novel audio coding algorithm, whose performance is presented and analyzed in this research. The performance analysis indicates that this novel algorithm can be successfully applied to audio signal coding.
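
The classic two-level, moment-preserving BTC quantizer, applied here to a 1-D block of audio samples, can be sketched as follows (an illustration of the base algorithm, not the paper's modified quantizers):

```python
# Classic BTC on a 1-D block: a bitmap plus two output levels chosen so that
# the block's sample mean and variance are preserved exactly.
import math

def btc_encode(block):
    n = len(block)
    m = sum(block) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in block) / n)
    bitmap = [1 if x >= m else 0 for x in block]
    q = sum(bitmap)                      # number of "high" samples
    if q in (0, n):                      # flat block: one level suffices
        return m, m, bitmap
    a = m - sd * math.sqrt(q / (n - q))  # low output level
    b = m + sd * math.sqrt((n - q) / q)  # high output level
    return a, b, bitmap

def btc_decode(a, b, bitmap):
    return [b if bit else a for bit in bitmap]

block = [0.1, -0.4, 0.3, 0.2, -0.5, 0.6, -0.1, 0.0]
a, b, bm = btc_encode(block)
rec = btc_decode(a, b, bm)
# By construction, rec has the same mean and variance as block.
```

The moment preservation follows algebraically: q*b + (n-q)*a = n*m and q*(b-m)^2 + (n-q)*(a-m)^2 = n*sd^2.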

  20. A retrospective view on 'algorithms for radiative intensity calculations in moderately thick atmospheres using a truncation approximation' by Teruyuki Nakajima and Masayuki Tanaka (1988)

    International Nuclear Information System (INIS)

    Nakajima, Teruyuki

    2010-01-01

    I explain the motivation behind our paper 'Algorithms for radiative intensity calculations in moderately thick atmospheres using a truncation approximation' (JQSRT 1988;40:51-69) and discuss our results in a broader historical context.

  1. A Residual Approach for Balanced Truncation Model Reduction (BTMR) of Compartmental Systems

    Directory of Open Access Journals (Sweden)

    William La Cruz

    2014-05-01

    Full Text Available This paper presents a residual approach to the square-root balanced truncation algorithm for model order reduction of continuous, linear and time-invariant compartmental systems. Specifically, the new approach uses a residual method to approximate the controllability and observability gramians, whose computation is an essential and computationally expensive step of the square-root balanced truncation algorithm. Numerical experiments are included to highlight the efficacy of the proposed approach.
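
The square-root balanced truncation step that the residual approach accelerates can be sketched for a small stable system; the Kronecker-product Lyapunov solve and the example system below are illustrative only (a real implementation would use a dedicated Lyapunov solver, which is exactly the expensive step the paper approximates):

```python
# Square-root balanced truncation of a stable LTI system (A, B, C).
import numpy as np

def lyap(A, Q):
    """Solve A P + P A^T + Q = 0 by vectorization (fine for small n)."""
    n = A.shape[0]
    M = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
    return np.linalg.solve(M, -Q.reshape(-1)).reshape(n, n)

def balanced_truncation(A, B, C, r):
    P = lyap(A, B @ B.T)                 # controllability gramian
    Q = lyap(A.T, C.T @ C)               # observability gramian
    R = np.linalg.cholesky(P)            # P = R R^T  (square-root factors)
    L = np.linalg.cholesky(Q)            # Q = L L^T
    U, s, Vt = np.linalg.svd(L.T @ R)    # s = Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = R @ Vt[:r].T @ S                 # right projection
    W = L @ U[:, :r] @ S                 # left projection, W^T T = I_r
    return W.T @ A @ T, W.T @ B, C @ T, s

A = np.array([[-1.0, 0.1], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
Ar, Br, Cr, s = balanced_truncation(A, B, C, 1)
# The H-infinity error of the order-1 model is bounded by 2 * s[1].
```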

  2. Exact and approximate Fourier rebinning algorithms for the solution of the data truncation problem in 3-D PET.

    Science.gov (United States)

    Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis

    2007-07-01

    This paper presents an extended 3-D exact rebinning formula in the Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. To a first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data, such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of the full 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which only exploits the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence.

  3. A fast image encryption algorithm based on only blocks in cipher text

    Science.gov (United States)

    Wang, Xing-Yuan; Wang, Qian

    2014-03-01

    In this paper, a fast image encryption algorithm is proposed in which shuffling and diffusion are performed simultaneously. The cipher-text image is divided into blocks of k × k pixels each, while the pixels of the plain text are scanned one by one. Four logistic maps are used to generate the encryption keystream and the new position of each plain-image pixel in the cipher image, including the row and column of the block to which the pixel belongs and the position the pixel takes within the block. After each pixel is encrypted, the initial conditions of the logistic maps are changed according to the encrypted pixel's value; after each row of the plain image is encrypted, the initial conditions are also changed by the skew tent map. Finally, it is shown that this algorithm offers fast speed, a large key space, and good resistance to differential attacks, statistical analysis, and known-plaintext and chosen-plaintext attacks.
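
The basic ingredient of such chaotic ciphers, a logistic-map keystream XORed with the pixel stream, can be sketched as follows (diffusion only; the paper's full scheme also shuffles pixels into blocks and re-keys the maps from the cipher text):

```python
# Logistic-map keystream cipher: x_{n+1} = mu * x_n * (1 - x_n) in the chaotic
# regime (mu close to 4) drives a byte stream that is XORed with the pixels.

def logistic_stream(x0, n, mu=3.99):
    x, out = x0, []
    for _ in range(n):
        x = mu * x * (1 - x)
        out.append(int(x * 256) & 0xFF)   # quantize the chaotic state to a byte
    return out

def xor_cipher(pixels, key=0.3456):
    """key is the logistic-map seed; XOR makes this its own inverse."""
    ks = logistic_stream(key, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]

plain = [10, 200, 33, 47, 90, 128, 255, 0]
cipher = xor_cipher(plain)
assert xor_cipher(cipher) == plain        # decryption = encryption
assert cipher != plain
```

Sensitivity to the seed is what gives the large key space; changing `key` in the last decimal place yields an entirely different keystream after a few iterations.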

  4. A fast image encryption algorithm based on only blocks in cipher text

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Wang Qian

    2014-01-01

    In this paper, a fast image encryption algorithm is proposed in which shuffling and diffusion are performed simultaneously. The cipher-text image is divided into blocks of k × k pixels each, while the pixels of the plain text are scanned one by one. Four logistic maps are used to generate the encryption keystream and the new position of each plain-image pixel in the cipher image, including the row and column of the block to which the pixel belongs and the position the pixel takes within the block. After each pixel is encrypted, the initial conditions of the logistic maps are changed according to the encrypted pixel's value; after each row of the plain image is encrypted, the initial conditions are also changed by the skew tent map. Finally, it is shown that this algorithm offers fast speed, a large key space, and good resistance to differential attacks, statistical analysis, and known-plaintext and chosen-plaintext attacks.

  5. An efficient algorithm for removal of inactive blocks in reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Abou-Kassem, J.H.; Ertekin, T. (Pennsylvania State Univ., PA (United States))

    1992-02-01

    In the efficient simulation of reservoirs having irregular boundaries one is confronted with two problems: the removal of inactive blocks at the matrix level and the development and application of a variable band-width solver. A simple algorithm is presented that provides effective solutions to these two problems. The algorithm is demonstrated for both the natural ordering and D4 ordering schemes. It can be easily incorporated in existing simulators and results in significant savings in CPU and matrix storage requirements. The removal of the inactive blocks at the matrix level plays a major role in effecting these savings whereas the application of a variable band-width solver plays an enhancing role only. The value of this algorithm lies in the fact that it takes advantage of irregular reservoir boundaries that are invariably encountered in almost all practical applications of reservoir simulation. 11 refs., 3 figs., 3 tabs.
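
The bookkeeping at the heart of inactive-block removal, renumbering only the active blocks so that the matrix is assembled at "active" size, can be sketched as follows (natural ordering, hypothetical grid; the D4 ordering and the variable band-width solver are not shown):

```python
# Map active grid blocks to compact equation indices so that inactive blocks
# never enter the coefficient matrix.

def active_numbering(active):
    """active: 2-D grid of booleans -> dict mapping (i, j) -> equation index."""
    order = {}
    k = 0
    for i, row in enumerate(active):
        for j, is_active in enumerate(row):
            if is_active:
                order[(i, j)] = k
                k += 1
    return order

# Hypothetical reservoir with an irregular boundary (False = outside reservoir).
grid = [
    [False, True,  True ],
    [True,  True,  True ],
    [False, False, True ],
]
num = active_numbering(grid)
assert len(num) == 6          # only 6 of the 9 blocks generate equations
assert num[(0, 1)] == 0 and num[(2, 2)] == 5
```

The matrix and storage savings quoted in the paper come from assembling and solving at this reduced size instead of the full grid size.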

  6. A compression algorithm for medical images and a display with the decoding function

    International Nuclear Information System (INIS)

    Gotoh, Toshiyuki; Nakagawa, Yukihiro; Shiohara, Morito; Yoshida, Masumi

    1990-01-01

    This paper describes an efficient image compression method for medical images and a high-speed display with a decoding function. In our method, an input image is divided into blocks, and either discrete cosine transform coding (DCT) or block truncation coding (BTC) is adaptively applied to each block to improve image quality. The display we developed receives the compressed data from the host computer and reconstructs images of good quality at high speed, using four decoding microprocessors on which our algorithm is implemented in pipeline fashion. Experiments verified that our method and display are effective. (author)

  7. Projective block Lanczos algorithm for dense, Hermitian eigensystems

    International Nuclear Information System (INIS)

    Webster, F.; Lo, G.C.

    1996-01-01

    Projection operators are used to effect "deflation by restriction", and it is argued that this is an optimal Lanczos algorithm for memory minimization. Algorithmic optimization is constrained to dense, Hermitian eigensystems where a significant number of the extreme eigenvectors must be obtained reliably and completely. The defining constraints are operator algebra without a matrix representation and semi-orthogonalization without storage of Krylov vectors. Other semi-orthogonalization strategies for Lanczos algorithms and conjugate gradient techniques are evaluated within these constraints. Large scale, sparse, complex numerical experiments are performed on clusters of magnetic dipoles, a quantum many-body system that is not block-diagonalizable. Plane-wave, density functional theory of beryllium clusters provides examples of dense complex eigensystems. Use of preconditioners and spectral transformations is evaluated in a preprocessor prior to a high accuracy self-consistent field calculation. 25 refs., 3 figs., 5 tabs
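
The underlying Lanczos recurrence (shown here with full reorthogonalization for clarity; the paper's projector-based deflation and semi-orthogonalization strategies are not reproduced) can be sketched as:

```python
# Basic Lanczos tridiagonalization of a symmetric matrix: the eigenvalues of
# the small tridiagonal T approximate the extreme eigenvalues of A.
import numpy as np

def lanczos_ritz(A, m, rng=np.random.default_rng(0)):
    n = A.shape[0]
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    for j in range(m):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        w -= alpha[j] * q
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        # Full reorthogonalization against all stored Krylov vectors; the
        # memory-saving schemes in the paper avoid exactly this storage.
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)     # Ritz values, ascending

A = np.diag([1.0, 2.0, 3.0, 10.0, 50.0])   # toy symmetric operator
ritz = lanczos_ritz(A, 5)
# Extreme eigenvalues (1 and 50) are recovered.
```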

  8. Efficient Dual Domain Decoding of Linear Block Codes Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Ahmed Azouaoui

    2012-01-01

    Full Text Available A computationally efficient algorithm for decoding block codes is developed using a genetic algorithm (GA). The proposed algorithm uses the dual code, in contrast to the existing genetic decoders in the literature, which use the code itself. Hence, this new approach reduces the complexity of decoding high-rate codes. We simulated our algorithm over various transmission channels. The performance of this algorithm is investigated and compared with competing decoding algorithms, including those of Maini and Shakeel. The results show that the proposed algorithm gives large gains over the Chase-2 decoding algorithm and reaches the performance of OSD-3 for some quadratic residue (QR) codes. Further, we define a new crossover operator that exploits domain-specific information and compare it with uniform and two-point crossover. The complexity of this algorithm is also discussed and compared with that of other algorithms.

  9. Video error concealment using block matching and frequency selective extrapolation algorithms

    Science.gov (United States)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error Concealment (EC) is a technique at the decoder side to hide transmission errors. It works by analyzing spatial or temporal information from available video frames. Recovering distorted video is important because video is used in applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but they add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both methods are evaluated on video frames with manually introduced errors. The parameters used for objective quality measurement were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index). The original video frames, along with the erroneous frames, were processed by both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM, than the Block Matching algorithm.

  10. An efficient algorithm for sorting by block-interchanges and its application to the evolution of vibrio species.

    Science.gov (United States)

    Lin, Ying Chih; Lu, Chin Lung; Chang, Hwan-You; Tang, Chuan Yi

    2005-01-01

    In the study of genome rearrangement, block-interchanges have been proposed recently as a new kind of global rearrangement event affecting a genome by swapping two nonintersecting segments of any length. The so-called block-interchange distance problem, which is equivalent to the sorting-by-block-interchange problem, is to find a minimum series of block-interchanges for transforming one chromosome into another. In this paper, we study this problem by considering circular chromosomes and propose an O(δn)-time algorithm for solving it by making use of permutation groups in algebra, where n is the length of the circular chromosome and δ is the minimum number of block-interchanges required for the transformation, which can be calculated in O(n) time in advance. Moreover, we obtain analogous results by extending our algorithm to linear chromosomes. Finally, we have implemented our algorithm and applied it to the circular genomic sequences of three human vibrio pathogens to predict their evolutionary relationships. Our experimental results coincide with previous ones obtained by others using a different comparative genomics approach, which implies that block-interchange events seem to play a significant role in the evolution of vibrio species.

  11. An O([Formula: see text]) algorithm for sorting signed genomes by reversals, transpositions, transreversals and block-interchanges.

    Science.gov (United States)

    Yu, Shuzhi; Hao, Fanchang; Leong, Hon Wai

    2016-02-01

    We consider the problem of sorting signed permutations by reversals, transpositions, transreversals, and block-interchanges. The problem arises in the study of species evolution via large-scale genome rearrangement operations. Recently, Hao et al. gave a 2-approximation scheme called genome sorting by bridges (GSB) for solving this problem. Their result extended and unified the results of (i) He and Chen - a 2-approximation algorithm allowing reversals, transpositions, and block-interchanges (by also allowing transreversals) and (ii) Hartman and Sharan - a 1.5-approximation algorithm allowing reversals, transpositions, and transreversals (by also allowing block-interchanges). The GSB result is based on the introduction of three bridge structures in the breakpoint graph: the L-bridge, T-bridge, and X-bridge, which model a good reversal, a transposition/transreversal, and a block-interchange, respectively. However, the paper by Hao et al. focused on proving the 2-approximation GSB scheme and only mentions a straightforward [Formula: see text] algorithm. In this paper, we give an [Formula: see text] algorithm for implementing the GSB scheme. The key idea behind our faster GSB algorithm is to represent cycles in the breakpoint graph by their canonical sequences, which greatly simplifies the search for these bridge structures. We also give some comparison results (running time and computed distances) against the original GSB implementation.

  12. Timing of the Cenozoic basins of Southern Mexico and its relationship with the Pacific truncation process: Subduction erosion or detachment of the Chortís block

    Science.gov (United States)

    Silva-Romo, Gilberto; Mendoza-Rosales, Claudia Cristina; Campos-Madrigal, Emiliano; Hernández-Marmolejo, Yoalli Bianii; de la Rosa-Mora, Orestes Antonio; de la Torre-González, Alam Israel; Bonifacio-Serralde, Carlos; López-García, Nallely; Nápoles-Valenzuela, Juan Ivan

    2018-04-01

    In the central sector of the Sierra Madre del Sur in Southern Mexico, between approximately 36 and 16 Ma ago, a diachronic process of formation of ∼north-south-trending fault-bounded basins occurred from west to east. No tectono-sedimentary event is recognized in the study region in the period between 25 and 20 Ma, a period during which subduction erosion has been proposed to have truncated the continental crust of southern Mexico. The chronology, geometry and style of formation of the Eocene-Miocene fault-bounded basins are more congruent with crustal truncation by detachment of the Chortís block, thus bringing into question the subduction-erosion truncation hypothesis for the Southern Mexico margin. Between Taxco and Tehuacán, using seven new laser ablation-inductively coupled plasma mass spectrometry (LA-ICP-MS) U-Pb ages on magmatic zircons, we refine the stratigraphy of the Tepenene, Tehuitzingo, Atzumba and Tepelmeme basins. The analyzed basins present similar tectono-sedimentary evolutions: Stage 1, depocenter formation and filling by clastic rocks accumulated as alluvial fans; and Stage 2, lacustrine sedimentation characterized by calcareous and/or evaporite beds. Based on our results, we propose the following hypothesis: in Southern Mexico during Eocene-Miocene times, the diachronic formation of fault-bounded basins with a general north-south trend occurred within the framework of the convergence between the North and South America plates, with the basins forming in the crust recently left behind as the Chortís block slipped towards the east. The onset of basin formation related to left strike-slip faults during Eocene-Oligocene times can be associated with the thermomechanical maturation process that caused the brittle/ductile transition level in the continental crust to shallow.

  13. Data Back-Up and Recovery Techniques for Cloud Server Using Seed Block Algorithm

    OpenAIRE

    R. V. Gandhi; M Seshaiah

    2015-01-01

    In cloud computing, data generated in electronic form are large in amount. To maintain this data efficiently, there is a necessity of data recovery services. To that end, we propose a smart remote data backup algorithm, the Seed Block Algorithm. The objective of the proposed algorithm is twofold: first, it helps users collect information from any remote location in the absence of network connectivity; and second, it recovers files in case of file deletion or if the cloud gets ...
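    The abstract does not spell out the recovery mechanism. In the published Seed Block Algorithm, the remote server stores each file XORed with a client-specific random seed block, so the original file can be recovered by XORing again. A minimal sketch under that assumption (all names here are illustrative, not the authors' code):

```python
import os

def make_seed_block(size: int) -> bytes:
    """Client-specific random seed block, held by the remote backup server."""
    return os.urandom(size)

def xor_bytes(data: bytes, seed: bytes) -> bytes:
    # XOR the data against the seed block, repeating the seed as needed.
    return bytes(b ^ seed[i % len(seed)] for i, b in enumerate(data))

def backup(file_data: bytes, seed: bytes) -> bytes:
    """What the remote server stores in place of the plain file."""
    return xor_bytes(file_data, seed)

def recover(backup_data: bytes, seed: bytes) -> bytes:
    """XOR is its own inverse, so recovery is the same operation."""
    return xor_bytes(backup_data, seed)
```

    Because XOR is an involution, a single pass both produces the backup and restores it, which keeps the remote server's workload minimal.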

  14. A block matching-based registration algorithm for localization of locally advanced lung tumors

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D., E-mail: gdhugo@vcu.edu [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia, 23298 (United States)

    2014-04-15

    Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm³), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0
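    The core operation of such a registration, matching a single block by exhaustive normalized cross-correlation search at one resolution, can be sketched as follows. This is a generic illustration; the paper's algorithm adds multiresolution search, consistency-based displacement selection, median filtering and a final Procrustes fit:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_block(planning, treatment, top_left, size, search=5):
    """Find the displacement of one block by exhaustive NCC search."""
    r0, c0 = top_left
    block = planning[r0:r0 + size, c0:c0 + size]
    best, best_d = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0 or r + size > treatment.shape[0] or c + size > treatment.shape[1]:
                continue
            s = ncc(block, treatment[r:r + size, c:c + size])
            if s > best:
                best, best_d = s, (dr, dc)
    return best_d, best
```

    Running many such independent block matches over the tumor surface yields the per-block displacement field that the paper then regularizes and fits rigidly.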

  15. A block matching-based registration algorithm for localization of locally advanced lung tumors

    International Nuclear Information System (INIS)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2014-01-01

    Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm³), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0.001). Left

  16. Analysis of Block OMP using Block RIP

    OpenAIRE

    Wang, Jun; Li, Gang; Zhang, Hao; Wang, Xiqin

    2011-01-01

    Orthogonal matching pursuit (OMP) is a canonical greedy algorithm for sparse signal reconstruction. When the signal of interest is block sparse, i.e., it has nonzero coefficients occurring in clusters, the block version of OMP algorithm (i.e., Block OMP) outperforms the conventional OMP. In this paper, we demonstrate that a new notion of block restricted isometry property (Block RIP), which is less stringent than standard restricted isometry property (RIP), can be used for a very straightforw...
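    The block-greedy selection the abstract refers to can be written in a few lines: at each step, pick the block of columns most correlated with the residual, then re-fit the selected blocks by least squares. This is a generic Block OMP sketch assuming equal-sized, non-overlapping blocks, not the authors' code:

```python
import numpy as np

def block_omp(A, y, block_size, n_blocks_to_pick):
    """Greedy block-sparse recovery: select the block of columns most
    correlated with the residual, then re-fit by least squares.
    Assumes A's column count is divisible by block_size."""
    m, n = A.shape
    blocks = [np.arange(i, i + block_size) for i in range(0, n, block_size)]
    support, r = [], y.copy()
    for _ in range(n_blocks_to_pick):
        # Block selection: largest l2 norm of correlations within a block.
        scores = [np.linalg.norm(A[:, b].T @ r) for b in blocks]
        support.append(int(np.argmax(scores)))
        cols = np.concatenate([blocks[i] for i in sorted(set(support))])
        # Least-squares fit on the selected columns, then update the residual.
        x_s, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        r = y - A[:, cols] @ x_s
    x = np.zeros(n)
    x[cols] = x_s
    return x
```

    Compared with plain OMP, the only change is that correlations are aggregated per block before the argmax, which is exactly where the Block RIP analysis applies.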

  17. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
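    The idea of truncating the Taylor series of the exact solution at order N is easy to illustrate on dx/dt = λx, whose k-th derivative is λ^k x, so one step of the Taylor-truncated update is a polynomial in λh. This is a toy illustration of the truncation idea only, not the authors' general construction:

```python
import math

def taylor_step(x, lam, h, order):
    """One step of the Nth-order Taylor-series update for dx/dt = lam * x,
    using the fact that the k-th derivative is lam**k * x."""
    return sum((lam * h) ** k / math.factorial(k) for k in range(order + 1)) * x

# Compare against the exact solution x(h) = x(0) * exp(lam * h)
x, lam, h = 1.0, -1.0, 0.1
approx = taylor_step(x, lam, h, order=8)
exact = math.exp(lam * h)
print(abs(approx - exact))  # truncation error of order h**9 / 9!
```

    Because the step reproduces the exact solution up to order N, the truncation error, and hence the numerical dissipation and phase error, is controllable by the choice of N.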

  18. Algebraic dynamics algorithm:Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  19. Reduction of Truncation Errors in Planar Near-Field Aperture Antenna Measurements Using the Gerchberg-Papoulis Algorithm

    DEFF Research Database (Denmark)

    Martini, Enrica; Breinbjerg, Olav; Maci, Stefano

    2008-01-01

    A simple and effective procedure for the reduction of truncation errors in planar near-field measurements of aperture antennas is presented. The procedure relies on the consideration that, due to the scan plane truncation, the calculated plane wave spectrum of the field radiated by the antenna is...

  20. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficul...

  1. A Post-Truncation Parameterization of Truncated Normal Technical Inefficiency

    OpenAIRE

    Christine Amsler; Peter Schmidt; Wen-Jen Tsay

    2013-01-01

    In this paper we consider a stochastic frontier model in which the distribution of technical inefficiency is truncated normal. In standard notation, technical inefficiency u is distributed as N^+ (μ,σ^2). This distribution is affected by some environmental variables z that may or may not affect the level of the frontier but that do affect the shortfall of output from the frontier. We will distinguish the pre-truncation mean (μ) and variance (σ^2) from the post-truncation mean μ_*=E(u) and var...
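    The pre/post-truncation distinction matters because environmental variables shift μ and σ² before truncation, while E(u) is a post-truncation quantity. For truncation below at zero, the standard formula for the post-truncation mean is μ_* = μ + σ φ(α)/(1 - Φ(α)) with α = -μ/σ; a quick numerical check of this standard result (not code from the paper):

```python
import math

def phi(x):
    """Standard normal pdf."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def post_truncation_mean(mu, sigma):
    """E(u) for u ~ N^+(mu, sigma^2), a normal truncated below at zero:
    mu_* = mu + sigma * phi(alpha) / (1 - Phi(alpha)), alpha = -mu / sigma."""
    alpha = -mu / sigma
    return mu + sigma * phi(alpha) / (1 - Phi(alpha))

# Pre-truncation mean 0 and sigma 1 give post-truncation mean sqrt(2/pi)
print(post_truncation_mean(0.0, 1.0))  # ≈ 0.7979
```

    Note that μ_* always exceeds μ, since truncation discards the negative part of the distribution.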

  2. How Truncating Are 'Truncating Languages'? Evidence from Russian and German.

    Science.gov (United States)

    Rathcke, Tamara V

    Russian and German have previously been described as 'truncating', or cutting off target frequencies of the phrase-final pitch trajectories when the time available for voicing is compromised. However, supporting evidence is rare and limited to only a few pitch categories. This paper reports a production study conducted to document pitch adjustments to linguistic materials, in which the amount of voicing available for the realization of a pitch pattern varies from relatively long to extremely short. Productions of nuclear H+L*, H* and L*+H pitch accents followed by a low boundary tone were investigated in the two languages. The results of the study show that speakers of both 'truncating languages' do not utilize truncation exclusively when accommodating to different segmental environments. On the contrary, they employ several strategies - among them truncation, but also compression and temporal re-alignment - to produce the target pitch categories under increasing time pressure. Given that speakers can systematically apply all three adjustment strategies to produce some pitch patterns (H* L% in German and Russian) while not using truncation in others (H+L* L% particularly in Russian), we question the effectiveness of the typological classification of these two languages as 'truncating'. Moreover, the phonetic detail of truncation varies considerably, both across and within the two languages, indicating that truncation cannot be easily modeled as a unified phenomenon. The results further suggest that the phrase-final pitch adjustments are sensitive to the phonological composition of the tonal string and the status of a particular tonal event (associated vs. boundary tone), and do not apply to falling vs. rising pitch contours across the board, as previously put forward for German. Implications for the intonational phonology and prosodic typology are addressed in the discussion. © 2017 S. Karger AG, Basel.

  3. Truncation artifact suppression in cone-beam radionuclide transmission CT using maximum likelihood techniques: evaluation with human subjects

    International Nuclear Information System (INIS)

    Manglos, S.H.

    1992-01-01

    Transverse image truncation can be a serious problem for human imaging using cone-beam transmission CT (CB-CT) implemented on a conventional rotating gamma camera. This paper presents a reconstruction method to reduce or eliminate the artifacts resulting from the truncation. The method uses a previously published transmission maximum likelihood EM algorithm, adapted to the cone-beam geometry. The reconstruction method is evaluated qualitatively using three human subjects of various dimensions and various degrees of truncation. (author)

  4. N-terminally truncated POM121C inhibits HIV-1 replication.

    Directory of Open Access Journals (Sweden)

    Hideki Saito

    Recent studies have identified host cell factors that regulate early stages of HIV-1 infection, including viral cDNA synthesis and orientation of the HIV-1 capsid (CA) core toward the nuclear envelope, but it remains unclear how viral DNA is imported through the nuclear pore and guided to the host chromosomal DNA. Here, we demonstrate that N-terminally truncated POM121C, a component of the nuclear pore complex, blocks HIV-1 infection. This truncated protein is predominantly localized in the cytoplasm, does not bind to CA, does not affect viral cDNA synthesis, reduces the formation of 2-LTR circles and diminishes the amount of integrated proviral DNA. Studies with an HIV-1-murine leukemia virus (MLV) chimeric virus carrying the MLV-derived Gag revealed that Gag is a determinant of this inhibition. Intriguingly, mutational studies have revealed that the blockade by N-terminally truncated POM121C is closely linked to its binding to importin-β/karyopherin subunit beta 1 (KPNB1). These results indicate that N-terminally truncated POM121C inhibits HIV-1 infection after completion of reverse transcription and before integration, and suggest an important role for KPNB1 in HIV-1 replication.

  5. Fully 3D PET image reconstruction using a Fourier preconditioned conjugate-gradient algorithm

    International Nuclear Information System (INIS)

    Fessler, J.A.; Ficaro, E.P.

    1996-01-01

    Since the data sizes in fully 3D PET imaging are very large, iterative image reconstruction algorithms must converge in very few iterations to be useful. One can improve the convergence rate of the conjugate-gradient (CG) algorithm by incorporating preconditioning operators that approximate the inverse of the Hessian of the objective function. If the 3D cylindrical PET geometry were not truncated at the ends, then the Hessian of the penalized least-squares objective function would be approximately shift-invariant, i.e. G'G would be nearly block-circulant, where G is the system matrix. We propose a Fourier preconditioner based on this shift-invariant approximation to the Hessian. Results show that this preconditioner significantly accelerates the convergence of the CG algorithm with only a small increase in computation
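    The shift-invariant (block-circulant) approximation can be illustrated in one dimension: a Toeplitz Hessian is preconditioned by its circulant counterpart, which the FFT diagonalizes, so applying M⁻¹ costs two FFTs and a pointwise division. A small sketch of this idea (not the authors' 3D PET implementation):

```python
import numpy as np

n, lam = 128, 10.0
# Toeplitz SPD system: identity plus a truncated (Dirichlet) 1D Laplacian.
A = (1 + 2 * lam) * np.eye(n) - lam * (np.eye(n, k=1) + np.eye(n, k=-1))

# Circulant (shift-invariant) approximation: the same stencil with
# wrap-around, diagonalized by the FFT.
stencil = np.zeros(n)
stencil[0], stencil[1], stencil[-1] = 1 + 2 * lam, -lam, -lam
eigvals = np.fft.fft(stencil).real  # all positive for this stencil

def precond(v):
    """Apply M^{-1} via FFT: two transforms and a pointwise division."""
    return np.real(np.fft.ifft(np.fft.fft(v) / eigvals))

def pcg(b, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients for A x = b."""
    x = np.zeros(n)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    it = 0
    while np.linalg.norm(r) > tol and it < maxit:
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        z_new = precond(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
        it += 1
    return x, it

b = np.random.default_rng(0).standard_normal(n)
x, iters = pcg(b)
```

    Since A differs from its circulant approximation only at the boundary (a low-rank correction), the preconditioned system has tightly clustered eigenvalues and CG converges in a handful of iterations.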

  6. The effect of truncation on very small cardiac SPECT camera systems

    International Nuclear Information System (INIS)

    Rohmer, Damien; Eisner, Robert L.; Gullberg, Grant T.

    2006-01-01

    Background: The limited transaxial field-of-view (FOV) of a very small cardiac SPECT camera system causes view-dependent truncation of the projection of structures exterior to, but near the heart. Basic tomographic principles suggest that the reconstruction of non-attenuated truncated data gives a distortion-free image in the interior of the truncated region, but the DC term of the Fourier spectrum of the reconstructed image is incorrect, meaning that the intensity scale of the reconstruction is inaccurate. The purpose of this study was to characterize the reconstructed image artifacts from truncated data, and to quantify their effects on the measurement of tracer uptake in the myocardium. Particular attention was given to instances where the heart wall is close to hot structures (structures of high activity uptake). Methods: The MCAT phantom was used to simulate a 2D slice of the heart region. Truncated and non-truncated projections were formed both with and without attenuation. The reconstructions were analyzed for artifacts in the myocardium caused by truncation, and for the effect that attenuation has relative to increasing those artifacts. Results: The inaccuracy due to truncation is primarily caused by an incorrect DC component. For visualizing the left ventricular wall, this error is not worse than the effect of attenuation. The addition of a small hot bowel-like structure near the left ventricle causes few changes in counts on the wall. Larger artifacts due to the truncation are located at the boundary of the truncation and can be eliminated by sinogram interpolation. Finally, algebraic reconstruction methods are shown to give better reconstruction results than an analytical filtered back-projection reconstruction algorithm. Conclusion: Small inaccuracies in reconstructed images from small FOV camera systems should have little effect on clinical interpretation. However, changes in the degree of inaccuracy in counts from slice to slice are due to changes in

  7. Security Analysis of a Block Encryption Algorithm Based on Dynamic Sequences of Multiple Chaotic Systems

    Science.gov (United States)

    Du, Mao-Kang; He, Bo; Wang, Yong

    2011-01-01

    Recently, the cryptosystem based on chaos has attracted much attention. Wang and Yu (Commun. Nonlin. Sci. Numer. Simulat. 14 (2009) 574) proposed a block encryption algorithm based on dynamic sequences of multiple chaotic systems. We analyze the potential flaws in the algorithm. Then, a chosen-plaintext attack is presented. Some remedial measures are suggested to avoid the flaws effectively. Furthermore, an improved encryption algorithm is proposed to resist the attacks and to keep all the merits of the original cryptosystem.

  8. A novel directional asymmetric sampling search algorithm for fast block-matching motion estimation

    Science.gov (United States)

    Li, Yue-e.; Wang, Qiang

    2011-11-01

    This paper proposes a novel directional asymmetric sampling search (DASS) algorithm for video compression. Making full use of the error information (block distortions) of the search patterns, eight different direction search patterns are designed for various situations. The strategy of local sampling search is employed for the search of large motion vectors. In order to further speed up the search, an early termination strategy is adopted in the DASS procedure. Compared to conventional fast algorithms, the proposed method has the most satisfactory PSNR values for all test sequences.

  9. Dual scan CT image recovery from truncated projections

    Science.gov (United States)

    Sarkar, Shubhabrata; Wahi, Pankaj; Munshi, Prabhat

    2017-12-01

    There are computerized tomography (CT) scanners available commercially for imaging small objects and they are often categorized as mini-CT X-ray machines. One major limitation of these machines is their inability to scan large objects with good image quality because of the truncation of projection data. An algorithm is proposed in this work which enables such machines to scan large objects while maintaining the quality of the recovered image.

  10. R Programs for Truncated Distributions

    Directory of Open Access Journals (Sweden)

    Saralees Nadarajah

    2006-08-01

    Truncated distributions arise naturally in many practical situations. In this note, we provide programs for computing six quantities of interest (probability density function, mean, variance, cumulative distribution function, quantile function and random numbers) for any truncated distribution, whether it is left truncated, right truncated or doubly truncated. The programs are written in R, a freely downloadable statistical software package.
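    The paper's programs are in R, but the underlying construction is the same in any language: rescale the density by the probability mass remaining in the truncation interval [a, b]. A Python sketch of the pdf and cdf cases, illustrated with a left-truncated standard normal:

```python
import math

def truncated(pdf, cdf, a=-math.inf, b=math.inf):
    """Build the pdf and cdf of a distribution truncated to [a, b]:
    f_T(x) = f(x) / (F(b) - F(a)) on [a, b], and zero elsewhere."""
    mass = cdf(b) - cdf(a)
    def tpdf(x):
        return pdf(x) / mass if a <= x <= b else 0.0
    def tcdf(x):
        if x < a:
            return 0.0
        if x > b:
            return 1.0
        return (cdf(x) - cdf(a)) / mass
    return tpdf, tcdf

# Example: standard normal truncated on the left at 0 (right tail only)
pdf = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
cdf = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
tpdf, tcdf = truncated(pdf, cdf, a=0.0)
print(tpdf(0.0))  # 2 * phi(0) ≈ 0.7979
```

    The remaining quantities (mean, variance, quantiles, random numbers) follow from tpdf and tcdf by integration, cdf inversion, and inverse-transform sampling, which is how the R programs are organized.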

  11. SU-G-BRA-02: Development of a Learning Based Block Matching Algorithm for Ultrasound Tracking in Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, A; Bednarz, B [University of Wisconsin, Madison, WI (United States)

    2016-06-15

    Purpose: To develop an ultrasound learning-based tracking algorithm with the potential to provide real-time motion traces of anatomy-based fiducials that may aid in the effective delivery of external beam radiation. Methods: The algorithm was developed in Matlab R2015a and consists of two main stages: reference frame selection, and localized block matching. Immediately following frame acquisition, a normalized cross-correlation (NCC) similarity metric is used to determine a reference frame most similar to the current frame from a series of training set images that were acquired during a pretreatment scan. Segmented features in the reference frame provide the basis for the localized block matching to determine the feature locations in the current frame. The boundary points of the reference frame segmentation are used as the initial locations for the block matching and NCC is used to find the most similar block in the current frame. The best matched block locations in the current frame comprise the updated feature boundary. The algorithm was tested using five features from two sets of ultrasound patient data obtained from MICCAI 2014 CLUST. Due to the lack of a training set associated with the image sequences, the first 200 frames of the image sets were considered a valid training set for preliminary testing, and tracking was performed over the remaining frames. Results: Tracking of the five vessel features resulted in an average tracking error of 1.21 mm relative to predefined annotations. The average analysis rate was 15.7 FPS with analysis for one of the two patients reaching real-time speeds. Computations were performed on an i5-3230M at 2.60 GHz. Conclusion: Preliminary tests show tracking errors comparable with similar algorithms at close to real-time speeds. Extension of the work onto a GPU platform has the potential to achieve real-time performance, making tracking for therapy applications a feasible option. This work is partially funded by NIH grant R01CA

  12. SU-G-BRA-02: Development of a Learning Based Block Matching Algorithm for Ultrasound Tracking in Radiotherapy

    International Nuclear Information System (INIS)

    Shepard, A; Bednarz, B

    2016-01-01

    Purpose: To develop an ultrasound learning-based tracking algorithm with the potential to provide real-time motion traces of anatomy-based fiducials that may aid in the effective delivery of external beam radiation. Methods: The algorithm was developed in Matlab R2015a and consists of two main stages: reference frame selection, and localized block matching. Immediately following frame acquisition, a normalized cross-correlation (NCC) similarity metric is used to determine a reference frame most similar to the current frame from a series of training set images that were acquired during a pretreatment scan. Segmented features in the reference frame provide the basis for the localized block matching to determine the feature locations in the current frame. The boundary points of the reference frame segmentation are used as the initial locations for the block matching and NCC is used to find the most similar block in the current frame. The best matched block locations in the current frame comprise the updated feature boundary. The algorithm was tested using five features from two sets of ultrasound patient data obtained from MICCAI 2014 CLUST. Due to the lack of a training set associated with the image sequences, the first 200 frames of the image sets were considered a valid training set for preliminary testing, and tracking was performed over the remaining frames. Results: Tracking of the five vessel features resulted in an average tracking error of 1.21 mm relative to predefined annotations. The average analysis rate was 15.7 FPS with analysis for one of the two patients reaching real-time speeds. Computations were performed on an i5-3230M at 2.60 GHz. Conclusion: Preliminary tests show tracking errors comparable with similar algorithms at close to real-time speeds. Extension of the work onto a GPU platform has the potential to achieve real-time performance, making tracking for therapy applications a feasible option. This work is partially funded by NIH grant R01CA

  13. Efficient Tridiagonal Preconditioner for the Matrix-Free Truncated Newton Method

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Vlček, Jan

    2014-01-01

    Roč. 235, 25 May (2014), s. 394-407 ISSN 0096-3003 R&D Projects: GA ČR GA13-06684S Institutional support: RVO:67985807 Keywords : unconstrained optimization * large scale optimization * matrix-free truncated Newton method * preconditioned conjugate gradient method * preconditioners obtained by the directional differentiation * numerical algorithms Subject RIV: BA - General Mathematics Impact factor: 1.551, year: 2014

  14. Quality-aware features-based noise level estimator for block matching and three-dimensional filtering algorithm

    Science.gov (United States)

    Xu, Shaoping; Hu, Lingyan; Yang, Xiaohui

    2016-01-01

    The performance of conventional denoising algorithms is usually controlled by one or several parameters whose optimal settings depend on the contents of the processed images and the characteristics of the noises. Among these parameters, noise level is a fundamental parameter that is always assumed to be known by most of the existing denoising algorithms (so-called nonblind denoising algorithms), which largely limits the applicability of these nonblind denoising algorithms in many applications. Moreover, these nonblind algorithms do not always achieve the best denoised images in visual quality even when fed with the actual noise level parameter. To address these shortcomings, in this paper we propose a new quality-aware features-based noise level estimator (NLE), which consists of quality-aware features extraction and optimal noise level parameter prediction. First, considering that image local contrast features convey important structural information that is closely related to image perceptual quality, we utilize the marginal statistics of two local contrast operators, i.e., the gradient magnitude and the Laplacian of Gaussian (LOG), to extract quality-aware features. The proposed quality-aware features have very low computational complexity, making them well suited for time-constrained applications. Then we propose a learning-based framework where the noise level parameter is estimated based on the quality-aware features. Based on the proposed NLE, we develop a blind block matching and three-dimensional filtering (BBM3D) denoising algorithm which is capable of effectively removing additive white Gaussian noise, even coupled with impulse noise. The noise level parameter of the BBM3D algorithm is automatically tuned according to the quality-aware features, guaranteeing the best performance. As such, the classical block matching and three-dimensional algorithm can be transformed into a blind one in an unsupervised manner. Experimental results demonstrate that the

  15. Zero-truncated negative binomial - Erlang distribution

    Science.gov (United States)

    Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana

    2017-11-01

    The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are included. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by using maximum likelihood estimation. Finally, the proposed distribution is applied to real data, the number of methamphetamine cases in Bangkok, Thailand. Based on the results, the zero-truncated negative binomial-Erlang distribution provides a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative binomial and zero-truncated Poisson-Lindley distributions for this data.

  16. Angular truncation errors in integrating nephelometry

    International Nuclear Information System (INIS)

    Moosmueller, Hans; Arnott, W. Patrick

    2003-01-01

    Ideal integrating nephelometers integrate light scattered by particles over all directions. However, real nephelometers truncate light scattered in near-forward and near-backward directions below a certain truncation angle (typically 7°). This results in truncation errors, with the forward truncation error becoming important for large particles. Truncation errors are commonly calculated using Mie theory, which offers little physical insight and no generalization to nonspherical particles. We show that large-particle forward truncation errors can be calculated and understood using geometric optics and diffraction theory. For small truncation angles (i.e., <10°), as typical for modern nephelometers, diffraction theory by itself is sufficient. Forward truncation errors are, by nearly a factor of 2, larger for absorbing particles than for nonabsorbing particles because for large absorbing particles most of the scattered light is due to diffraction, as transmission is suppressed. Nephelometer calibration procedures are also discussed, as they influence the effective truncation error.

  17. Image Blocking Encryption Algorithm Based on Laser Chaos Synchronization

    Directory of Open Access Journals (Sweden)

    Shu-Ying Wang

    2016-01-01

    In view of digital image transmission security, a novel image encryption scheme based on laser chaos synchronization and the Arnold cat map is proposed. A parameter generated from the pixel values of the plain image influences the secret key. Sequences of the drive system and response system are pretreated by the same method and used to encrypt the plain image block by block. Finally, pixel positions are scrambled by a generalized Arnold transformation. In the decryption process, the chaotic synchronization accuracy is fully considered and the relationship between the synchronization effect and decryption is analyzed; the scheme has the characteristics of high precision, high efficiency, simplicity, flexibility, and good controllability. The experimental results show that the encrypted image has high security and good antijamming performance.
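    The position-scrambling stage is the most concrete part to illustrate. Below is a sketch of a generalized Arnold cat map for a square N×N image, with the chaos-based pixel-value encryption omitted; because the map's matrix [[1, b], [a, a*b+1]] has determinant 1 modulo N, it permutes pixel positions losslessly and is exactly invertible:

```python
import numpy as np

def arnold_scramble(img, iterations=1, a=1, b=1):
    """Scramble pixel positions of a square N x N image with the generalized
    Arnold cat map: (x, y) -> (x + b*y, a*x + (a*b + 1)*y) mod N."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + b * y) % n, (a * x + (a * b + 1) * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_unscramble(img, iterations=1, a=1, b=1):
    """Invert the map with the inverse matrix [[a*b+1, -b], [-a, 1]] mod N."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for u in range(n):
            for v in range(n):
                nxt[((a * b + 1) * u - b * v) % n, (-a * u + v) % n] = out[u, v]
        out = nxt
    return out
```

    In the full scheme, the iteration count and map parameters can be tied to the chaos-derived key, so both the value encryption and the position scrambling depend on the secret.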

  18. A fast BDD algorithm for large coherent fault trees analysis

    International Nuclear Information System (INIS)

    Jung, Woo Sik; Han, Sang Hoon; Ha, Jaejoo

    2004-01-01

    Although a binary decision diagram (BDD) algorithm has been tried to solve large fault trees until quite recently, they are not efficiently solved in a short time since the size of a BDD structure exponentially increases according to the number of variables. Furthermore, the truncation of If-Then-Else (ITE) connectives by the probability or size limit and the subsuming to delete subsets could not be directly applied to the intermediate BDD structure under construction. This is the motivation for this work. This paper presents an efficient BDD algorithm for large coherent systems (coherent BDD algorithm) by which the truncation and subsuming could be performed in the progress of the construction of the BDD structure. A set of new formulae developed in this study for AND or OR operation between two ITE connectives of a coherent system makes it possible to delete subsets and truncate ITE connectives with a probability or size limit in the intermediate BDD structure under construction. By means of the truncation and subsuming in every step of the calculation, large fault trees for coherent systems (coherent fault trees) are efficiently solved in a short time using less memory. Furthermore, the coherent BDD algorithm from the aspect of the size of a BDD structure is much less sensitive to variable ordering than the conventional BDD algorithm

  19. Object Detection and Tracking using Modified Diamond Search Block Matching Motion Estimation Algorithm

    Directory of Open Access Journals (Sweden)

    Apurva Samdurkar

    2018-06-01

    Object tracking is one of the main fields within computer vision. Amongst the various approaches for object detection and tracking, the background subtraction approach makes detection of the object easier. The proposed block matching algorithm is then applied to the detected object to generate motion vectors. The existing diamond search (DS) and cross diamond search (CDS) algorithms are studied, and experiments are carried out on various standard and user-defined video data sets. Based on the study and analysis of these two existing algorithms, a modified diamond search (MDS) algorithm is proposed, using a small diamond shape search pattern in the initial step and a large diamond shape (LDS) in further steps for motion estimation. The initial search pattern consists of five points in a small diamond shape and gradually grows into a large diamond shape pattern, based on the point with the minimum cost function. The algorithm terminates with the small diamond pattern. The proposed MDS algorithm finds smaller motion vectors with fewer search points than the existing DS and CDS algorithms. Further, object detection is carried out using the background subtraction approach and, finally, the MDS motion estimation algorithm is used for tracking the object in color video sequences. The experiments are carried out using different video data sets containing a single object. The results are evaluated and compared using evaluation parameters such as average search points per frame and average computation time per frame. The experimental results show that MDS performs better than DS and CDS on average search points and average computation time.
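    For reference, the classic diamond search that MDS modifies can be sketched as follows: repeat the large diamond pattern until the minimum-cost point is the pattern center, then refine once with the small diamond. This is a generic DS sketch using the sum of absolute differences as the cost function, not the authors' MDS:

```python
import numpy as np

# Large and small diamond search patterns (offsets from the center)
LDSP = [(0, 0), (0, -2), (0, 2), (-2, 0), (2, 0), (-1, -1), (-1, 1), (1, -1), (1, 1)]
SDSP = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]

def sad(block, frame, r, c):
    """Sum of absolute differences: the block-matching cost function."""
    h, w = block.shape
    if r < 0 or c < 0 or r + h > frame.shape[0] or c + w > frame.shape[1]:
        return np.inf
    return np.abs(block.astype(int) - frame[r:r + h, c:c + w].astype(int)).sum()

def diamond_search(block, frame, r0, c0):
    """Classic diamond search: apply the large diamond pattern until the
    best point is the center, then refine once with the small diamond."""
    r, c = r0, c0
    while True:
        costs = [(sad(block, frame, r + dr, c + dc), (r + dr, c + dc)) for dr, dc in LDSP]
        _, (br, bc) = min(costs)
        if (br, bc) == (r, c):
            break
        r, c = br, bc
    costs = [(sad(block, frame, r + dr, c + dc), (r + dr, c + dc)) for dr, dc in SDSP]
    _, (r, c) = min(costs)
    return r - r0, c - c0  # motion vector
```

    MDS reverses the pattern order described in the abstract (small diamond first, then large), which reduces search points for the small motion vectors that dominate typical sequences.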

  20. Tailor-made dimensions of diblock copolymer truncated micelles on a solid by UV irradiation.

    Science.gov (United States)

    Liou, Jiun-You; Sun, Ya-Sen

    2015-09-28

    We investigated the structural evolution of truncated micelles in ultrathin films of polystyrene-block-poly(2-vinylpyridine), PS-b-P2VP, of monolayer thickness on bare silicon substrates (SiOx/Si) upon UV irradiation in air- (UVIA) and nitrogen-rich (UVIN) environments. The structural evolution of micelles upon UV irradiation was monitored using GISAXS measurements in situ, while the surface morphology was probed using atomic force microscopy ex situ and the chemical composition using X-ray photoelectron spectroscopy (XPS). This work provides clear evidence for the interpretation of the relationship between the structural evolution and photochemical reactions in PS-b-P2VP truncated micelles upon UVIA and UVIN. Under UVIA treatment, photolysis and cross-linking reactions coexisted within the micelles; photolysis occurred mainly at the top of the micelles, whereas cross-linking occurred preferentially at the bottom. The shape and size of UVIA-treated truncated micelles were controlled predominantly by oxidative photolysis reactions, which depended on the concentration gradient of free radicals and oxygen along the micelle height. Because of an interplay between photolysis and photo-crosslinking, the scattering length densities (SLD) of PS and P2VP remained constant. In contrast, UVIN treatments enhanced the contrast in SLD between the PS shell and the P2VP core as cross-linking dominated over photolysis in the presence of nitrogen. The enhancement of the SLD contrast was due to the various degrees of cross-linking under UVIN for the PS and P2VP blocks.

  1. Truncatable bootstrap equations in algebraic form and critical surface exponents

    Energy Technology Data Exchange (ETDEWEB)

    Gliozzi, Ferdinando [Dipartimento di Fisica, Università di Torino andIstituto Nazionale di Fisica Nucleare - sezione di Torino,Via P. Giuria 1, Torino, I-10125 (Italy)

    2016-10-10

    We describe examples of drastic truncations of conformal bootstrap equations encoding much more information than that obtained by a direct numerical approach. A three-term truncation of the four point function of a free scalar in any space dimensions provides algebraic identities among conformal block derivatives which generate the exact spectrum of the infinitely many primary operators contributing to it. In boundary conformal field theories, we point out that the appearance of free parameters in the solutions of bootstrap equations is not an artifact of truncations, rather it reflects a physical property of permeable conformal interfaces which are described by the same equations. Surface transitions correspond to isolated points in the parameter space. We are able to locate them in the case of 3d Ising model, thanks to a useful algebraic form of 3d boundary bootstrap equations. It turns out that the low-lying spectra of the surface operators in the ordinary and the special transitions of 3d Ising model form two different solutions of the same polynomial equation. Their interplay yields an estimate of the surface renormalization group exponents, y{sub h}=0.72558(18) for the ordinary universality class and y{sub h}=1.646(2) for the special universality class, which compare well with the most recent Monte Carlo calculations. Estimates of other surface exponents as well as OPE coefficients are also obtained.

  2. Statistical estimation for truncated exponential families

    CERN Document Server

    Akahira, Masafumi

    2017-01-01

    This book presents new findings on nonregular statistical estimation. Unlike other books on this topic, its major emphasis is on helping readers understand the meaning and implications of both regularity and irregularity through a certain family of distributions. In particular, it focuses on a truncated exponential family of distributions with a natural parameter and truncation parameter as a typical nonregular family. This focus includes the (truncated) Pareto distribution, which is widely used in various fields such as finance, physics, hydrology, geology, astronomy, and other disciplines. The family is essential in that it links both regular and nonregular distributions, as it becomes a regular exponential family if the truncation parameter is known. The emphasis is on presenting new results on the maximum likelihood estimation of a natural parameter or truncation parameter if one of them is a nuisance parameter. In order to obtain more information on the truncation, the Bayesian approach is also considere...
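The Pareto case mentioned above is simple enough to sketch: the truncation parameter gamma is estimated by the sample minimum, and the natural parameter alpha by the usual score equation. A minimal illustration under the convention f(x) = alpha * gamma**alpha / x**(alpha + 1) for x >= gamma; the function names are ours, not the book's.

```python
import math
import random

def pareto_mle(sample):
    """Closed-form MLEs for Pareto(alpha, gamma): gamma (the truncation
    parameter) is estimated by the sample minimum, and alpha (the natural
    parameter) by n / sum(log(x_i / gamma_hat))."""
    n = len(sample)
    gamma_hat = min(sample)
    alpha_hat = n / sum(math.log(x / gamma_hat) for x in sample)
    return alpha_hat, gamma_hat

def pareto_sample(alpha, gamma, n, rng):
    """Inverse-CDF sampling: x = gamma * u**(-1/alpha) with u ~ U(0, 1]."""
    return [gamma * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]
```

With a few thousand draws, gamma_hat converges at rate 1/n while alpha_hat converges at the regular 1/sqrt(n) rate, which is exactly the mix of nonregular and regular behavior the book studies.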

  3. NLO renormalization in the Hamiltonian truncation

    Science.gov (United States)

    Elias-Miró, Joan; Rychkov, Slava; Vitale, Lorenzo G.

    2017-09-01

    Hamiltonian truncation (also known as "truncated spectrum approach") is a numerical technique for solving strongly coupled quantum field theories, in which the full Hilbert space is truncated to a finite-dimensional low-energy subspace. The accuracy of the method is limited only by the available computational resources. The renormalization program improves the accuracy by carefully integrating out the high-energy states, instead of truncating them away. In this paper, we develop the most accurate ever variant of Hamiltonian Truncation, which implements renormalization at the cubic order in the interaction strength. The novel idea is to interpret the renormalization procedure as a result of integrating out exactly a certain class of high-energy "tail states." We demonstrate the power of the method with high-accuracy computations in the strongly coupled two-dimensional quartic scalar theory and benchmark it against other existing approaches. Our work will also be useful for the future goal of extending Hamiltonian truncation to higher spacetime dimensions.

  4. Stability of Slopes Reinforced with Truncated Piles

    Directory of Open Access Journals (Sweden)

    Shu-Wei Sun

    2016-01-01

    Full Text Available Piles are extensively used as a means of slope stabilization. A novel engineering technique of truncated piles that are unlike traditional piles is introduced in this paper. A simplified numerical method is proposed to analyze the stability of slopes stabilized with truncated piles based on the shear strength reduction method. The influential factors, which include pile diameter, pile spacing, depth of truncation, and existence of a weak layer, are systematically investigated from a practical point of view. The results show that an optimum ratio exists between the depth of truncation and the pile length above a slip surface, below which truncating behavior has no influence on the piled slope stability. This optimum ratio is bigger for slopes stabilized with more flexible piles and piles with larger spacing. Besides, truncated piles are more suitable for slopes with a thin weak layer than homogenous slopes. In practical engineering, the piles could be truncated reasonably while ensuring the reinforcement effect. The truncated part of piles can be filled with the surrounding soil and compacted to reduce costs by using fewer materials.

  5. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses summed absolute difference (SAD) error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids.The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a non-integer search grid. The additional speedup for non-integer search grid comes from the fact that GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with a number of cards is achievable.In addition we compared execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU based motion estimations methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and Simplified Unsymmetrical multi-Hexagon search in H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation.We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards.

  6. Adaptation of the delta-m and δ-fit truncation methods to vector radiative transfer: Effect of truncation on radiative transfer accuracy

    International Nuclear Information System (INIS)

    Sanghavi, Suniti; Stephens, Graeme

    2015-01-01

    In the presence of aerosol and/or clouds, the use of appropriate truncation methods becomes indispensable for accurate but cost-efficient radiative transfer computations. Truncation methods allow the reduction of the large number (usually several hundreds) of Fourier components associated with particulate scattering functions to a more manageable number, thereby making it possible to carry out radiative transfer computations with a modest number of streams. While several truncation methods have been discussed for scalar radiative transfer, few rigorous studies have been made of truncation methods for the vector case. Here, we formally derive the vector form of Wiscombe's delta-m truncation method. Two main sources of error associated with delta-m truncation are identified as the delta-separation error (DSE) and the phase-truncation error (PTE). The view angles most affected by truncation error occur in the vicinity of the direction of exact backscatter. This view geometry occurs commonly in satellite-based remote sensing applications, and is hence of considerable importance. In order to deal with these errors, we adapt the δ-fit approach of Hu et al. (2000) [17] to vector radiative transfer. The resulting δBGE-fit is compared with the vectorized delta-m method. For truncation at l=25 of an original phase matrix consisting of over 300 Fourier components, the use of the δBGE-fit minimizes the error due to truncation at these view angles, while practically eliminating error at other angles. We also show how truncation errors have a distorting effect on hyperspectral absorption line shapes. The choice of the δBGE-fit method over delta-m truncation minimizes errors in absorption line depths, thus affording greater accuracy for sensitive retrievals such as those of XCO2 from OCO-2 or GOSAT measurements. - Highlights: • Derives vector form for delta-m truncation method. • Adapts δ-fit truncation approach to vector RTE as δBGE-fit. • Compares truncation
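Wiscombe's delta-m scaling, which the paper extends to the vector case, is compact enough to show for the scalar moments. This is a sketch under the usual scalar conventions (normalized Legendre moments with chi[0] = 1, truncation order M, and the standard similarity relations for optical depth and single-scattering albedo), not the paper's vector formulation; function names are ours.

```python
def delta_m_truncate(chi, M):
    """Delta-m truncation: fold the forward peak into a delta function of
    weight f = chi[M] and rescale the first M normalized Legendre
    moments, chi'[l] = (chi[l] - f) / (1 - f)."""
    f = chi[M]
    return f, [(c - f) / (1.0 - f) for c in chi[:M]]

def delta_m_scale_medium(tau, omega, f):
    """Similarity relations that accompany delta-m truncation:
    tau' = (1 - f*omega) * tau and omega' = (1 - f)*omega / (1 - f*omega)."""
    return (1.0 - f * omega) * tau, (1.0 - f) * omega / (1.0 - f * omega)
```

For a Henyey-Greenstein phase function with asymmetry parameter g the normalized moments are chi[l] = g**l, so the separated forward fraction is simply f = g**M.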

  7. Computing correct truncated excited state wavefunctions

    Science.gov (United States)

    Bacalis, N. C.; Xiong, Z.; Zang, J.; Karaoulanis, D.

    2016-12-01

    We demonstrate that, if a wave function's truncated expansion is small, then the standard excited states computational method, of optimizing one "root" of a secular equation, may lead to an incorrect wave function - despite the correct energy according to the theorem of Hylleraas, Undheim and McDonald - whereas our proposed method [J. Comput. Meth. Sci. Eng. 8, 277 (2008)] (independent of orthogonality to lower lying approximants) leads to correct reliable small truncated wave functions. The demonstration is done in He excited states, using truncated series expansions in Hylleraas coordinates, as well as standard configuration-interaction truncated expansions.

  8. Fast voxel and polygon ray-tracing algorithms in intensity modulated radiation therapy treatment planning

    International Nuclear Information System (INIS)

    Fox, Christopher; Romeijn, H. Edwin; Dempsey, James F.

    2006-01-01

    We present work on combining three algorithms to improve ray-tracing efficiency in radiation therapy dose computation. The three algorithms include an improved point-in-polygon algorithm, an incremental voxel ray-tracing algorithm, and stereographic projection of beamlets for voxel truncation. The point-in-polygon and incremental voxel ray-tracing algorithms have been used in computer graphics and nuclear medicine applications, while the stereographic projection algorithm was developed by our group. These algorithms demonstrate significant improvements over the current standard algorithms in the peer-reviewed literature, i.e., the polygon and voxel ray-tracing algorithms of Siddon for voxel classification (point-in-polygon testing) and dose computation, respectively, and radius testing for voxel truncation. The presented polygon ray-tracing technique was tested on 10 intensity modulated radiation therapy (IMRT) treatment planning cases that required the classification of between 0.58 and 2.0 million voxels on a 2.5 mm isotropic dose grid into 1-4 targets and 5-14 structures represented as extruded polygons (a.k.a. Siddon prisms). Incremental voxel ray tracing and voxel truncation employing virtual stereographic projection were tested on the same IMRT treatment planning cases, where voxel dose was required for 230-2400 beamlets using a finite-size pencil-beam algorithm. A 100- to 360-fold CPU time improvement over Siddon's method was observed for the polygon ray-tracing algorithm when classifying voxels for target and structure membership. A 2.6- to 3.1-fold reduction in CPU time over current algorithms was found for the implementation of incremental ray tracing. Additionally, voxel truncation via stereographic projection was observed to be 11-25 times faster than the radial-testing beamlet extent approach and was further improved 1.7-2.0 fold through point classification using the method of translation over the cross product technique
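The incremental voxel ray tracing credited above can be illustrated with the classic Amanatides-Woo traversal, shown here in 2D for brevity. This is our sketch of the general technique, not the paper's implementation: it walks a segment across a unit grid, advancing exactly one cell per step by comparing the parametric distances to the next vertical and horizontal grid lines.

```python
import math

def voxel_traversal(x0, y0, x1, y1):
    """Incremental (Amanatides-Woo style) traversal of a 2D unit grid:
    returns the cells visited by the segment from (x0, y0) to (x1, y1)."""
    x, y = int(math.floor(x0)), int(math.floor(y0))
    end = (int(math.floor(x1)), int(math.floor(y1)))
    dx, dy = x1 - x0, y1 - y0
    step_x = 1 if dx > 0 else -1
    step_y = 1 if dy > 0 else -1
    # Parametric distance to the first vertical/horizontal grid crossing,
    # and the constant increment accumulated per cell crossed.
    t_max_x = ((x + (step_x > 0)) - x0) / dx if dx else float("inf")
    t_max_y = ((y + (step_y > 0)) - y0) / dy if dy else float("inf")
    t_dx = abs(1.0 / dx) if dx else float("inf")
    t_dy = abs(1.0 / dy) if dy else float("inf")
    cells = [(x, y)]
    while (x, y) != end:
        if t_max_x < t_max_y:   # the next vertical grid line is nearer
            x += step_x
            t_max_x += t_dx
        else:                   # the next horizontal line is nearer (or tie)
            y += step_y
            t_max_y += t_dy
        cells.append((x, y))
    return cells
```

Each step costs one comparison and one addition, which is the source of the speedup over recomputing all plane intersections for every voxel boundary.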

  9. Mixtures of truncated basis functions

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2012-01-01

    In this paper we propose a framework, called mixtures of truncated basis functions (MoTBFs), for representing general hybrid Bayesian networks. The proposed framework generalizes both the mixture of truncated exponentials (MTEs) framework and the mixture of polynomials (MoPs) framework. Similar t...

  10. Bootstrapped efficiency measures of oil blocks in Angola

    International Nuclear Information System (INIS)

    Barros, C.P.; Assaf, A.

    2009-01-01

    This paper investigates the technical efficiency of Angola's oil blocks over the period 2002-2007. A double-bootstrap data envelopment analysis (DEA) model is adopted, composed of a DEA variable-returns-to-scale (VRS) model in the first stage followed by a bootstrapped truncated regression in the second stage. Results show that, on average, technical efficiency fluctuated over the study period, but deep and ultradeep oil blocks generally maintained a consistent efficiency level. Policy implications are derived.

  11. Direct block scheduling technology: Analysis of Avidity

    Directory of Open Access Journals (Sweden)

    Felipe Ribeiro Souza

    Full Text Available Abstract This study focuses on the Direct Block Scheduling (Direct Multi-Period Scheduling) methodology, which schedules mine production by applying the correct discount factor to each mining block, resulting in the final pit. Each block is analyzed individually in order to define the best target period. This methodology improves on the classical methodology derived from Lerchs-Grossmann's initial proposition as refined by Whittle. This paper presents the differences between these methodologies, with particular focus on the algorithms' avidity. Avidity is classically associated with greedy search algorithms, among the best known of which are Branch and Bound, Brute Force, and Randomized. Strategies based on heuristics can accentuate the avidity of the optimizer system. The applied algorithm uses simulated annealing combined with Tabu Search. The most avid algorithm can select the most profitable blocks in the earliest periods, leading to a higher present value in the first periods of mine operation. The application of discount factors to blocks on the Lerchs-Grossmann final pit has an effect that becomes more pronounced with time, and this effect may make blocks scheduled for the end of the mine life unfeasible, representing a trend toward a decrease in reported reserves.
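The effect of per-block discounting on an avid schedule can be shown with a toy calculation. This illustrates only the principle (a fixed per-period mining capacity and no pit-slope precedence constraints, which the real methodology must respect); the function and its parameters are ours.

```python
def npv_schedule(block_values, capacity, d=0.10, greedy=True):
    """Net present value of a toy direct block schedule: each period
    mines `capacity` blocks, and a block of value v mined in period t
    contributes v / (1 + d)**t. With greedy=True the most profitable
    blocks are scheduled first (the 'avid' behavior discussed above)."""
    order = sorted(block_values, reverse=True) if greedy else list(block_values)
    return sum(v / (1.0 + d) ** (i // capacity) for i, v in enumerate(order))
```

With blocks [1, 2, 3, 4], capacity 2 and d = 0.1, the avid schedule earns 7 + 3/1.1 (about 9.73) versus 3 + 7/1.1 (about 9.36) when the cheap blocks are mined first, which is why discounting rewards avid optimizers.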

  12. Truncated Levy flights and agenda-based mobility are useful for the assessment of personal human exposure

    International Nuclear Information System (INIS)

    Schlink, Uwe; Ragas, Ad M.J.

    2011-01-01

    Receptor-oriented approaches can assess individual-specific exposure to air pollution. Using such an individual-based model, we analyse the impact of human mobility on the personal exposure perceived by individuals simulated in an example urban area. The mobility models comprise random walk (reference point mobility, RPM), truncated Levy flights (TLF), and agenda-based walk (RPMA). We describe and review the general concepts and provide an inter-comparison of them. Stationary and ergodic behaviour are explained and applied, as are performance criteria for a comparative evaluation of the investigated algorithms. We find that none of the studied algorithms results in purely random trajectories. TLF and RPMA prove to be suitable for human mobility modelling, because they provide conditions for highly individual-specific trajectories and exposure. Using these models, we demonstrate the plausibility of their results for exposure to air-borne benzene and for combined exposure to benzene and nonane. - Highlights: → Human exposure to air pollutants is influenced by a person's movement in the urban area. → We provide a simulation study of approaches to modelling personal exposure. → Agenda-based models and truncated Levy flights are recommended for exposure assessment. → The procedure is demonstrated for benzene exposure in an urban region. - Truncated Levy flights and agenda-based mobility are useful for the assessment of personal human exposure.

  13. Properties of truncated multiplicity distributions

    International Nuclear Information System (INIS)

    Lupia, S.

    1995-01-01

    Truncation effects on multiplicity distributions are discussed. Observables sensitive to the tail, like factorial moments, factorial cumulants and their ratio, are shown to be strongly affected by truncation. A possible way to overcome this problem by looking at the head of the distribution is suggested. (author)
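The sensitivity of factorial moments to truncation is easy to demonstrate numerically. A sketch assuming a Poisson multiplicity distribution, for which the untruncated factorial moments are F_q = lambda**q; truncating the tail at n_max and renormalizing pulls F_2 well below lambda**2, which is the tail sensitivity discussed above. Function names are ours.

```python
from math import exp, factorial

def poisson_pmf(n, lam):
    """Poisson probability P(n) = exp(-lam) * lam**n / n!."""
    return exp(-lam) * lam ** n / factorial(n)

def factorial_moment(pmf, q, n_max):
    """q-th factorial moment F_q = sum_n n(n-1)...(n-q+1) P(n) of a
    multiplicity distribution truncated at n_max, with the truncated
    distribution renormalized to unit probability."""
    norm = sum(pmf(n) for n in range(n_max + 1))
    total = 0.0
    for n in range(q, n_max + 1):
        weight = 1.0
        for k in range(q):
            weight *= n - k   # the falling factorial n(n-1)...(n-q+1)
        total += weight * pmf(n)
    return total / norm
```

With lambda = 5, an effectively untruncated sum (n_max = 60) reproduces F_2 = 25, while cutting at n_max = 7 noticeably suppresses it.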

  14. Properties of truncated multiplicity distributions

    Energy Technology Data Exchange (ETDEWEB)

    Lupia, S. [Turin Univ. (Italy). Dipt. di Fisica

    1995-12-31

    Truncation effects on multiplicity distributions are discussed. Observables sensitive to the tail, like factorial moments, factorial cumulants and their ratio, are shown to be strongly affected by truncation. A possible way to overcome this problem by looking at the head of the distribution is suggested. (author)

  15. Analysis of truncation limit in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Cepin, Marko

    2005-01-01

    A truncation limit defines the boundary between what is considered in a probabilistic safety assessment and what is neglected. The focus here is the limit on the size of a minimal cut set contribution below which cut sets are discarded. A new method was developed that defines the truncation limit in probabilistic safety assessment, specifying truncation limits more stringently than existing documents dealing with truncation criteria in probabilistic safety assessment do. The results of this paper indicate that the truncation limits for more complex probabilistic safety assessments, which consist of a larger number of basic events, should be more severe than presently recommended in existing documents if more accuracy is desired. The truncation limits defined by the new method reduce the relative errors of importance measures and produce more accurate results for probabilistic safety assessment applications. The reduced relative errors of importance measures can prevent situations where the acceptability of a change to the equipment under investigation according to RG 1.174 would be shifted from the region where changes can be accepted to the region where changes cannot be accepted, had the results been calculated with a smaller truncation limit.
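The quantity being truncated can be made concrete with a toy fault tree. This is a hedged sketch using the rare-event approximation (the sum of minimal cut set probabilities); the cut sets, events, and probabilities below are invented for illustration and are not from the paper.

```python
def top_event_probability(cut_sets, basic_probs, trunc=0.0):
    """Rare-event approximation of the top-event probability: sum the
    products of basic-event probabilities over the minimal cut sets,
    discarding any cut set whose probability falls below the truncation
    limit `trunc`. Returns (probability, number of cut sets kept)."""
    total, kept = 0.0, 0
    for cut_set in cut_sets:
        p = 1.0
        for event in cut_set:
            p *= basic_probs[event]
        if p >= trunc:
            total += p
            kept += 1
    return total, kept
```

Raising the truncation limit trades a small underestimate of the top-event probability (and of any importance measure derived from it) for a shorter cut set list, which is exactly the error-versus-cost balance the paper quantifies.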

  16. Truncated Calogero-Sutherland models

    Science.gov (United States)

    Pittman, S. M.; Beau, M.; Olshanii, M.; del Campo, A.

    2017-05-01

    A one-dimensional quantum many-body system consisting of particles confined in a harmonic potential and subject to finite-range two-body and three-body inverse-square interactions is introduced. The range of the interactions is set by truncation beyond a number of neighbors and can be tuned to interpolate between the Calogero-Sutherland model and a system with nearest and next-nearest neighbors interactions discussed by Jain and Khare. The model also includes the Tonks-Girardeau gas describing impenetrable bosons as well as an extension with truncated interactions. While the ground state wave function takes a truncated Bijl-Jastrow form, collective modes of the system are found in terms of multivariable symmetric polynomials. We numerically compute the density profile, one-body reduced density matrix, and momentum distribution of the ground state as a function of the range r and the interaction strength.

  17. Study and optimization of positioning algorithms for monolithic PET detectors blocks

    International Nuclear Information System (INIS)

    Acilu, P Garcia de; Sarasola, I; Canadas, M; Cuerdo, R; Mendes, P Rato; Romero, L; Willmott, C

    2012-01-01

    We are developing a PET insert for existing MRI equipment to be used in clinical PET/MR studies of the human brain. The proposed scanner is based on annihilation gamma detection with monolithic blocks of cerium-doped lutetium yttrium orthosilicate (LYSO:Ce) coupled to magnetically-compatible avalanche photodiode (APD) matrices. The light distribution generated on the LYSO:Ce block provides the impinging position of the 511 keV photons by means of a positioning algorithm. Several positioning methods, from the simplest Anger Logic to more sophisticated supervised-learning Neural Networks (NN), can be implemented to extract the incidence position of gammas directly from the APD signals. An optimal method based on a two-step Feed-Forward Neural Network was finally selected. It allows us to reach a resolution of 2 mm at the detector level and to acquire images of point sources using a first BrainPET prototype consisting of two monolithic blocks working in coincidence. Neural networks provide straightforward positioning of the acquired data once they have been trained; however, the training process is usually time-consuming. In order to obtain an efficient positioning method for the complete scanner, it was necessary to find a training procedure that reduces the data acquisition and processing time without introducing a noticeable degradation of the spatial resolution. A grouping process and subsequent selection of the training data were carried out based on the similarity of the light distributions of events that share one incident coordinate (transversal or longitudinal). By doing this, the amount of training data can be reduced to about 5% of the initial number, with a degradation of spatial resolution lower than 10%.

  18. Lamp with a truncated reflector cup

    Science.gov (United States)

    Li, Ming; Allen, Steven C.; Bazydola, Sarah; Ghiu, Camil-Daniel

    2013-10-15

    A lamp assembly, and method for making same. The lamp assembly includes first and second truncated reflector cups. The lamp assembly also includes at least one base plate disposed between the first and second truncated reflector cups, and a light engine disposed on a top surface of the at least one base plate. The light engine is configured to emit light to be reflected by one of the first and second truncated reflector cups.

  19. Binary moving-blocker-based scatter correction in cone-beam computed tomography with width-truncated projections: proof of concept

    Science.gov (United States)

    Lee, Ho; Fahimian, Benjamin P.; Xing, Lei

    2017-03-01

    This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method’s performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.
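The interpolation step of the blocker scheme can be sketched in 1D. This stand-in uses piecewise-linear interpolation between strip samples with flat extrapolation beyond the outermost strips, rather than the B-spline interpolation/extrapolation of the paper; all names and the sample numbers are ours.

```python
def interpolate_scatter(width, centers, values):
    """Estimate the scatter profile across the detector width from
    samples measured under the blocked strips: piecewise-linear
    interpolation between strip centers, held flat beyond the
    outermost strips (a simple stand-in for the paper's 1D B-spline)."""
    prof = []
    for x in range(width):
        if x <= centers[0]:
            prof.append(values[0])
        elif x >= centers[-1]:
            prof.append(values[-1])
        else:
            # Index of the last strip center at or before x.
            j = max(i for i in range(len(centers)) if centers[i] <= x)
            t = (x - centers[j]) / (centers[j + 1] - centers[j])
            prof.append((1.0 - t) * values[j] + t * values[j + 1])
    return prof

def scatter_correct(projection, prof):
    """Subtract the estimated scatter, clipping negatives to zero."""
    return [max(p - s, 0.0) for p, s in zip(projection, prof)]
```

The corrected projection is then what feeds the compressed-sensing reconstruction; averaging the profiles of the two adjacent blocked projections, as the paper does for unblocked views, is a straightforward extension.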

  20. Powered Explicit Guidance Modifications and Enhancements for Space Launch System Block-1 and Block-1B Vehicles

    Science.gov (United States)

    Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt; Fill, Thomas

    2018-01-01

    NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. NASA is also currently designing the next evolution of SLS, the Block-1B. The Block-1 and Block-1B vehicles will use the Powered Explicit Guidance (PEG) algorithm (of Space Shuttle heritage) for closed-loop guidance. To accommodate vehicle capabilities and design for future evolutions of SLS, modifications were made to PEG for Block-1 to handle multi-phase burns, provide PEG with updated propulsion information, and react to a core stage engine-out. In addition, because of the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS) and because the EUS carries out Lunar Vicinity and Earth Escape missions, certain enhancements to the Block-1 PEG algorithm are needed for Block-1B missions to account for long burn arcs and to target translunar and hyperbolic orbits. This paper describes the design and implementation of modifications to the Block-1 PEG algorithm as compared to the Space Shuttle. Furthermore, this paper illustrates the challenges posed by the Block-1B vehicle and the required PEG enhancements. These improvements make PEG suitable for use on the SLS Block-1B vehicle as part of the Guidance, Navigation, and Control (GN&C) System.

  1. Formal truncations of connected kernel equations

    International Nuclear Information System (INIS)

    Dixon, R.M.

    1977-01-01

    The Connected Kernel Equations (CKE) of Alt, Grassberger and Sandhas (AGS); Kouri, Levin and Tobocman (KLT); and Bencze, Redish and Sloan (BRS) are compared against reaction theory criteria after formal channel space and/or operator truncations have been introduced. The Channel Coupling Class concept is used to study the structure of these CKEs. The related wave function formalisms of Sandhas; of L'Huillier, Redish and Tandy; and of Kouri, Krueger and Levin are also presented. New N-body connected kernel equations which are generalizations of the Lovelace three-body equations are derived. A method for systematically constructing fewer-body models from the N-body BRS and generalized Lovelace (GL) equations is developed. The formally truncated AGS, BRS, KLT and GL equations are analyzed by employing the criteria of reciprocity and two-cluster unitarity. Reciprocity considerations suggest that formal truncations of the BRS, KLT and GL equations can lead to reciprocity-violating results. This study suggests that atomic problems should employ three-cluster connected truncations and that the two-cluster connected truncations should be a useful starting point for nuclear systems

  2. Perspective on rainbow-ladder truncation

    International Nuclear Information System (INIS)

    Eichmann, G.; Alkofer, R.; Krassnigg, A.; Cloeet, I. C.; Roberts, C. D.

    2008-01-01

    Prima facie the systematic implementation of corrections to the rainbow-ladder truncation of QCD's Dyson-Schwinger equations will uniformly reduce in magnitude those calculated mass-dimensioned results for pseudoscalar and vector meson properties that are not tightly constrained by symmetries. The aim and interpretation of studies employing rainbow-ladder truncation are reconsidered in this light

  3. Maximum volume cuboids for arbitrarily shaped in-situ rock blocks as determined by discontinuity analysis—A genetic algorithm approach

    Science.gov (United States)

    Ülker, Erkan; Turanboy, Alparslan

    2009-07-01

    The block stone industry is one of the main commercial uses of rock. The economic potential of any block quarry depends on the recovery rate, which is defined as the total volume of useful rough blocks extractable from a fixed rock volume in relation to the total volume of moved material. The natural fracture system, the rock type(s), and the extraction method used directly influence the recovery rate. The major aims of this study are to establish a theoretical framework for optimising the extraction process in marble quarries for a given fracture system, and to predict the recovery rate of the excavated blocks. We have developed a new approach that takes into consideration only the fracture structure for maximum block recovery in block quarries. The complete model uses a linear approach based on basic geometric features of discontinuities for the 3D model, a tree structure (TS) for individual investigation, and finally a genetic algorithm (GA) for the obtained cuboid volume(s). We tested the new model in a selected marble quarry in the town of İscehisar (AFYONKARAHİSAR—TURKEY).
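The genetic-algorithm stage can be illustrated with a much simpler stand-in objective. This sketch replaces the discontinuity-bounded block geometry with a single linear constraint x + y + z <= S, whose analytic optimum is the cube x = y = z = S/3 with volume (S/3)**3, and uses a basic real-coded GA with elitism, blend crossover, Gaussian mutation, and scaling repair of infeasible offspring; every name and parameter here is ours, not the paper's.

```python
import random

def ga_max_cuboid(S=9.0, pop_size=60, gens=200, sigma=0.25, seed=1):
    """Toy real-coded genetic algorithm maximizing the cuboid volume
    x*y*z subject to x + y + z <= S. Infeasible offspring are repaired
    by uniform scaling onto the constraint; elitism keeps the best
    fifth of each generation, so the best volume never decreases."""
    rng = random.Random(seed)

    def repair(ind):
        s = sum(ind)
        return [v * S / s for v in ind] if s > S else ind

    def volume(ind):
        return ind[0] * ind[1] * ind[2]

    pop = [repair([rng.uniform(0.1, S) for _ in range(3)])
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=volume, reverse=True)
        elite = pop[:pop_size // 5]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            # Blend crossover of two elites plus Gaussian mutation.
            child = [max(1e-9, (ai + bi) / 2.0 + rng.gauss(0.0, sigma))
                     for ai, bi in zip(a, b)]
            children.append(repair(child))
        pop = elite + children
    return max(pop, key=volume)
```

In the paper's setting, the fitness evaluation would instead test each candidate cuboid against the tree-structured discontinuity model; the GA loop itself is unchanged.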

  4. Clustered survival data with left-truncation

    DEFF Research Database (Denmark)

    Eriksson, Frank; Martinussen, Torben; Scheike, Thomas H.

    2015-01-01

    Left-truncation occurs frequently in survival studies, and it is well known how to deal with this for univariate survival times. However, there are few results on how to estimate dependence parameters and regression effects in semiparametric models for clustered survival data with delayed entry. … Surprisingly, existing methods only deal with special cases. In this paper, we clarify different kinds of left-truncation and suggest estimators for semiparametric survival models under specific truncation schemes. The large-sample properties of the estimators are established. Small-sample properties …
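The delayed-entry idea in the univariate case can be illustrated with a minimal product-limit (Kaplan-Meier) estimator in which a subject only joins the risk set after its entry time; this is the textbook construction, not the paper's semiparametric estimators for clustered data, and the toy data are invented.

```python
def km_left_truncated(entry, exit_, event):
    """Product-limit survival estimate with delayed entry: subject i is
    in the risk set at time t only if entry[i] < t <= exit_[i]."""
    times = sorted({t for t, e in zip(exit_, event) if e})
    surv, s = [], 1.0
    for t in times:
        at_risk = sum(1 for en, ex in zip(entry, exit_) if en < t <= ex)
        deaths = sum(1 for ex, e in zip(exit_, event) if e and ex == t)
        s *= 1.0 - deaths / at_risk
        surv.append((t, s))
    return surv

# toy data: (entry, exit, event) with event = 1 for failure, 0 for censoring
entry = [0.0, 0.0, 1.0, 2.0]
exit_ = [3.0, 5.0, 4.0, 6.0]
event = [1, 0, 1, 1]
curve = km_left_truncated(entry, exit_, event)
```

Ignoring the entry times here (i.e. treating everyone as at risk from time zero) would bias the survival estimate upward, which is the basic problem left-truncation methods address.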

  5. Homogenized blocked arcs for multicriteria optimization of radiotherapy: Analytical and numerical solutions

    International Nuclear Information System (INIS)

    Fenwick, John D.; Pardo-Montero, Juan

    2010-01-01

    Purpose: Homogenized blocked arcs are intuitively appealing as basis functions for multicriteria optimization of rotational radiotherapy. Such arcs avoid an organ-at-risk (OAR), spread dose out well over the rest-of-body (ROB), and deliver homogeneous doses to a planning target volume (PTV) using intensity modulated fluence profiles, obtainable either from closed-form solutions or iterative numerical calculations. Here, the analytic and iterative arcs are compared. Methods: Dose-distributions have been calculated for nondivergent beams, both including and excluding scatter, beam penumbra, and attenuation effects, which are left out of the derivation of the analytic arcs. The most straightforward analytic arc is created by truncating the well-known Brahme, Roos, and Lax (BRL) solution, cutting its uniform dose region down from an annulus to a smaller nonconcave region lying beyond the OAR. However, the truncation leaves behind high dose hot-spots immediately on either side of the OAR, generated by very high BRL fluence levels just beyond the OAR. These hot-spots can be eliminated using alternative analytical solutions ''C'' and ''L,'' which, respectively, deliver constant and linearly rising fluences in the gap region between the OAR and PTV (before truncation). Results: Measured in terms of PTV dose homogeneity, ROB dose-spread, and OAR avoidance, C solutions generate better arc dose-distributions than L when scatter, penumbra, and attenuation are left out of the dose modeling. Including these factors, L becomes the best analytical solution. However, the iterative approach generates better dose-distributions than any of the analytical solutions because it can account and compensate for penumbra and scatter effects. Using the analytical solutions as starting points for the iterative methodology, dose-distributions almost as good as those obtained using the conventional iterative approach can be calculated very rapidly. Conclusions: The iterative methodology is

  6. A Parallel Prefix Algorithm for Almost Toeplitz Tridiagonal Systems

    Science.gov (United States)

    Sun, Xian-He; Joslin, Ronald D.

    1995-01-01

    A compact scheme is a discretization scheme that is advantageous in obtaining highly accurate solutions. However, the resulting systems from compact schemes are tridiagonal systems that are difficult to solve efficiently on parallel computers. Considering the almost symmetric Toeplitz structure, a parallel algorithm, simple parallel prefix (SPP), is proposed. The SPP algorithm requires less memory than the conventional LU decomposition and is efficient on parallel machines. It consists of a prefix communication pattern and AXPY operations. Both the computation and the communication can be truncated without degrading the accuracy when the system is diagonally dominant. A formal accuracy study has been conducted to provide a simple truncation formula. Experimental results have been measured on a MasPar MP-1 SIMD machine and on a Cray 2 vector machine. Experimental results show that the simple parallel prefix algorithm is a good algorithm for symmetric, almost symmetric Toeplitz tridiagonal systems and for the compact scheme on high-performance computers.
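The truncation property the abstract mentions can be seen in a small sketch: for a diagonally dominant Toeplitz tridiagonal matrix, the LU pivots satisfy a fixed-point recurrence and converge geometrically to a constant, which is why distant contributions in the prefix computation can be dropped without degrading accuracy. The values a = 4, b = c = 1 are an arbitrary illustration; the SPP prefix scheme itself is not reproduced here.

```python
def lu_pivots(a, b, c, n):
    """Diagonal pivots of the LU factorisation of the n x n Toeplitz
    tridiagonal matrix with diagonal a, sub-diagonal b, super-diagonal c:
    d_1 = a, d_i = a - b*c / d_{i-1}."""
    d = [a]
    for _ in range(n - 1):
        d.append(a - b * c / d[-1])
    return d

# diagonally dominant example: pivots converge to the fixed point 2 + sqrt(3)
d = lu_pivots(4.0, 1.0, 1.0, 30)
```

Because the pivots settle to a constant after a few terms, a solver may truncate the recurrence (and the corresponding communication) once successive pivots agree to working precision.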

  7. Optimal block-tridiagonalization of matrices for coherent charge transport

    International Nuclear Information System (INIS)

    Wimmer, Michael; Richter, Klaus

    2009-01-01

    Numerical quantum transport calculations are commonly based on a tight-binding formulation. A wide class of quantum transport algorithms require the tight-binding Hamiltonian to be in the form of a block-tridiagonal matrix. Here, we develop a matrix reordering algorithm based on graph partitioning techniques that yields the optimal block-tridiagonal form for quantum transport. The reordered Hamiltonian can lead to significant performance gains in transport calculations, and allows conventional two-terminal algorithms to be applied to arbitrarily complex geometries, including multi-terminal structures. The block-tridiagonalization algorithm can thus be the foundation for a generic quantum transport code, applicable to arbitrary tight-binding systems. We demonstrate the power of this approach by applying the block-tridiagonalization algorithm together with the recursive Green's function algorithm to various examples of mesoscopic transport in two-dimensional electron gases in semiconductors and graphene.

  8. An enhanced block matching algorithm for fast elastic registration in adaptive radiotherapy

    International Nuclear Information System (INIS)

    Malsch, U; Thieke, C; Huber, P E; Bendl, R

    2006-01-01

    Image registration has many medical applications in diagnosis, therapy planning and therapy. Especially for time-adaptive radiotherapy, an efficient and accurate elastic registration of images acquired for treatment planning, and at the time of the actual treatment, is highly desirable. Therefore, we developed a fully automatic and fast block matching algorithm which identifies a set of anatomical landmarks in a 3D CT dataset and relocates them in another CT dataset by maximization of local correlation coefficients in the frequency domain. To transform the complete dataset, a smooth interpolation between the landmarks is calculated by modified thin-plate splines with local impact. The concept of the algorithm allows separate processing of image discontinuities like temporally changing air cavities in the intestinal tract or rectum. The result is a fully transformed 3D planning dataset (planning CT as well as delineations of tumour and organs at risk) to a verification CT, allowing evaluation and, if necessary, changes of the treatment plan based on the current patient anatomy without time-consuming manual re-contouring. Typically the total calculation time is less than 5 min, which allows the use of the registration tool between acquiring the verification images and delivering the dose fraction for online corrections. We present verifications of the algorithm for five different patient datasets with different tumour locations (prostate, paraspinal and head-and-neck) by comparing the results with manually selected landmarks, visual assessment and consistency testing. It turns out that the mean error of the registration is better than the voxel resolution (2 × 2 × 3 mm³). In conclusion, we present an algorithm for fully automatic elastic image registration that is precise and fast enough for online corrections in an adaptive fractionated radiation treatment course.
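The landmark-relocation step can be sketched as block matching that maximizes a local correlation coefficient. This toy version searches in the spatial domain over a 2D window (the paper works in the frequency domain and on 3D CT volumes), and all sizes and the test image are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def best_match(fixed, moving, center, bsize=8, search=5):
    """Relocate the block of `fixed` centred at `center` inside `moving`
    by maximising the local correlation coefficient over a search window."""
    cy, cx = center
    h = bsize // 2
    ref = fixed[cy - h:cy + h, cx - h:cx + h].ravel()
    best_r, best_shift = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = moving[cy - h + dy:cy + h + dy,
                          cx - h + dx:cx + h + dx].ravel()
            r = np.corrcoef(ref, cand)[0, 1]
            if r > best_r:
                best_r, best_shift = r, (dy, dx)
    return best_shift, best_r

fixed = rng.random((40, 40))
moving = np.roll(fixed, (2, -3), axis=(0, 1))   # known displacement of the "anatomy"
shift, r = best_match(fixed, moving, (20, 20))
```

In the paper this matching is repeated for a whole set of automatically chosen landmarks, and the resulting displacement field is interpolated with modified thin-plate splines.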

  9. Solution of the Stieltjes truncated matrix moment problem

    Directory of Open Access Journals (Sweden)

    Vadim M. Adamyan

    2005-01-01

    The truncated Stieltjes matrix moment problem, consisting in the description of all matrix distributions \(\boldsymbol{\sigma}(t)\) on \([0,\infty)\) with given first \(2n+1\) power moments \((\mathbf{C}_j)_{j=0}^{2n}\), is solved using known results on the corresponding Hamburger problem, for which \(\boldsymbol{\sigma}(t)\) are defined on \((-\infty,\infty)\). The criterion of solvability of the Stieltjes problem is given and all its solutions in the non-degenerate case are described by selection of the appropriate solutions among those of the Hamburger problem for the same set of moments. The results on extensions of non-negative operators are used and a purely algebraic algorithm for the solution of both Hamburger and Stieltjes problems is proposed.

  10. Estimation of distribution algorithm with path relinking for the blocking flow-shop scheduling problem

    Science.gov (United States)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2018-05-01

    This article presents an effective estimation of distribution algorithm, named P-EDA, to solve the blocking flow-shop scheduling problem (BFSP) with the makespan criterion. In the P-EDA, a Nawaz-Enscore-Ham (NEH)-based heuristic and the random method are combined to generate the initial population. Based on several superior individuals provided by a modified linear rank selection, a probabilistic model is constructed to describe the probabilistic distribution of the promising solution space. The path relinking technique is incorporated into EDA to avoid blindness of the search and improve the convergence property. A modified referenced local search is designed to enhance the local exploitation. Moreover, a diversity-maintaining scheme is introduced into EDA to avoid deterioration of the population. Finally, the parameters of the proposed P-EDA are calibrated using a design of experiments approach. Simulation results and comparisons with some well-performing algorithms demonstrate the effectiveness of the P-EDA for solving BFSP.
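The NEH heuristic used above to seed the initial population can be sketched for the standard permutation flow shop. Note the makespan recursion below ignores blocking, so this is the classical NEH on an invented 4 × 3 instance rather than the BFSP variant of the paper.

```python
def makespan(perm, p):
    """Completion time of the last job in a permutation flow shop
    (no blocking; p[j][m] is the processing time of job j on machine m)."""
    m = len(p[0])
    c = [0.0] * m
    for j in perm:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    """NEH: sort jobs by decreasing total work, then insert each job at the
    position of the partial sequence that minimises the partial makespan."""
    seq = []
    for j in sorted(range(len(p)), key=lambda j: -sum(p[j])):
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq

p = [[3, 4, 6], [5, 5, 2], [1, 2, 7], [6, 3, 3]]  # 4 jobs x 3 machines
seq = neh(p)
```

On this tiny instance NEH reaches the optimum (makespan 21, which matches the machine-3 lower bound); in the P-EDA it only provides good starting solutions that the EDA then improves.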

  11. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  12. Improved Motion Estimation Using Early Zero-Block Detection

    Directory of Open Access Journals (Sweden)

    Y. Lin

    2008-07-01

    We incorporate the early zero-block detection technique into the UMHexagonS algorithm, which has already been adopted in H.264/AVC JM reference software, to speed up the motion estimation process. A nearly sufficient condition is derived for early zero-block detection. Although the conventional early zero-block detection method can achieve significant improvement in computation reduction, the PSNR loss, to whatever extent, is not negligible, especially for high quantization parameter (QP) or low bit-rate coding. This paper modifies the UMHexagonS algorithm with the early zero-block detection technique to improve its coding performance. The experimental results reveal that the improved UMHexagonS algorithm greatly reduces computation while maintaining very high coding efficiency.

  13. Implicit gas-kinetic unified algorithm based on multi-block docking grid for multi-body reentry flows covering all flow regimes

    Science.gov (United States)

    Peng, Ao-Ping; Li, Zhi-Hui; Wu, Jun-Lin; Jiang, Xin-Yu

    2016-12-01

    Based on previous research on the Gas-Kinetic Unified Algorithm (GKUA) for flows ranging from the highly rarefied free-molecule regime through transition to continuum, a new implicit scheme of the cell-centered finite volume method is presented for directly solving the unified Boltzmann model equation covering various flow regimes. In view of the difficulty of generating a single-block grid system of high quality for complex irregular bodies, a multi-block docking grid generation method is designed on the basis of data transmission between blocks, and the data structure is constructed for processing arbitrary connection relations between blocks with high efficiency and reliability. As a result, the gas-kinetic unified algorithm with the implicit scheme and multi-block docking grid has been established for the first time and used to solve the reentry flow problems around multi-bodies covering all flow regimes with the whole range of Knudsen numbers from 10 to 3.7E-6. The implicit and explicit schemes are applied to computing and analyzing the supersonic flows in near-continuum and continuum regimes around a circular cylinder, with careful comparison between the two. It is shown that the present algorithm and modelling possess much higher computational efficiency and faster convergence properties. The flow problems including two and three side-by-side cylinders are simulated from highly rarefied to near-continuum flow regimes, and the present computed results are found to be in good agreement with the related DSMC simulation and theoretical analysis solutions, which verifies the good accuracy and reliability of the present method. It is observed that the smaller the spacing of the multi-body, the greater the cylindrical throat obstruction, the more obviously asymmetrical the flow field of each single body, and the bigger the normal force coefficient. While in the near-continuum transitional flow regime of near-space flying surroundings, the spacing of the multi-body increases to six times of the diameter of the single

  14. DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM

    Directory of Open Access Journals (Sweden)

    TAYSEER S. ATIA

    2014-08-01

    The Blowfish algorithm is a block cipher: a strong, simple algorithm used to encrypt data in blocks of 64 bits. The key and S-box generation process in this algorithm requires time and memory space, which makes the algorithm inconvenient for smart cards or for applications that require changing the secret key frequently. In this paper a new key and S-box generation process was developed based on the Self Synchronization Stream Cipher (SSS) algorithm, whose key generation process was modified to be used with the Blowfish algorithm. Test results show that the generation process requires relatively little time and reasonably low memory, which enhances the algorithm and makes it usable in these settings.
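The general idea of replacing Blowfish's pi-digit and repeated-encryption key schedule with a keystream-driven one can be sketched as follows. The xorshift32 generator below is a stand-in chosen for brevity and is NOT the paper's SSS cipher; only the table sizes (an 18-word P-array and four 256-entry S-boxes) match real Blowfish.

```python
def xorshift32(state):
    """Tiny keystream generator used purely for illustration; the paper's
    SSS cipher is not reproduced here."""
    while True:
        state ^= (state << 13) & 0xFFFFFFFF
        state ^= state >> 17
        state ^= (state << 5) & 0xFFFFFFFF
        yield state

def generate_sboxes(key):
    """Fill Blowfish-sized tables (18 P-array words, 4 x 256 S-box words)
    directly from a key-seeded keystream, instead of the standard schedule
    that repeatedly encrypts with the evolving subkeys."""
    seed = int.from_bytes(key, "big") % 0xFFFFFFFE + 1   # non-zero 32-bit seed
    ks = xorshift32(seed)
    p_array = [next(ks) for _ in range(18)]
    sboxes = [[next(ks) for _ in range(256)] for _ in range(4)]
    return p_array, sboxes

p_array, sboxes = generate_sboxes(b"secret key")
```

The attraction is exactly what the abstract claims: the tables are produced in a single streaming pass, so re-keying is cheap, at the cost of whatever security analysis the chosen keystream generator requires.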

  15. On the Analytical and Numerical Properties of the Truncated Laplace Transform

    Science.gov (United States)

    2014-05-01

    …classical study of the truncated Fourier transform. The resulting algorithms are applicable to all environments likely to be encountered in applications… In other words, \((((L_{a,b})^* \circ L_{a,b})(u_n))(t) = \int_a^b \frac{1}{t+s}\, u_n(s)\, ds = \alpha_n^2\, u_n(t)\) (2.69). Observation 2.22. Similarly, \(L_{a,b} \circ (L_{a,b})^*\) of a function \(g \in L^2(0,\)… (3.20)) are even and odd functions in the regular sense: \(U_n(s) = (C_\gamma(u_n))(s) = (-1)^n U_n(-s)\) (3.25). In particular, at the point \(s = 0\), we have \(U_{2j+1}(0) = 0\)

  16. Adaptive block online learning target tracking based on super pixel segmentation

    Science.gov (United States)

    Cheng, Yue; Li, Jianzeng

    2018-04-01

    Video target tracking technology has made great progress through the unremitting exploration of predecessors, but many problems remain unsolved. This paper proposes a new target tracking algorithm based on image segmentation technology. First, we divide the selected region using the simple linear iterative clustering (SLIC) algorithm; after that, we group the region into blocks with the improved density-based spatial clustering of applications with noise (DBSCAN) clustering algorithm. Each sub-block independently trains a classifier and is tracked; the algorithm then ignores sub-blocks whose tracking has failed and reintegrates the remaining sub-blocks into the tracking box to complete the target tracking. The experimental results show that our algorithm can work effectively under occlusion interference, rotation change, scale change and many other problems in target tracking compared with the current mainstream algorithms.

  17. Design Optimization for a Truncated Catenary Mooring System for Scale Model Test

    Directory of Open Access Journals (Sweden)

    Climent Molins

    2015-11-01

    One of the main aspects when testing floating offshore platforms is the scaled mooring system, particularly given the increased depths at which such platforms are intended to operate. The paper proposes the use of truncated mooring systems to emulate the real mooring system by solving an optimization problem. This approach could be an interesting option when the existing testing facilities do not have enough available space. As part of the development of a new spar platform made of concrete for Floating Offshore Wind Turbines (FOWTs), called Windcrete, a station keeping system with catenary shaped lines was selected. The test facility available for the planned experiments had an important width constraint. Then, an algorithm to optimize the design of the scaled truncated mooring system using different weights of lines was developed. The optimization process adjusts the quasi-static behavior of the scaled mooring system as closely as possible to that of the real mooring system within its expected maximum displacement range, where the catenary line provides the restoring forces by its suspended line length.

  18. Analysis of the upper-truncated Weibull distribution for wind speed

    International Nuclear Information System (INIS)

    Kantar, Yeliz Mert; Usta, Ilhan

    2015-01-01

    Highlights: • Upper-truncated Weibull distribution is proposed to model wind speed. • Upper-truncated Weibull distribution nests Weibull distribution as special case. • Maximum likelihood is the best method for upper-truncated Weibull distribution. • Fitting accuracy of upper-truncated Weibull is analyzed on wind speed data. - Abstract: Accurately modeling wind speed is critical in estimating the wind energy potential of a certain region. In order to model wind speed data smoothly, several statistical distributions have been studied. Truncated distributions are defined as a conditional distribution that results from restricting the domain of statistical distribution and they also cover base distribution. This paper proposes, for the first time, the use of upper-truncated Weibull distribution, in modeling wind speed data and also in estimating wind power density. In addition, a comparison is made between upper-truncated Weibull distribution and well known Weibull distribution using wind speed data measured in various regions of Turkey. The obtained results indicate that upper-truncated Weibull distribution shows better performance than Weibull distribution in estimating wind speed distribution and wind power. Therefore, upper-truncated Weibull distribution can be an alternative for use in the assessment of wind energy potential
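A minimal sketch of the upper-truncated Weibull density described above: the Weibull pdf renormalized by its cdf at the truncation point T, so that all probability mass lies on (0, T]. The parameter values are arbitrary illustrations, not fitted wind-speed parameters.

```python
import math

def truncated_weibull_pdf(x, k, lam, T):
    """Density of a Weibull(k, lam) upper-truncated at T:
    f(x) = w(x) / F(T) on (0, T], zero elsewhere, where w and F are the
    ordinary Weibull pdf and cdf."""
    if not 0.0 < x <= T:
        return 0.0
    w = (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))
    F_T = 1.0 - math.exp(-((T / lam) ** k))   # Weibull cdf at the truncation point
    return w / F_T

# sanity check: the truncated density integrates to one on [0, T]
k, lam, T = 2.0, 8.0, 20.0
n = 50_000
dx = T / n
total = sum(truncated_weibull_pdf((i + 0.5) * dx, k, lam, T) * dx for i in range(n))
```

As T grows, F(T) tends to one and the truncated density recovers the ordinary Weibull, which is the nesting property the highlights refer to.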

  19. Minimum Description Length Block Finder, a Method to Identify Haplotype Blocks and to Compare the Strength of Block Boundaries

    OpenAIRE

    Mannila, H.; Koivisto, M.; Perola, M.; Varilo, T.; Hennah, W.; Ekelund, J.; Lukk, M.; Peltonen, L.; Ukkonen, E.

    2003-01-01

    We describe a new probabilistic method for finding haplotype blocks that is based on the use of the minimum description length (MDL) principle. We give a rigorous definition of the quality of a segmentation of a genomic region into blocks and describe a dynamic programming algorithm for finding the optimal segmentation with respect to this measure. We also describe a method for finding the probability of a block boundary for each pair of adjacent markers: this gives a tool for evaluating the ...

  20. A block variant of the GMRES method on massively parallel processors

    Energy Technology Data Exchange (ETDEWEB)

    Li, Guangye [Cray Research, Inc., Eagan, MN (United States)

    1996-12-31

    This paper presents a block variant of the GMRES method for solving general unsymmetric linear systems. This algorithm generates a transformed Hessenberg matrix by solely using block matrix operations and block data communications. It is shown that this algorithm with block size s, denoted by BVGMRES(s,m), is theoretically equivalent to the GMRES(s*m) method. The numerical results show that this algorithm can be more efficient than the standard GMRES method on a cache based single CPU computer with optimized BLAS kernels. Furthermore, the gain in efficiency is more significant on MPPs due to both efficient block operations and efficient block data communications. Our numerical results also show that in comparison to the standard GMRES method, the more PEs that are used on an MPP, the more efficient the BVGMRES(s,m) algorithm is.

  1. Novel prediction- and subblock-based algorithm for fractal image compression

    International Nuclear Information System (INIS)

    Chung, K.-L.; Hsu, C.-H.

    2006-01-01

    Fractal encoding is the most time-consuming part of fractal image compression. In this paper, a novel two-phase prediction- and subblock-based fractal encoding algorithm is presented. Initially, the original gray image is partitioned into a set of variable-size blocks according to the S-tree- and interpolation-based decomposition principle. In the first phase, each variable-size range block tries to find the best matched domain block based on the proposed prediction-based search strategy, which utilizes the relevant neighboring variable-size domain blocks. The first phase leads to a significant computation-saving effect. If the domain block found within the predicted search space is unacceptable, in the second phase, a subblock strategy is employed to partition the current variable-size range block into smaller blocks to improve the image quality. Experimental results show that our proposed prediction- and subblock-based fractal encoding algorithm outperforms the conventional full search algorithm and the recently published spatial-correlation-based algorithm by Truong et al. in terms of encoding time and image quality. In addition, the performance comparison among our proposed algorithm and two other algorithms, the no-search-based algorithm and the quadtree-based algorithm, is also investigated.

  2. A Novel SCCA Approach via Truncated ℓ1-norm and Truncated Group Lasso for Brain Imaging Genetics.

    Science.gov (United States)

    Du, Lei; Liu, Kefei; Zhang, Tuo; Yao, Xiaohui; Yan, Jingwen; Risacher, Shannon L; Han, Junwei; Guo, Lei; Saykin, Andrew J; Shen, Li

    2017-09-18

    Brain imaging genetics, which studies the linkage between genetic variations and structural or functional measures of the human brain, has become increasingly important in recent years. Discovering the bi-multivariate relationship between genetic markers such as single-nucleotide polymorphisms (SNPs) and neuroimaging quantitative traits (QTs) is one major task in imaging genetics. Sparse Canonical Correlation Analysis (SCCA) has been a popular technique in this area for its powerful capability in identifying bi-multivariate relationships coupled with feature selection. The existing SCCA methods impose either the ℓ1-norm or its variants to induce sparsity. The ℓ0-norm penalty is a perfect sparsity-inducing tool which, however, leads to an NP-hard problem. In this paper, we propose the truncated ℓ1-norm penalized SCCA to improve the performance and effectiveness of the ℓ1-norm based SCCA methods. Besides, we propose an efficient optimization algorithm to solve this novel SCCA problem. The proposed method is an adaptive shrinkage method via tuning τ. It can avoid time-intensive parameter tuning if given a reasonably small τ. Furthermore, we extend it to the truncated group-lasso (TGL), and propose the TGL-SCCA model to improve the group-lasso-based SCCA methods. The experimental results, compared with four benchmark methods, show that our SCCA methods identify better or similar correlation coefficients, and better canonical loading profiles than the competing methods. This demonstrates the effectiveness and efficiency of our methods in discovering interesting imaging genetic associations. The Matlab code and sample data are freely available at http://www.iu.edu/~shenlab/tools/tlpscca/.

  3. A Method for Improving the Progressive Image Coding Algorithms

    Directory of Open Access Journals (Sweden)

    Ovidiu COSMA

    2014-12-01

    This article presents a method for increasing the performance of the progressive coding algorithms for the subbands of images, by representing the coefficients with a code that reduces the truncation error.

  4. Minimum description length block finder, a method to identify haplotype blocks and to compare the strength of block boundaries.

    Science.gov (United States)

    Mannila, H; Koivisto, M; Perola, M; Varilo, T; Hennah, W; Ekelund, J; Lukk, M; Peltonen, L; Ukkonen, E

    2003-07-01

    We describe a new probabilistic method for finding haplotype blocks that is based on the use of the minimum description length (MDL) principle. We give a rigorous definition of the quality of a segmentation of a genomic region into blocks and describe a dynamic programming algorithm for finding the optimal segmentation with respect to this measure. We also describe a method for finding the probability of a block boundary for each pair of adjacent markers: this gives a tool for evaluating the significance of each block boundary. We have applied the method to the published data of Daly and colleagues. The results expose some problems that exist in the current methods for the evaluation of the significance of predicted block boundaries. Our method, MDL block finder, can be used to compare block borders in different sample sets, and we demonstrate this by applying the MDL-based method to define the block structure in chromosomes from population isolates.
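The dynamic program over block boundaries can be sketched with an illustrative per-block description-length cost: a fixed model cost plus entropy-coded data bits. This stand-in cost and the binary toy sequence are assumptions for illustration; the paper's haplotype-block MDL score differs, but the O(n²) prefix recursion has the same shape.

```python
import math

def block_cost(block, model_bits=6.0):
    """Illustrative description length of one block: a fixed model cost plus
    entropy-coded data bits for the block's 0/1 symbols (a stand-in for the
    paper's haplotype-block score)."""
    n, ones = len(block), sum(block)
    cost = model_bits
    for c in (ones, n - ones):
        if 0 < c < n:
            cost += -c * math.log2(c / n)   # c symbols at -log2(p) bits each
    return cost

def optimal_segmentation(seq):
    """best[i] = min over j < i of best[j] + cost(seq[j:i]); O(n^2) overall."""
    n = len(seq)
    best = [0.0] + [math.inf] * n
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):
            c = best[j] + block_cost(seq[j:i])
            if c < best[i]:
                best[i], back[i] = c, j
    cuts, i = [], n
    while i > 0:          # backtrack through the stored predecessors
        cuts.append(i)
        i = back[i]
    return sorted(cuts)   # block end positions; the last equals len(seq)

seq = [0] * 12 + [1] * 12          # one obvious boundary at position 12
cuts = optimal_segmentation(seq)
```

The per-boundary probabilities mentioned in the abstract come from summing over all segmentations rather than taking only this single optimum.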

  5. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    Directory of Open Access Journals (Sweden)

    Yu-Fei Gao

    2017-04-01

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1, L2, ·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.

  6. A comparison of genetic algorithm and artificial bee colony approaches in solving blocking hybrid flowshop scheduling problem with sequence dependent setup/changeover times

    Directory of Open Access Journals (Sweden)

    Pongpan Nakkaew

    2016-06-01

    In manufacturing processes where efficiency is crucial in order to remain competitive, the flowshop is a common configuration in which machines are arranged in series and products are produced through the stages one by one. In certain production processes, the machines are frequently configured so that each production stage may contain multiple processing units in parallel, i.e. a hybrid flowshop. Moreover, along with precedence conditions, sequence dependent setup times may exist. Finally, when there is no buffer, a machine is said to be blocked if the next stage to handle its output is occupied. For such an NP-hard problem, referred to as the Blocking Hybrid Flowshop Scheduling Problem with Sequence Dependent Setup/Changeover Times, it is usually not possible to find the exact optimal solution for optimization objectives such as minimization of the overall production time. Thus, it is usually solved by approximate algorithms such as metaheuristics. In this paper, we investigate comparatively the effectiveness of two approaches: a Genetic Algorithm (GA) and an Artificial Bee Colony (ABC) algorithm. GA is inspired by the process of natural selection. ABC, in the same manner, resembles the way different types of bees perform specific functions and work collectively to find their food by means of division of labor. Additionally, we apply an algorithm to improve the GA and ABC algorithms so that they can take advantage of the parallel processing resources of modern multiple core processors while eliminating the need for screening the optimal parameters of both algorithms in advance.

  7. New Schemes for Positive Real Truncation

    Directory of Open Access Journals (Sweden)

    Kari Unneland

    2007-07-01

    Model reduction, based on balanced truncation, of stable and of positive real systems is considered. An overview of some existing techniques is given: Lyapunov balancing and stochastic balancing, which includes Riccati balancing. A novel scheme for positive real balanced truncation is then proposed, which is a combination of the existing Lyapunov balancing and Riccati balancing. Using Riccati balancing, the solution of two Riccati equations is needed to obtain positive real reduced order systems. For the suggested method, only one Lyapunov equation and one Riccati equation are solved in order to obtain positive real reduced order systems, which is less computationally demanding. Further, it is shown that in order to get positive real reduced order systems, only one Riccati equation needs to be solved. Finally, this is used to obtain positive real frequency weighted balanced truncation.
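Plain Lyapunov balanced truncation, the baseline the record builds on, can be sketched with the square-root method. The dense Kronecker-product Lyapunov solves and the 3-state example system are illustrative assumptions; they only scale to small problems and are not the paper's positive real scheme.

```python
import numpy as np

def balanced_truncation(A, B, C, r):
    """Lyapunov balanced truncation via the square-root method."""
    n = A.shape[0]
    I = np.eye(n)

    def lyap(M, rhs):  # dense solve of M X + X M^T = -rhs via Kronecker products
        return np.linalg.solve(np.kron(I, M) + np.kron(M, I),
                               -rhs.ravel()).reshape(n, n)

    P = lyap(A, B @ B.T)    # controllability Gramian: A P + P A' + B B' = 0
    Q = lyap(A.T, C.T @ C)  # observability Gramian:  A' Q + Q A + C' C = 0
    Lp, Lq = np.linalg.cholesky(P), np.linalg.cholesky(Q)
    U, hsv, Vt = np.linalg.svd(Lq.T @ Lp)          # Hankel singular values
    T = Lp @ Vt.T[:, :r] / np.sqrt(hsv[:r])        # balancing transformation
    Ti = (U[:, :r] / np.sqrt(hsv[:r])).T @ Lq.T    # its left inverse
    return Ti @ A @ T, Ti @ B, C @ T, hsv

A = np.array([[-1.0, 0.5, 0.0],
              [0.0, -2.0, 0.3],
              [0.0, 0.0, -3.0]])
B = np.array([[1.0], [0.5], [1.0]])
C = np.array([[1.0, 1.0, 0.5]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, 2)  # keep 2 of 3 states
dc_full = (C @ np.linalg.solve(A, B)).item()       # DC gains (up to sign)
dc_red = (Cr @ np.linalg.solve(Ar, Br)).item()
```

States with small Hankel singular values contribute little to the input-output map, and the classical bound limits the error of the truncated model by twice the sum of the discarded singular values.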

  8. Measuring a Truncated Disk in Aquila X-1

    Science.gov (United States)

    King, Ashley L.; Tomsick, John A.; Miller, Jon M.; Chenevez, Jerome; Barret, Didier; Boggs, Steven E.; Chakrabarty, Deepto; Christensen, Finn E.; Craig, William W.; Feurst, Felix; et al.

    2016-01-01

    We present NuSTAR and Swift observations of the neutron star Aquila X-1 during the peak of its 2014 July outburst. The spectrum is soft with strong evidence for a broad Fe Kα line. Modeled with a relativistically broadened reflection model, we find that the inner disk is truncated with an inner radius of 15 ± 3 RG. The disk is likely truncated by either the boundary layer and/or a magnetic field. Associating the truncated inner disk with pressure from a magnetic field gives an upper limit of B < (5 ± 2) × 10⁸ G. Although the radius is truncated far from the stellar surface, material is still reaching the neutron star surface as evidenced by the X-ray burst present in the NuSTAR observation.

  9. Evolution of truncated moments of singlet parton distributions

    International Nuclear Information System (INIS)

    Forte, S.; Magnea, L.; Piccione, A.; Ridolfi, G.

    2001-01-01

    We define truncated Mellin moments of parton distributions by restricting the integration range over the Bjorken variable to the experimentally accessible subset x₀ ≤ x ≤ 1 of the allowed kinematic range 0 ≤ x ≤ 1. We derive the evolution equations satisfied by truncated moments in the general (singlet) case in terms of an infinite triangular matrix of anomalous dimensions which couple each truncated moment to all higher moments with orders differing by integers. We show that the evolution of any moment can be determined to arbitrarily good accuracy by truncating the system of coupled moments to a sufficiently large but finite size, and show how the equations can be solved in a way suitable for numerical applications. We discuss in detail the accuracy of the method in view of applications to precision phenomenology.
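The truncated-moment setup can be written schematically as follows; the coefficients \(c_{nk}\) stand in for the triangular anomalous-dimension entries, so the notation is a sketch consistent with the abstract rather than the paper's exact equations:

```latex
% truncated Mellin moment: integration restricted to x_0 <= x <= 1
q_n(x_0, Q^2) \equiv \int_{x_0}^{1} x^{\,n-1}\, q(x, Q^2)\, dx
% evolution couples each truncated moment to all higher moments
% (for x_0 = 0 the system becomes diagonal, recovering ordinary moments):
\frac{d\, q_n(x_0, Q^2)}{d \ln Q^2}
  = \sum_{k=0}^{\infty} c_{nk}(x_0)\, q_{n+k}(x_0, Q^2)
```

Truncating the sum at a finite k reproduces the paper's statement that any moment can be evolved to arbitrary accuracy with a sufficiently large but finite coupled system.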

  10. Mean-variance analysis of block-iterative reconstruction algorithms modeling 3D detector response in SPECT

    Science.gov (United States)

    Lalush, D. S.; Tsui, B. M. W.

    1998-06-01

    We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, the RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
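The OS-EM update compared above has a compact form: for each subset of projections, the current estimate is reprojected, the measured-to-reprojected ratio is backprojected, and the result is normalized by the subset sensitivity. A minimal NumPy sketch of that update (the function name, toy system and subset split are invented here; this is not the authors' SPECT code):

```python
import numpy as np

def os_em(A, y, subsets, n_iter, x0=None):
    """Ordered-subsets EM: one multiplicative EM-style update per projection
    subset; a full pass over all subsets counts as one iteration."""
    x = np.ones(A.shape[1]) if x0 is None else x0.astype(float).copy()
    for _ in range(n_iter):
        for s in subsets:
            A_s, y_s = A[s], y[s]
            ratio = y_s / np.maximum(A_s @ x, 1e-12)   # measured / reprojected
            x = x * (A_s.T @ ratio) / np.maximum(A_s.sum(axis=0), 1e-12)
    return x
```

With noise-free, consistent data the iterates settle on a solution of A x = y; with noisy data, as the abstract notes, the log-likelihood can oscillate from subset to subset.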

  11. An algorithm for symplectic implicit Taylor-map tracking

    International Nuclear Information System (INIS)

    Yan, Y.; Channell, P.; Syphers, M.

    1992-10-01

    An algorithm has been developed for converting an ''order-by-order symplectic'' Taylor map that is truncated to an arbitrary order (thus not exactly symplectic) into a Courant-Snyder matrix and a symplectic implicit Taylor map for symplectic tracking. This algorithm is implemented using differential algebras, and it is numerically stable and fast. Thus, lifetime charged-particle tracking for large hadron colliders, such as the Superconducting Super Collider, is now made possible

  12. Heliostat blocking and shadowing efficiency in the video-game era

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, Alberto [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Ramos, Francisco [Nevada Software Informatica S.L., Madrid (Spain)

    2014-02-15

    Blocking and shadowing is one of the key effects in designing and evaluating a thermal central receiver solar tower plant. Therefore it is convenient to develop efficient algorithms to compute the area of a heliostat blocked or shadowed by the rest of the field. In this paper we explore the possibility of using very efficient clipping algorithms developed for the video game and imaging industry to compute the blocking and shadowing efficiency of a solar thermal plant layout. We propose an algorithm valid for arbitrary position, orientation and size of the heliostats. This algorithm turns out to be very accurate, free of assumptions and fast. We show the feasibility of using this algorithm for the optimization of a solar plant by studying a couple of examples in detail.

  13. Heliostat blocking and shadowing efficiency in the video-game era

    International Nuclear Information System (INIS)

    Ramos, Alberto

    2014-02-01

    Blocking and shadowing is one of the key effects in designing and evaluating a thermal central receiver solar tower plant. Therefore it is convenient to develop efficient algorithms to compute the area of a heliostat blocked or shadowed by the rest of the field. In this paper we explore the possibility of using very efficient clipping algorithms developed for the video game and imaging industry to compute the blocking and shadowing efficiency of a solar thermal plant layout. We propose an algorithm valid for arbitrary position, orientation and size of the heliostats. This algorithm turns out to be very accurate, free of assumptions and fast. We show the feasibility of using this algorithm for the optimization of a solar plant by studying a couple of examples in detail.

  14. Data and performance profiles applying an adaptive truncation criterion, within linesearch-based truncated Newton methods, in large scale nonconvex optimization

    Directory of Open Access Journals (Sweden)

    Andrea Caliciotti

    2018-04-01

    In this paper, we report data and experiments related to the research article entitled “An adaptive truncation criterion, for linesearch-based truncated Newton methods in large scale nonconvex optimization” by Caliciotti et al. [1]. In particular, in Caliciotti et al. [1], large scale unconstrained optimization problems are considered by applying linesearch-based truncated Newton methods. In this framework, a key point is the reduction of the number of inner iterations needed, at each outer iteration, to approximately solve the Newton equation. A novel adaptive truncation criterion is introduced in Caliciotti et al. [1] to this aim. Here, we report the details concerning numerical experiences over a commonly used test set, namely CUTEst (Gould et al., 2015 [2]). Moreover, comparisons are reported in terms of performance profiles (Dolan and Moré, 2002 [3]), adopting different parameter settings. Finally, our linesearch-based scheme is compared with a renowned trust region method, namely TRON (Lin and Moré, 1999 [4]).
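Performance profiles in the sense of Dolan and Moré [3] are straightforward to compute from a cost table; a minimal NumPy sketch (the function name and data layout are assumptions, and the all-solvers-fail case is not handled):

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile. T[p, s] is the cost (e.g. CPU time or
    iteration count) of solver s on problem p, with np.inf marking a failure.
    Returns rho where rho[s, i] is the fraction of problems on which solver s
    is within a factor taus[i] of the best solver."""
    best = T.min(axis=1, keepdims=True)   # best cost on each problem
    ratios = T / best                     # performance ratios r_{p,s}
    return np.array([[float(np.mean(ratios[:, s] <= tau)) for tau in taus]
                     for s in range(T.shape[1])])
```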

  15. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin

    2017-12-08

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities, in comparison with a popular modularity maximization algorithm, is also illustrated.

  16. Truncation in diffraction pattern analysis. Pt. 1

    International Nuclear Information System (INIS)

    Delhez, R.; Keijser, T.H. de; Mittemeijer, E.J.; Langford, J.I.

    1986-01-01

    An evaluation of the concept of a line profile is provoked by truncation of the range of intensity measurement in practice. The measured truncated line profile can be considered either as part of the total intensity distribution which peaks at or near the reciprocal-lattice points (approach 1), or as part of a component line profile which is confined to a single reciprocal-lattice point (approach 2). Some false conceptions in line-profile analysis can then be avoided and recipes can be developed for the extrapolation of the tails of the truncated line profile. Fourier analysis of line profiles, according to the first approach, implies a Fourier series development of the total intensity distribution defined within [l - 1/2, l + 1/2] (l indicates the node considered in reciprocal space); the second approach implies a Fourier transformation of the component line profile defined within [ - ∞, + ∞]. Exact descriptions of size broadening are provided by both approaches, whereas combined size and strain broadening can only be evaluated adequately within the first approach. Straightforward methods are given for obtaining truncation-corrected values for the average crystallite size. (orig.)

  17. Truncation Depth Rule-of-Thumb for Convolutional Codes

    Science.gov (United States)

    Moision, Bruce

    2009-01-01

    In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
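The revised rule is a one-line formula; a small illustrative helper (the function name is invented here):

```python
def truncation_depth(m, r):
    """Rule-of-thumb truncation depth for a rate-r convolutional code with
    memory length m, per the revised rule: 2.5 * m / (1 - r)."""
    if not 0.0 <= r < 1.0:
        raise ValueError("code rate r must satisfy 0 <= r < 1")
    return 2.5 * m / (1.0 - r)
```

For r = 1/2 this reduces to the classic "five times the memory length" rule; a rate-3/4 code needs twice that depth.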

  18. Investigation of propagation dynamics of truncated vector vortex beams.

    Science.gov (United States)

    Srinivas, P; Perumangatt, C; Lal, Nijil; Singh, R P; Srinivasan, B

    2018-06-01

    In this Letter, we experimentally investigate the propagation dynamics of truncated vector vortex beams generated using a Sagnac interferometer. Upon focusing, the truncated vector vortex beam is found to regain its original intensity structure within the Rayleigh range. In order to explain such behavior, the propagation dynamics of a truncated vector vortex beam is simulated by decomposing it into the sum of integral charge beams with associated complex weights. We also show that the polarization of the truncated composite vector vortex beam is preserved all along the propagation axis. The experimental observations are consistent with theoretical predictions based on previous literature and are in good agreement with our simulation results. The results hold importance as vector vortex modes are eigenmodes of the optical fiber.

  19. Diagnostic efficiency of truncated area under the curve from 0 to 2 h (AUC₀₋₂) of mycophenolic acid in kidney transplant recipients receiving mycophenolate mofetil and concomitant tacrolimus.

    Science.gov (United States)

    Lampón, Natalia; Tutor-Crespo, María J; Romero, Rafael; Tutor, José C

    2011-07-01

    Recently, the use of the truncated area under the curve from 0 to 2 h (AUC(0-2)) of mycophenolic acid (MPA) has been proposed for therapeutic monitoring in liver transplant recipients. The aim of our study was the evaluation of the clinical usefulness of truncated AUC(0-2) in kidney transplant patients. Plasma MPA was measured in samples taken before the morning dose of mycophenolate mofetil, and one-half and 2 h post-dose, completing 63 MPA concentration-time profiles from 40 adult kidney transplant recipients. The AUC from 0 to 12 h (AUC(0-12)) was calculated using the validated algorithm of Pawinski et al. The truncated AUC(0-2) was calculated using the linear trapezoidal rule, and extrapolated to 0-12 h (trapezoidal extrapolated AUC(0-12)) as previously described. Algorithm calculated and trapezoidal extrapolated AUC(0-12) values showed high correlation (r=0.995) and acceptable dispersion (ma68=0.71 μg·h/mL), median prediction error (6.6%) and median absolute prediction error (12.6%). The truncated AUC(0-2) had acceptable diagnostic efficiency (87%) in the classification of subtherapeutic, therapeutic or supratherapeutic values with respect to AUC(0-12). However, due to the high inter-individual variation of the drug absorption-rate, the dispersion between both pharmacokinetic variables (ma68=6.9 μg·h/mL) was unacceptable. The substantial dispersion between truncated AUC(0-2) and AUC(0-12) values may be a serious objection for the routine use of MPA AUC(0-2) in clinical practice.
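The truncated AUC(0-2) itself is just the linear trapezoidal rule applied to the study's three sampling times (pre-dose, 0.5 h and 2 h post-dose); a minimal sketch (the function name is assumed; the extrapolation to AUC(0-12) follows a previously described method not reproduced here):

```python
def truncated_auc_0_2(c0, c0_5, c2):
    """Truncated MPA AUC from 0 to 2 h by the linear trapezoidal rule, from
    plasma concentrations (ug/mL) at 0, 0.5 and 2 h; result in ug*h/mL."""
    return 0.5 * (c0 + c0_5) * 0.5 + 0.5 * (c0_5 + c2) * 1.5
```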

  20. Wigner distribution function of circularly truncated light beams

    NARCIS (Netherlands)

    Bastiaans, M.J.; Nijhawan, O.P.; Gupta, A.K.; Musla, A.K.; Singh, Kehar

    1998-01-01

    Truncating a light beam is expressed as a convolution of its Wigner distribution function and the WDF of the truncating aperture. The WDF of a circular aperture is derived and an approximate expression - which is exact in the space and the spatial-frequency origin and whose integral over the spatial

  1. Error Concealment using Neural Networks for Block-Based Image Coding

    Directory of Open Access Journals (Sweden)

    M. Mokos

    2006-06-01

    In this paper, a novel adaptive error concealment (EC) algorithm, which lowers the requirements for channel coding, is proposed. It conceals errors in block-based image coding systems by using a neural network. In the proposed algorithm, only the intra-frame information is used for reconstruction of the image with separated damaged blocks. The information of pixels surrounding a damaged block is used to recover the errors using the neural network models. Computer simulation results show that the visual quality and the MSE evaluation of a reconstructed image are significantly improved using the proposed EC algorithm. We also propose a simple non-neural approach for comparison.

  2. Closed Loop Guidance Trade Study for Space Launch System Block-1B Vehicle

    Science.gov (United States)

    Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt

    2018-01-01

    NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. The design of the next evolution of SLS, Block-1B, is well underway. The Block-1B vehicle is more capable overall than Block-1; however, the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS) presents a challenge to the Powered Explicit Guidance (PEG) algorithm used by Block-1. To handle the long burn durations (on the order of 1000 seconds) of EUS missions, two algorithms were examined. An alternative algorithm, OPGUID, was introduced, while modifications were made to PEG. A trade study was conducted to select the guidance algorithm for future SLS vehicles. The chosen algorithm needs to support a wide variety of mission operations: ascent burns to LEO, apogee raise burns, trans-lunar injection burns, hyperbolic Earth departure burns, and contingency disposal burns using the Reaction Control System (RCS). Additionally, the algorithm must be able to respond to a single engine failure scenario. Each algorithm was scored based on pre-selected criteria, including insertion accuracy, algorithmic complexity and robustness, extensibility for potential future missions, and flight heritage. Monte Carlo analysis was used to select the final algorithm. This paper covers the design criteria, approach, and results of this trade study, showing impacts and considerations when adapting launch vehicle guidance algorithms to a broader breadth of in-space operations.

  3. Performance Evaluation of Block Acquisition and Tracking Algorithms Using an Open Source GPS Receiver Platform

    Science.gov (United States)

    Ramachandran, Ganesh K.; Akopian, David; Heckler, Gregory W.; Winternitz, Luke B.

    2011-01-01

    Location technologies have many applications in wireless communications, military and space missions, etc. US Global Positioning System (GPS) and other existing and emerging Global Navigation Satellite Systems (GNSS) are expected to provide accurate location information to enable such applications. While GNSS systems perform very well in strong signal conditions, their operation in many urban, indoor, and space applications is not robust or even impossible due to weak signals and strong distortions. The search for less costly, faster and more sensitive receivers is still in progress. As the research community addresses more and more complicated phenomena, there is a demand for flexible multimode reference receivers, associated SDKs, and development platforms which may accelerate and facilitate the research. One such concept is the software GPS/GNSS receiver (GPS SDR), which permits facilitated access to algorithmic libraries and the possibility to integrate more advanced algorithms without hardware and essential software updates. The GNU-SDR and GPS-SDR open source receiver platforms are popular examples. This paper evaluates the performance of recently proposed block-correlator techniques for acquisition and tracking of GPS signals using the open source GPS-SDR platform.

  4. Research of Block-Based Motion Estimation Methods for Video Compression

    Directory of Open Access Journals (Sweden)

    Tropchenko Andrey

    2016-08-01

    This work is a review of the block-based algorithms used for motion estimation in video compression. It examines different types of block-based algorithms, ranging from the simplest, named Full Search, to fast adaptive algorithms like Hierarchical Search. The algorithms evaluated in this paper are widely accepted by the video compression community and have been used in implementing various standards, such as MPEG-4 Visual and H.264. The work also presents a very brief introduction to the entire flow of video compression.
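The baseline Full Search method mentioned above fits in a few lines; an illustrative NumPy toy (function name and defaults invented here, not tuned for speed):

```python
import numpy as np

def full_search(ref, cur, block=8, search=4):
    """Exhaustive (Full Search) block matching. For each block x block tile of
    the current frame, scan every displacement within +/- search pixels in the
    reference frame and keep the motion vector minimising the sum of absolute
    differences (SAD)."""
    h, w = cur.shape
    mvs = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            tile = cur[by:by + block, bx:bx + block].astype(np.int64)
            best_sad, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block falls outside the frame
                    cand = ref[y:y + block, x:x + block].astype(np.int64)
                    sad = int(np.abs(tile - cand).sum())
                    if best_sad is None or sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            mvs[(by, bx)] = best_mv
    return mvs
```

Fast methods such as Hierarchical Search reduce the O(search²) candidate scan per block, at the risk of landing in a local SAD minimum.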

  5. Non-perturbative methodologies for low-dimensional strongly-correlated systems: From non-Abelian bosonization to truncated spectrum methods.

    Science.gov (United States)

    James, Andrew J A; Konik, Robert M; Lecheminant, Philippe; Robinson, Neil J; Tsvelik, Alexei M

    2018-02-26

    We review two important non-perturbative approaches for extracting the physics of low-dimensional strongly correlated quantum systems. Firstly, we start by providing a comprehensive review of non-Abelian bosonization. This includes an introduction to the basic elements of conformal field theory as applied to systems with a current algebra, and we orient the reader by presenting a number of applications of non-Abelian bosonization to models with large symmetries. We then tie this technique into recent advances in the ability of cold atomic systems to realize complex symmetries. Secondly, we discuss truncated spectrum methods for the numerical study of systems in one and two dimensions. For one-dimensional systems we provide the reader with considerable insight into the methodology by reviewing canonical applications of the technique to the Ising model (and its variants) and the sine-Gordon model. Following this we review recent work on the development of renormalization groups, both numerical and analytical, that alleviate the effects of truncating the spectrum. Using these technologies, we consider a number of applications to one-dimensional systems: properties of carbon nanotubes, quenches in the Lieb-Liniger model, 1  +  1D quantum chromodynamics, as well as Landau-Ginzburg theories. In the final part we move our attention to consider truncated spectrum methods applied to two-dimensional systems. This involves combining truncated spectrum methods with matrix product state algorithms. We describe applications of this method to two-dimensional systems of free fermions and the quantum Ising model, including their non-equilibrium dynamics.

  6. Non-perturbative methodologies for low-dimensional strongly-correlated systems: From non-Abelian bosonization to truncated spectrum methods

    Science.gov (United States)

    James, Andrew J. A.; Konik, Robert M.; Lecheminant, Philippe; Robinson, Neil J.; Tsvelik, Alexei M.

    2018-04-01

    We review two important non-perturbative approaches for extracting the physics of low-dimensional strongly correlated quantum systems. Firstly, we start by providing a comprehensive review of non-Abelian bosonization. This includes an introduction to the basic elements of conformal field theory as applied to systems with a current algebra, and we orient the reader by presenting a number of applications of non-Abelian bosonization to models with large symmetries. We then tie this technique into recent advances in the ability of cold atomic systems to realize complex symmetries. Secondly, we discuss truncated spectrum methods for the numerical study of systems in one and two dimensions. For one-dimensional systems we provide the reader with considerable insight into the methodology by reviewing canonical applications of the technique to the Ising model (and its variants) and the sine-Gordon model. Following this we review recent work on the development of renormalization groups, both numerical and analytical, that alleviate the effects of truncating the spectrum. Using these technologies, we consider a number of applications to one-dimensional systems: properties of carbon nanotubes, quenches in the Lieb–Liniger model, 1  +  1D quantum chromodynamics, as well as Landau–Ginzburg theories. In the final part we move our attention to consider truncated spectrum methods applied to two-dimensional systems. This involves combining truncated spectrum methods with matrix product state algorithms. We describe applications of this method to two-dimensional systems of free fermions and the quantum Ising model, including their non-equilibrium dynamics.

  7. Flexible scheme to truncate the hierarchy of pure states.

    Science.gov (United States)

    Zhang, P-P; Bentley, C D B; Eisfeld, A

    2018-04-07

    The hierarchy of pure states (HOPS) is a wavefunction-based method that can be used for numerically modeling open quantum systems. Formally, HOPS recovers the exact system dynamics for an infinite depth of the hierarchy. However, truncation of the hierarchy is required to numerically implement HOPS. We want to choose a "good" truncation method, where by "good" we mean that it is numerically feasible to check convergence of the results. For the truncation approximation used in previous applications of HOPS, convergence checks are numerically challenging. In this work, we demonstrate the application of the "n-particle approximation" to HOPS. We also introduce a new approximation, which we call the "n-mode approximation." We then explore the convergence of these truncation approximations with respect to the number of equations required in the hierarchy in two exemplary problems: absorption and energy transfer of molecular aggregates.

  8. Flexible scheme to truncate the hierarchy of pure states

    Science.gov (United States)

    Zhang, P.-P.; Bentley, C. D. B.; Eisfeld, A.

    2018-04-01

    The hierarchy of pure states (HOPS) is a wavefunction-based method that can be used for numerically modeling open quantum systems. Formally, HOPS recovers the exact system dynamics for an infinite depth of the hierarchy. However, truncation of the hierarchy is required to numerically implement HOPS. We want to choose a "good" truncation method, where by "good" we mean that it is numerically feasible to check convergence of the results. For the truncation approximation used in previous applications of HOPS, convergence checks are numerically challenging. In this work, we demonstrate the application of the "n-particle approximation" to HOPS. We also introduce a new approximation, which we call the "n-mode approximation." We then explore the convergence of these truncation approximations with respect to the number of equations required in the hierarchy in two exemplary problems: absorption and energy transfer of molecular aggregates.

  9. Truncated predictor feedback for time-delay systems

    CERN Document Server

    Zhou, Bin

    2014-01-01

    This book provides a systematic approach to the design of predictor based controllers for (time-varying) linear systems with either (time-varying) input or state delays. Differently from those traditional predictor based controllers, which are infinite-dimensional static feedback laws and may cause difficulties in their practical implementation, this book develops a truncated predictor feedback (TPF) which involves only finite dimensional static state feedback. Features and topics: A novel approach referred to as truncated predictor feedback for the stabilization of (time-varying) time-delay systems in both the continuous-time setting and the discrete-time setting is built systematically Semi-global and global stabilization problems of linear time-delay systems subject to either magnitude saturation or energy constraints are solved in a systematic manner Both stabilization of a single system and consensus of a group of systems (multi-agent systems) are treated in a unified manner by applying the truncated pre...

  10. Error and symmetry analysis of Misner's algorithm for spherical harmonic decomposition on a cubic grid

    International Nuclear Information System (INIS)

    Fiske, David R

    2006-01-01

    Computing spherical harmonic decompositions is a ubiquitous technique that arises in a wide variety of disciplines and a large number of scientific codes. Because spherical harmonics are defined by integrals over spheres, however, one must perform some sort of interpolation in order to compute them when data are stored on a cubic lattice. Misner (2004 Class. Quantum Grav. 21 S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid, which has been found in real applications to be both efficient and robust to the presence of mesh refinement boundaries. At the same time, however, practical applications of the algorithm require knowledge of how the truncation errors of the algorithm depend on the various parameters in the algorithm. Based on analytic arguments and experience using the algorithm in real numerical simulations, I explore these dependences and provide a rule of thumb for choosing the parameters based on the truncation errors of the underlying data. I also demonstrate that symmetries in the spherical harmonics themselves allow for an even more efficient implementation of the algorithm than was suggested by Misner in his original paper

  11. Elfin: An algorithm for the computational design of custom three-dimensional structures from modular repeat protein building blocks.

    Science.gov (United States)

    Yeh, Chun-Ting; Brunette, T J; Baker, David; McIntosh-Smith, Simon; Parmeggiani, Fabio

    2018-02-01

    Computational protein design methods have enabled the design of novel protein structures, but they are often still limited to small proteins and symmetric systems. To expand the size of designable proteins while controlling the overall structure, we developed Elfin, a genetic algorithm for the design of novel proteins with custom shapes using structural building blocks derived from experimentally verified repeat proteins. By combining building blocks with compatible interfaces, it is possible to rapidly build non-symmetric large structures (>1000 amino acids) that match three-dimensional geometric descriptions provided by the user. A run time of about 20 min on a laptop computer for a 3000 amino acid structure makes Elfin accessible to users with limited computational resources. Protein structures with controlled geometry will allow the systematic study of the effect of spatial arrangement of enzymes and signaling molecules, and provide new scaffolds for functional nanomaterials. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Configuration-defined control algorithms with the ASDEX Upgrade DCS

    Energy Technology Data Exchange (ETDEWEB)

    Treutterer, Wolfgang, E-mail: Wolfgang.Treutterer@ipp.mpg.de [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany); Cole, Richard [Unlimited Computer Systems, Seeshaupter Str. 15, 82393 Iffeldorf (Germany); Gräter, Alexander [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany); Lüddecke, Klaus [Unlimited Computer Systems, Seeshaupter Str. 15, 82393 Iffeldorf (Germany); Neu, Gregor; Rapson, Christopher; Raupp, Gerhard; Zehetbauer, Thomas [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany)

    2016-11-15

    Highlights: • Control algorithm built from combination of pre-fabricated standard function blocks. • Seamless integration in multi-threaded computation context. • Block composition defined by configuration data, only. - Abstract: The ASDEX Upgrade Discharge Control System (DCS) is a distributed real-time control system executing complex control and monitoring tasks. Up to now, DCS control algorithms have been implemented by coding dedicated application processes with the C++ programming language. Algorithm changes required code modification, compilation and commissioning which only experienced programmers could perform. This was a significant constraint of flexibility for both control system operation and design. The new approach extends DCS with the capability of configuration-defined control algorithms. These are composed of chains of small, configurable standard function blocks providing general purpose functions like algebraic operations, filters, feedback controllers, output limiters and decision logic. In a later phase a graphical editor could help to compose and modify such configuration in a Simulink-like fashion. Building algorithms from standard functions can result in a high number of elements. In order to achieve a similar performance as with C++ coding, it is essential to avoid administrative bottlenecks by design. As a consequence, DCS executes a function block chain in the context of a single real-time thread of an application process. No concurrency issues as in a multi-threaded context need to be considered resulting in strongly simplified signal handling and zero performance overhead for inter-block communication. Instead of signal-driven synchronization, a block scheduler derives the execution sequence automatically from the block dependencies as defined in the configuration. All blocks and connecting signals are instantiated dynamically, based on definitions in a configuration file. Algorithms thus are not defined in the code but only in
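DCS implements this in C++ inside a single real-time thread; purely to illustrate the idea of an algorithm defined by configuration data alone, here is a small sketch (the block names, parameter schema and `build_chain` helper are invented for illustration):

```python
# Each standard function block is a small configurable unit; a chain of them
# is instantiated from configuration data only and executed sequentially in
# one thread, so no inter-block synchronisation is needed.
BLOCK_TYPES = {
    "gain":    lambda p: (lambda u: p["k"] * u),
    "offset":  lambda p: (lambda u: u + p["c"]),
    "limiter": lambda p: (lambda u: max(p["lo"], min(p["hi"], u))),
}

def build_chain(config):
    """Instantiate blocks from a configuration list and return a callable
    that runs them in sequence."""
    blocks = [BLOCK_TYPES[entry["type"]](entry["params"]) for entry in config]
    def run(signal):
        for block in blocks:
            signal = block(signal)   # output of one block feeds the next
        return signal
    return run

controller = build_chain([
    {"type": "gain",    "params": {"k": 2.0}},
    {"type": "offset",  "params": {"c": 1.0}},
    {"type": "limiter", "params": {"lo": 0.0, "hi": 5.0}},
])
```

Changing the algorithm then means editing configuration data, not recompiling code, which is the flexibility gain the abstract describes.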

  13. Decoding Synteny Blocks and Large-Scale Duplications in Mammalian and Plant Genomes

    Science.gov (United States)

    Peng, Qian; Alekseyev, Max A.; Tesler, Glenn; Pevzner, Pavel A.

    The existing synteny block reconstruction algorithms use anchors (e.g., orthologous genes) shared over all genomes to construct the synteny blocks for multiple genomes. This approach, while efficient for a few genomes, cannot be scaled to address the need to construct synteny blocks in many mammalian genomes that are currently being sequenced. The problem is that the number of anchors shared among all genomes quickly decreases with the increase in the number of genomes. Another problem is that many genomes (plant genomes in particular) had extensive duplications, which makes decoding of genomic architecture and rearrangement analysis in plants difficult. The existing synteny block generation algorithms in plants do not address the issue of generating non-overlapping synteny blocks suitable for analyzing rearrangements and evolution history of duplications. We present a new algorithm based on the A-Bruijn graph framework that overcomes these difficulties and provides a unified approach to synteny block reconstruction for multiple genomes, and for genomes with large duplications.

  14. Maximum nondiffracting propagation distance of aperture-truncated Airy beams

    Science.gov (United States)

    Chu, Xingchun; Zhao, Shanghong; Fang, Yingwu

    2018-05-01

    Airy beams have attracted the attention of many researchers due to their non-diffracting, self-healing and transverse accelerating properties. A key issue in research on Airy beams and their applications is how to evaluate their nondiffracting propagation distance. In this paper, the critical transverse extent of physically realizable Airy beams is analyzed under the local spatial frequency methodology. The maximum nondiffracting propagation distance of aperture-truncated Airy beams is formulated and analyzed based on their local spatial frequency. The validity of the formula is verified by comparing the maximum nondiffracting propagation distance of an aperture-truncated ideal Airy beam, aperture-truncated exponentially decaying Airy beam and exponentially decaying Airy beam. Results show that the formula can be used to evaluate accurately the maximum nondiffracting propagation distance of an aperture-truncated ideal Airy beam. Therefore, it can guide us to select appropriate parameters to generate Airy beams with long nondiffracting propagation distance, which have potential application in the fields of laser weapons or optical communications.

  15. An iterative reconstruction from truncated projection data

    International Nuclear Information System (INIS)

    Anon.

    1985-01-01

Various methods have been proposed for tomographic reconstruction from truncated projection data. In this paper, a reconstruction method is discussed which consists of iterations of filtered back-projection, reprojection and certain nonlinear processing steps. First, the method is constructed so that it converges to a fixed point. Then, to examine its effectiveness, computer experiments compare it with two existing reconstruction methods for truncated projection data: extrapolation based on a smoothness assumption followed by filtered back-projection, and modified additive ART.
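The iterate/reproject scheme described above can be sketched on a toy linear system: a small matrix A stands in for the truncated projector, its transpose for back-projection, and a nonnegativity constraint plays the role of the nonlinear processing step. All names and parameters are illustrative, not from the record.

```python
import numpy as np

# Toy sketch: A plays the role of the (truncated) projection operator,
# A.T approximates back-projection, and clipping to nonnegative values
# stands in for the nonlinear processing step.
def iterative_reconstruct(A, p, n_iter=3000, step=None):
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # keeps the iteration stable
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (p - A @ x)         # back-project the residual
        x = np.maximum(x, 0.0)                   # nonlinear processing step
    return x

rng = np.random.default_rng(0)
x_true = np.abs(rng.normal(size=6))
A = rng.normal(size=(12, 6))[:9]     # keep 9 of 12 rows: "truncated" data
p = A @ x_true
x_hat = iterative_reconstruct(A, p)
```

At the fixed point the iterate reproduces the measured data on the rays that were actually acquired, with the constraint supplying part of the information lost to truncation.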

  16. Truncated Wigner dynamics and conservation laws

    Science.gov (United States)

    Drummond, Peter D.; Opanchuk, Bogdan

    2017-10-01

Ultracold Bose gases can be used to experimentally test many-body theory predictions. Here we point out that both exact conservation laws and dynamical invariants exist in the topical case of the one-dimensional Bose gas, and these provide an important validation of methods. We show that the first four quantum conservation laws are exactly conserved in the approximate truncated Wigner approach to many-body quantum dynamics. Center-of-mass position variance is also exactly calculable. This is nearly exact in the truncated Wigner approximation, apart from small terms that vanish as N^(-3/2) as N → ∞ with fixed momentum cutoff. Examples of this are calculated in experimentally relevant, mesoscopic cases.

  17. Convergence and resolution recovery of block-iterative EM algorithms modeling 3D detector response in SPECT

    International Nuclear Information System (INIS)

    Lalush, D.S.; Tsui, B.M.W.; Karimi, S.S.

    1996-01-01

We evaluate fast reconstruction algorithms including ordered subsets-EM (OS-EM) and Rescaled Block Iterative EM (RBI-EM) in fully 3D SPECT applications on the basis of their convergence and resolution recovery properties as iterations proceed. Using a 3D computer-simulated phantom consisting of 3D Gaussian objects, we simulated projection data that include only the effects of sampling and the detector response of a parallel-hole collimator. Reconstructions were performed with each of the three algorithms (ML-EM, OS-EM, and RBI-EM), modeling the 3D detector response in the projection function. Resolution recovery was evaluated by fitting Gaussians to each of the four objects in the iterated image estimates at selected intervals. Results show that OS-EM and RBI-EM behave identically in this case; their resolution recovery results are virtually indistinguishable. Their resolution behavior appears to be very similar to that of ML-EM, but accelerated by a factor of twenty. For all three algorithms, smaller objects take more iterations to converge. Next, we consider the effect of noise on convergence. For both noise-free and noisy data, we evaluate the log-likelihood function at each subiteration of OS-EM and RBI-EM, and at each iteration of ML-EM. With noisy data, both OS-EM and RBI-EM give results for which the log-likelihood function oscillates; especially for 180-degree acquisitions, RBI-EM oscillates less than OS-EM. Both OS-EM and RBI-EM appear to converge to solutions, but not to the ML solution. We conclude that both OS-EM and RBI-EM can be effective algorithms for fully 3D SPECT reconstruction. Both recover resolution similarly to ML-EM, only more quickly.
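The ML-EM/OS-EM updates being compared can be sketched with a generic system matrix (a hypothetical stand-in for the 3D detector-response projector); RBI-EM differs only in how the block updates are rescaled. ML-EM is recovered as the single-subset case.

```python
import numpy as np

# Minimal OS-EM sketch: multiplicative EM update applied subset by subset.
def os_em(A, y, n_subsets=4, n_iter=50, eps=1e-12):
    n_rays, n_vox = A.shape
    x = np.ones(n_vox)                               # flat initial estimate
    subsets = [np.arange(s, n_rays, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:                         # one subiteration per subset
            As = A[rows]
            ratio = y[rows] / (As @ x + eps)         # measured / reprojected
            x = x * (As.T @ ratio) / (As.T @ np.ones(len(rows)) + eps)
    return x

rng = np.random.default_rng(1)
x_true = rng.uniform(0.5, 2.0, size=8)
A = rng.uniform(0.0, 1.0, size=(32, 8))
y = A @ x_true                                       # noise-free projections
x_ml = os_em(A, y, n_subsets=1)                      # plain ML-EM
x_os = os_em(A, y, n_subsets=4)                      # accelerated by the subsets
```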

  18. Weighted expectation maximization reconstruction algorithms with application to gated megavoltage tomography

    International Nuclear Information System (INIS)

    Zhang Jin; Shi Daxin; Anastasio, Mark A; Sillanpaa, Jussi; Chang Jenghwa

    2005-01-01

    We propose and investigate weighted expectation maximization (EM) algorithms for image reconstruction in x-ray tomography. The development of the algorithms is motivated by the respiratory-gated megavoltage tomography problem, in which the acquired asymmetric cone-beam projections are limited in number and unevenly sampled over view angle. In these cases, images reconstructed by use of the conventional EM algorithm can contain ring- and streak-like artefacts that are attributable to a combination of data inconsistencies and truncation of the projection data. By use of computer-simulated and clinical gated fan-beam megavoltage projection data, we demonstrate that the proposed weighted EM algorithms effectively mitigate such image artefacts. (note)

  19. Inference for shared-frailty survival models with left-truncated data

    NARCIS (Netherlands)

    van den Berg, G.J.; Drepper, B.

    2016-01-01

    Shared-frailty survival models specify that systematic unobserved determinants of duration outcomes are identical within groups of individuals. We consider random-effects likelihood-based statistical inference if the duration data are subject to left-truncation. Such inference with left-truncated

  20. Improved iterative image reconstruction algorithm for the exterior problem of computed tomography

    International Nuclear Information System (INIS)

    Guo, Yumeng; Zeng, Li

    2017-01-01

    In industrial applications that are limited by the angle of a fan-beam and the length of a detector, the exterior problem of computed tomography (CT) uses only the projection data that correspond to the external annulus of the objects to reconstruct an image. Because the reconstructions are not affected by the projection data that correspond to the interior of the objects, the exterior problem is widely applied to detect cracks in the outer wall of large-sized objects, such as in-service pipelines. However, image reconstruction in the exterior problem is still a challenging problem due to truncated projection data and beam-hardening, both of which can lead to distortions and artifacts. Thus, developing an effective algorithm and adopting a scanning trajectory suited for the exterior problem may be valuable. In this study, an improved iterative algorithm that combines total variation minimization (TVM) with a region scalable fitting (RSF) model was developed for a unilateral off-centered scanning trajectory and can be utilized to inspect large-sized objects for defects. Experiments involving simulated phantoms and real projection data were conducted to validate the practicality of our algorithm. Furthermore, comparative experiments show that our algorithm outperforms others in suppressing the artifacts caused by truncated projection data and beam-hardening.
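The TVM ingredient of such algorithms can be illustrated in isolation with a hypothetical 1D sketch: gradient descent on a data-fidelity term plus a smoothed total-variation penalty (the RSF model and the off-centered scanning geometry are not reproduced here).

```python
import numpy as np

# Gradient descent on 0.5*||x - y||^2 + lam * sum_i sqrt(d_i^2 + eps),
# where d = diff(x) is the discrete gradient; eps smooths |d| so the
# objective is differentiable.
def tv_denoise(y, lam=0.5, step=0.05, n_iter=1000, eps=1e-2):
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)      # derivative of the smoothed |d|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= w                 # d_i depends on x_i with weight -1
        tv_grad[1:] += w                  # ... and on x_{i+1} with weight +1
        x = x - step * ((x - y) + lam * tv_grad)
    return x

rng = np.random.default_rng(2)
clean = np.concatenate([np.zeros(20), np.ones(20)])   # piecewise-constant edge
noisy = clean + 0.1 * rng.normal(size=40)
denoised = tv_denoise(noisy)              # noise suppressed, edge preserved
```

TV penalties suppress oscillations inside constant regions while paying only once for a genuine edge, which is why they suit crack-like features against truncation artifacts.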

  1. Improved iterative image reconstruction algorithm for the exterior problem of computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yumeng [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China); Zeng, Li, E-mail: drlizeng@cqu.edu.cn [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China)

    2017-01-11

    In industrial applications that are limited by the angle of a fan-beam and the length of a detector, the exterior problem of computed tomography (CT) uses only the projection data that correspond to the external annulus of the objects to reconstruct an image. Because the reconstructions are not affected by the projection data that correspond to the interior of the objects, the exterior problem is widely applied to detect cracks in the outer wall of large-sized objects, such as in-service pipelines. However, image reconstruction in the exterior problem is still a challenging problem due to truncated projection data and beam-hardening, both of which can lead to distortions and artifacts. Thus, developing an effective algorithm and adopting a scanning trajectory suited for the exterior problem may be valuable. In this study, an improved iterative algorithm that combines total variation minimization (TVM) with a region scalable fitting (RSF) model was developed for a unilateral off-centered scanning trajectory and can be utilized to inspect large-sized objects for defects. Experiments involving simulated phantoms and real projection data were conducted to validate the practicality of our algorithm. Furthermore, comparative experiments show that our algorithm outperforms others in suppressing the artifacts caused by truncated projection data and beam-hardening.

  2. Immature truncated O-glycophenotype of cancer directly induces oncogenic features

    DEFF Research Database (Denmark)

    Radhakrishnan, Prakash; Dabelsteen, Sally; Madsen, Frey Brus

    2014-01-01

    Aberrant expression of immature truncated O-glycans is a characteristic feature observed on virtually all epithelial cancer cells, and a very high frequency is observed in early epithelial premalignant lesions that precede the development of adenocarcinomas. Expression of the truncated O-glycan s...

  3. Duality quantum algorithm efficiently simulates open quantum systems

    Science.gov (United States)

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-01-01

Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. Firstly, the query complexity of the algorithm is O(d^3), in contrast to O(d^4) for the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Secondly, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855
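The Kraus-operator evolution that the algorithm implements on a duality quantum computer can be checked classically for a single qubit; the amplitude-damping channel below is a standard textbook example, not one taken from the paper.

```python
import numpy as np

# Open-system evolution as a Kraus map: rho' = sum_k K_k rho K_k^dagger.
def apply_channel(rho, kraus_ops):
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

gamma = 0.3                                    # damping strength
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)

# Completeness relation sum_k K_k^dagger K_k = I guarantees trace preservation.
completeness = K0.conj().T @ K0 + K1.conj().T @ K1

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+| state
rho_out = apply_channel(rho, [K0, K1])         # nonunitary: population moves to |0>
```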

  4. Probability distributions with truncated, log and bivariate extensions

    CERN Document Server

    Thomopoulos, Nick T

    2018-01-01

    This volume presents a concise and practical overview of statistical methods and tables not readily available in other publications. It begins with a review of the commonly used continuous and discrete probability distributions. Several useful distributions that are not so common and less understood are described with examples and applications in full detail: discrete normal, left-partial, right-partial, left-truncated normal, right-truncated normal, lognormal, bivariate normal, and bivariate lognormal. Table values are provided with examples that enable researchers to easily apply the distributions to real applications and sample data. The left- and right-truncated normal distributions offer a wide variety of shapes in contrast to the symmetrically shaped normal distribution, and a newly developed spread ratio enables analysts to determine which of the three distributions best fits a particular set of sample data. The book will be highly useful to anyone who does statistical and probability analysis. This in...

  5. A network flow algorithm to position tiles for LAMOST

    International Nuclear Information System (INIS)

    Li Guangwei; Zhao Gang

    2009-01-01

We introduce the network flow algorithm used by the Sloan Digital Sky Survey (SDSS) into the sky survey of the Large sky Area Multi-Object fiber Spectroscopic Telescope (LAMOST) to position tiles. Because fibers in LAMOST's focal plane are distributed uniformly, we cannot use SDSS' method directly. To solve this problem, we first divide the sky into many small blocks and assume that all targets in the same block share the same position, namely the center of the block. Second, we limit the number of targets that the LAMOST focal plane can collect in one square degree, so that it cannot collect too many targets in one small block. Third, because the network used in this paper is bipartite, we do not use the general solution algorithm employed by SDSS; instead, we give a new, faster solution method for this special network. Compared with the Convergent Mean Shift Algorithm, the network flow algorithm can decrease the number of observations while improving the mean imaging quality. The algorithm also runs very fast: it can distribute millions of targets in a few minutes on a common personal computer.
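The bipartite-flow formulation can be sketched as a tiny source → sky-blocks → tiles → sink network solved with Edmonds-Karp max-flow; the node layout and capacities are illustrative, and this is the generic algorithm rather than the paper's specialized faster method.

```python
from collections import deque

# Edmonds-Karp max-flow on a capacity matrix (residual capacities updated
# in place). BFS finds shortest augmenting paths from source s to sink t.
def max_flow(cap, s, t):
    n, flow = len(cap), 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow                      # no augmenting path left
        v, bottleneck = t, float("inf")      # find the path's bottleneck
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t                                # push flow, update residuals
        while v != s:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

# 3 sky blocks (nodes 1-3), 2 tiles (nodes 4-5), source 0, sink 6.
# Each block supplies 1 unit; tile capacities 2 and 1 model fiber limits.
cap = [[0] * 7 for _ in range(7)]
for b in (1, 2, 3):
    cap[0][b] = 1
cap[1][4] = 1; cap[2][4] = 1; cap[2][5] = 1; cap[3][5] = 1
cap[4][6] = 2; cap[5][6] = 1
assigned = max_flow(cap, 0, 6)               # number of blocks assigned to tiles
```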

  6. 3D Reasoning from Blocks to Stability.

    Science.gov (United States)

    Zhaoyin Jia; Gallagher, Andrew C; Saxena, Ashutosh; Chen, Tsuhan

    2015-05-01

Objects occupy physical space and obey physical laws. To truly understand a scene, we must reason about the space that the objects in it occupy, and how each object is stably supported by the others. In other words, we seek to understand which objects would, if moved, cause other objects to fall. This 3D volumetric reasoning is important for many scene understanding tasks, ranging from segmentation of objects to perception of rich, physically well-founded 3D interpretations of the scene. In this paper, we propose a new algorithm to parse a single RGB-D image with 3D block units while jointly reasoning about the segments, volumes, supporting relationships, and object stability. Our algorithm is based on the intuition that a good 3D representation of the scene is one that fits the depth data well and is a stable, self-supporting arrangement of objects (i.e., one that does not topple). We design an energy function representing the quality of the block representation based on these properties. Our algorithm fits 3D blocks to the depth values corresponding to image segments, and iteratively optimizes the energy function. Our algorithm is the first to consider the stability of objects in complex arrangements when reasoning about the underlying structure of a scene. Experimental results show that our stability-reasoning framework improves RGB-D segmentation and scene volumetric representation.

  7. Impact of degree truncation on the spread of a contagious process on networks.

    Science.gov (United States)

    Harling, Guy; Onnela, Jukka-Pekka

    2018-03-01

    Understanding how person-to-person contagious processes spread through a population requires accurate information on connections between population members. However, such connectivity data, when collected via interview, is often incomplete due to partial recall, respondent fatigue or study design, e.g., fixed choice designs (FCD) truncate out-degree by limiting the number of contacts each respondent can report. Past research has shown how FCD truncation affects network properties, but its implications for predicted speed and size of spreading processes remain largely unexplored. To study the impact of degree truncation on predictions of spreading process outcomes, we generated collections of synthetic networks containing specific properties (degree distribution, degree-assortativity, clustering), and also used empirical social network data from 75 villages in Karnataka, India. We simulated FCD using various truncation thresholds and ran a susceptible-infectious-recovered (SIR) process on each network. We found that spreading processes propagated on truncated networks resulted in slower and smaller epidemics, with a sudden decrease in prediction accuracy at a level of truncation that varied by network type. Our results have implications beyond FCD to truncation due to any limited sampling from a larger network. We conclude that knowledge of network structure is important for understanding the accuracy of predictions of process spread on degree truncated networks.
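A minimal version of the experiment can be sketched as follows: run an SIR process on a synthetic contact network and on its fixed-choice truncation that keeps at most k reported contacts per node, then compare mean final epidemic sizes. The network, parameters, and seeds are all hypothetical.

```python
import random

# Fixed-choice-design truncation: each node reports at most k of its contacts
# (note this makes the reported network directed, as in real FCD surveys).
def truncate_degree(adj, k, rng):
    out = {}
    for node, nbrs in adj.items():
        nbrs = list(nbrs)
        rng.shuffle(nbrs)
        out[node] = set(nbrs[:k])
    return out

# Discrete-time SIR with a one-step infectious period; returns final size.
def sir_final_size(adj, beta, seed_node, rng):
    infected, recovered = {seed_node}, set()
    while infected:
        new = set()
        for u in infected:
            for v in adj.get(u, ()):
                if v not in infected and v not in recovered and rng.random() < beta:
                    new.add(v)
        recovered |= infected
        infected = new - recovered
    return len(recovered)

rng = random.Random(42)
n = 200
adj = {u: {v for v in range(n) if v != u and rng.random() < 0.05} for u in range(n)}
adj_trunc = truncate_degree(adj, 3, rng)

def mean_final_size(graph, runs=20):
    return sum(sir_final_size(graph, 0.3, 0, random.Random(s)) for s in range(runs)) / runs

full_mean = mean_final_size(adj)        # epidemics on the full network
trunc_mean = mean_final_size(adj_trunc) # slower, smaller epidemics expected
```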

  8. Application of a truncated normal failure distribution in reliability testing

    Science.gov (United States)

    Groves, C., Jr.

    1968-01-01

The statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimations. The age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
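A sketch of the idea, with made-up equipment parameters: sample a left-truncated normal time-to-failure distribution by rejection and read reliability quantities off the samples.

```python
import random

# Rejection sampling from a left-truncated normal: draw from the parent
# normal and keep only lifetimes past the truncation point.
def sample_truncated_normal(mu, sigma, lower, rng, n):
    out = []
    while len(out) < n:
        x = rng.gauss(mu, sigma)
        if x >= lower:
            out.append(x)
    return out

rng = random.Random(0)
mu, sigma, lower = 1000.0, 200.0, 900.0   # hours; hypothetical equipment data
lifetimes = sample_truncated_normal(mu, sigma, lower, rng, 5000)

# Truncation shifts the mean life upward relative to the parent normal.
mean_life = sum(lifetimes) / len(lifetimes)
# Fraction surviving past 1200 h under the truncated model.
p_survive = sum(t > 1200.0 for t in lifetimes) / len(lifetimes)
```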

  9. Truncation scheme of time-dependent density-matrix approach II

    Energy Technology Data Exchange (ETDEWEB)

    Tohyama, Mitsuru [Kyorin University School of Medicine, Mitaka, Tokyo (Japan); Schuck, Peter [Institut de Physique Nucleaire, IN2P3-CNRS, Universite Paris-Sud, Orsay (France); Laboratoire de Physique et de Modelisation des Milieux Condenses, CNRS et Universite Joseph Fourier, Grenoble (France)

    2017-09-15

    A truncation scheme of the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy for reduced density matrices, where a three-body density matrix is approximated by two-body density matrices, is improved to take into account a normalization effect. The truncation scheme is tested for the Lipkin model. It is shown that the obtained results are in good agreement with the exact solutions. (orig.)

  10. Multiple-scattering theory with a truncated basis set

    International Nuclear Information System (INIS)

    Zhang, X.; Butler, W.H.

    1992-01-01

Multiple-scattering theory (MST) is an extremely efficient technique for calculating the electronic structure of an assembly of atoms. The wave function in MST is expanded in terms of spherical waves centered on each atom and indexed by their orbital and azimuthal quantum numbers, l and m. The secular equation which determines the characteristic energies can be truncated at a value of the orbital angular momentum l_max for which the higher angular momentum phase shifts, δ_l (l > l_max), are sufficiently small. Generally, the wave-function coefficients calculated from the secular equation are also truncated at l_max. Here we point out that this truncation of the wave function is not necessary and is in fact inconsistent with the truncation of the secular equation. A consistent procedure is described in which the states with higher orbital angular momenta are retained but with their phase shifts set to zero. We show that this treatment gives smooth, continuous, and correctly normalized wave functions and that the total charge density calculated from the corresponding Green function agrees with the Lloyd formula result. We also show that this augmented wave function can be written as a linear combination of Andersen's muffin-tin orbitals in the case of muffin-tin potentials, and can be used to generalize the muffin-tin orbital idea to full-cell potentials.

  11. A New Hybrid Approach for Wind Speed Prediction Using Fast Block Least Mean Square Algorithm and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Ummuhan Basaran Filik

    2016-01-01

Full Text Available A new hybrid wind speed prediction approach, which uses the fast block least mean square (FBLMS) algorithm and the artificial neural network (ANN) method, is proposed. FBLMS is an adaptive algorithm with reduced complexity and a very fast convergence rate. The proposed hybrid approach combines these two powerful methods. In order to show the efficiency and accuracy of the approach, seven years of real hourly wind speed data collected by the Turkish State Meteorological Service for the Bozcaada and Eskisehir regions are used. Two different ANN structures are used for comparison. The first six years of data are used as the training set; the remaining year of hourly data is used as the test set. Mean absolute error (MAE) and root mean square error (RMSE) are used for performance evaluation. It is shown for various cases that the new hybrid approach gives better results than the conventional ANN structures.
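The block LMS update at the core of FBLMS can be sketched as follows; the fast variant computes the same block correlations via the FFT, which is omitted here, and the system-identification setup is illustrative rather than the paper's wind-speed pipeline.

```python
import numpy as np

# Block LMS: accumulate the error-gradient over a block of samples and
# apply one averaged weight update per block.
def block_lms(x, d, n_taps, block, mu):
    w = np.zeros(n_taps)
    x_pad = np.concatenate([np.zeros(n_taps - 1), x])  # zero prehistory
    for start in range(0, len(x) - block + 1, block):
        grad = np.zeros(n_taps)
        for i in range(start, start + block):
            u = x_pad[i:i + n_taps][::-1]   # most recent sample first
            e = d[i] - w @ u                # a-priori error
            grad += e * u
        w = w + (mu / block) * grad         # one update per block
    return w

rng = np.random.default_rng(3)
h = np.array([0.8, -0.4, 0.2])              # unknown FIR system
x = rng.normal(size=4000)
d = np.convolve(x, h)[:len(x)]              # desired signal = system output
w = block_lms(x, d, n_taps=3, block=8, mu=0.05)   # w converges toward h
```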

  12. Mixed Platoon Flow Dispersion Model Based on Speed-Truncated Gaussian Mixture Distribution

    Directory of Open Access Journals (Sweden)

    Weitiao Wu

    2013-01-01

Full Text Available A mixed traffic flow feature is present on urban arterials in China due to a large number of buses. Based on field data, a macroscopic mixed platoon flow dispersion model (MPFDM) was proposed to simulate the platoon dispersion process along the road section between two adjacent intersections from the flow perspective. To better match field observations, a truncated Gaussian mixture distribution was adopted as the speed density distribution for the mixed platoon. The expectation-maximization (EM) algorithm was used for parameter estimation. The relationship between the arriving flow distribution at the downstream intersection and the departing flow distribution at the upstream intersection was investigated using the proposed model. A comparison analysis using virtual flow data was performed between the Robertson model and the MPFDM. The results confirmed the validity of the proposed model.
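The speed model can be sketched as a two-component Gaussian mixture truncated to a feasible range, used to disperse a platoon over a road section; all parameters are illustrative, not fitted values from the paper.

```python
import random

# Two-component speed mixture (cars vs. buses), truncated to [v_min, v_max]
# by rejection so only feasible speeds are kept.
def sample_speed(rng, v_min=5.0, v_max=20.0):
    while True:
        if rng.random() < 0.7:
            v = rng.gauss(14.0, 2.5)   # cars (m/s)
        else:
            v = rng.gauss(9.0, 1.5)    # buses (m/s)
        if v_min <= v <= v_max:
            return v

rng = random.Random(1)
length = 400.0                          # section length (m)
# Each vehicle's arrival time at the downstream intersection: length / speed.
arrivals = sorted(length / sample_speed(rng) for _ in range(2000))
# Platoon dispersion shows up as the spread of downstream arrival times.
spread = arrivals[-1] - arrivals[0]
```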

  13. Importance-truncated shell model for multi-shell valence spaces

    Energy Technology Data Exchange (ETDEWEB)

    Stumpf, Christina; Vobig, Klaus; Roth, Robert [Institut fuer Kernphysik, TU Darmstadt (Germany)

    2016-07-01

The valence-space shell model is one of the workhorses of nuclear structure theory. In traditional applications, shell-model calculations are carried out using effective interactions constructed in a phenomenological framework for rather small valence spaces, typically spanned by one major shell. We improve on this traditional approach addressing two main aspects. First, we use new effective interactions derived in an ab initio approach and, thus, establish a connection to the underlying nuclear interaction providing access to single- and multi-shell valence spaces. Second, we extend the shell model to larger valence spaces by applying an importance-truncation scheme based on a perturbative importance measure. In this way, we reduce the model space to the relevant basis states for the description of a few target eigenstates and solve the eigenvalue problem in this physics-driven truncated model space. In particular multi-shell valence spaces are not tractable otherwise. We combine the importance-truncated shell model with refined extrapolation schemes to approximately recover the exact result. We present first results obtained in the importance-truncated shell model with the newly derived ab initio effective interactions for multi-shell valence spaces, e.g., the sdpf shell.

  14. A Novel Image Encryption Algorithm Based on DNA Subsequence Operation

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2012-01-01

Full Text Available We present a novel image encryption algorithm based on DNA subsequence operations. Unlike traditional DNA encryption methods, our algorithm does not use complex biological operations; it simply combines the idea of DNA subsequence operations (such as elongation, truncation, and deletion) with the logistic chaotic map to scramble the locations and values of the pixels in the image. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a large secret-key space and strong sensitivity to the secret key, and is able to resist exhaustive and statistical attacks.
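The chaotic part of such a scheme can be sketched as follows: a logistic-map keystream drives a pixel permutation (location scrambling) and an XOR mask (value scrambling). The DNA subsequence operations themselves (elongation, truncation, deletion) are not reproduced here.

```python
import numpy as np

# Logistic map x -> r*x*(1-x); a burn-in discards the transient.
def logistic_sequence(x0, r, n, burn_in=100):
    x, seq = x0, np.empty(n)
    for i in range(burn_in + n):
        x = r * x * (1.0 - x)
        if i >= burn_in:
            seq[i - burn_in] = x
    return seq

def encrypt(img, x0=0.3567, r=3.99):        # (x0, r) acts as the secret key
    flat = img.ravel()
    chaos = logistic_sequence(x0, r, flat.size)
    perm = np.argsort(chaos)                # chaotic pixel permutation
    mask = (chaos * 256).astype(np.uint8)   # chaotic XOR mask
    return (flat[perm] ^ mask).reshape(img.shape), perm, mask

def decrypt(cipher, perm, mask):
    flat = cipher.ravel() ^ mask            # undo value scrambling
    out = np.empty_like(flat)
    out[perm] = flat                        # undo location scrambling
    return out.reshape(cipher.shape)

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
cipher, perm, mask = encrypt(img)
restored = decrypt(cipher, perm, mask)      # round-trips back to the original
```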

  15. Flow equation of quantum Einstein gravity in a higher-derivative truncation

    International Nuclear Information System (INIS)

    Lauscher, O.; Reuter, M.

    2002-01-01

Motivated by recent evidence indicating that quantum Einstein gravity (QEG) might be nonperturbatively renormalizable, the exact renormalization group equation of QEG is evaluated in a truncation of theory space which generalizes the Einstein-Hilbert truncation by the inclusion of a higher-derivative term (R^2). The beta functions describing the renormalization group flow of the cosmological constant, Newton's constant, and the R^2 coupling are computed explicitly. The fixed point properties of the 3-dimensional flow are investigated, and they are confronted with those of the 2-dimensional Einstein-Hilbert flow. The non-Gaussian fixed point predicted by the latter is found to generalize to a fixed point on the enlarged theory space. In order to test the reliability of the R^2 truncation near this fixed point we analyze the residual scheme dependence of various universal quantities; it turns out to be very weak. The two truncations are compared in detail, and their numerical predictions are found to agree with a surprisingly high precision. Because of the consistency of the results it appears increasingly unlikely that the non-Gaussian fixed point is an artifact of the truncation. If it is present in the exact theory, QEG is probably nonperturbatively renormalizable and "asymptotically safe." We discuss how the conformal factor problem of Euclidean gravity manifests itself in the exact renormalization group approach and show that, in the R^2 truncation, the investigation of the fixed point is not afflicted with this problem. Also the Gaussian fixed point of the Einstein-Hilbert truncation is analyzed; it turns out that it does not generalize to a corresponding fixed point on the enlarged theory space.

  16. Modification of transmission dose algorithm for irregularly shaped radiation field and tissue deficit

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Hyong Geon; Shin, Kyo Chul [Dankook Univ., College of Medicine, Seoul (Korea, Republic of); Huh, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan [Seoul National Univ., College of Medicine, Seoul (Korea, Republic of); Lee, Hyoung Koo [The Catholic Univ., College of Medicine, Seoul (Korea, Republic of)

    2002-07-01

The algorithm for estimation of transmission dose was modified for use in partially blocked radiation fields and in cases with tissue deficit. The beam data were measured with a flat solid phantom under various beam-block conditions, and an algorithm for correcting the transmission dose in partially blocked radiation fields was developed from the measured data. The algorithm was tested in clinical settings with irregularly shaped fields. Another algorithm, for correcting the transmission dose for tissue deficit, was developed by physical reasoning and tested in experimental settings with irregular contours mimicking breast cancer patients, using multiple sheets of solid phantoms. The beam-block correction algorithm accurately reflected the effect of beam blocks, with errors within ±1.0%, both for square fields and for irregularly shaped fields. The tissue-deficit correction algorithm accurately reflected the effect of tissue deficit, with errors within ±1.0% in most situations and within ±3.0% in experimental settings with irregular contours mimicking a breast cancer treatment set-up. The developed algorithms can accurately estimate the transmission dose in most radiation treatment settings, including irregularly shaped fields and irregularly shaped body contours with tissue deficit, in transmission dosimetry.

  17. Modification of transmission dose algorithm for irregularly shaped radiation field and tissue deficit

    International Nuclear Information System (INIS)

    Yun, Hyong Geon; Shin, Kyo Chul; Huh, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan; Lee, Hyoung Koo

    2002-01-01

The algorithm for estimation of transmission dose was modified for use in partially blocked radiation fields and in cases with tissue deficit. The beam data were measured with a flat solid phantom under various beam-block conditions, and an algorithm for correcting the transmission dose in partially blocked radiation fields was developed from the measured data. The algorithm was tested in clinical settings with irregularly shaped fields. Another algorithm, for correcting the transmission dose for tissue deficit, was developed by physical reasoning and tested in experimental settings with irregular contours mimicking breast cancer patients, using multiple sheets of solid phantoms. The beam-block correction algorithm accurately reflected the effect of beam blocks, with errors within ±1.0%, both for square fields and for irregularly shaped fields. The tissue-deficit correction algorithm accurately reflected the effect of tissue deficit, with errors within ±1.0% in most situations and within ±3.0% in experimental settings with irregular contours mimicking a breast cancer treatment set-up. The developed algorithms can accurately estimate the transmission dose in most radiation treatment settings, including irregularly shaped fields and irregularly shaped body contours with tissue deficit, in transmission dosimetry.

  18. Stellar Disk Truncations: HI Density and Dynamics

    Science.gov (United States)

    Trujillo, Ignacio; Bakos, Judit

    2010-06-01

    Using HI Nearby Galaxy Survey (THINGS) 21-cm observations of a sample of nearby (nearly face-on) galaxies we explore whether the stellar disk truncation phenomenon produces any signature either in the HI gas density and/or in the gas dynamics. Recent cosmological simulations suggest that the origin of the break on the surface brightness distribution is produced by the appearance of a warp at the truncation position. This warp should produce a flaring on the gas distribution increasing the velocity dispersion of the HI component beyond the break. We do not find, however, any evidence of this increase in the gas velocity dispersion profile.

  19. Reducing C-terminal-truncated alpha-synuclein by immunotherapy attenuates neurodegeneration and propagation in Parkinson's disease-like models.

    Science.gov (United States)

    Games, Dora; Valera, Elvira; Spencer, Brian; Rockenstein, Edward; Mante, Michael; Adame, Anthony; Patrick, Christina; Ubhi, Kiren; Nuber, Silke; Sacayon, Patricia; Zago, Wagner; Seubert, Peter; Barbour, Robin; Schenk, Dale; Masliah, Eliezer

    2014-07-09

    Parkinson's disease (PD) and dementia with Lewy bodies (DLB) are common neurodegenerative disorders of the aging population, characterized by progressive and abnormal accumulation of α-synuclein (α-syn). Recent studies have shown that C-terminus (CT) truncation and propagation of α-syn play a role in the pathogenesis of PD/DLB. Therefore, we explored the effect of passive immunization against the CT of α-syn in the mThy1-α-syn transgenic (tg) mouse model, which resembles the striato-nigral and motor deficits of PD. Mice were immunized with the new monoclonal antibodies 1H7, 5C1, or 5D12, all directed against the CT of α-syn. CT α-syn antibodies attenuated synaptic and axonal pathology, reduced the accumulation of CT-truncated α-syn (CT-α-syn) in axons, rescued the loss of tyrosine hydroxylase fibers in striatum, and improved motor and memory deficits. Among them, 1H7 and 5C1 were most effective at decreasing levels of CT-α-syn and higher-molecular-weight aggregates. Furthermore, in vitro studies showed that preincubation of recombinant α-syn with 1H7 and 5C1 prevented CT cleavage of α-syn. In a cell-based system, CT antibodies reduced cell-to-cell propagation of full-length α-syn, but not of the CT-α-syn that lacked the 118-126 aa recognition site needed for antibody binding. Furthermore, the results obtained after lentiviral expression of α-syn suggest that antibodies might be blocking the extracellular truncation of α-syn by calpain-1. Together, these results demonstrate that antibodies against the CT of α-syn reduce levels of CT-truncated fragments of the protein and its propagation, thus ameliorating PD-like pathology and improving behavioral and motor functions in a mouse model of this disease. Copyright © 2014 the authors 0270-6474/14/349441-14$15.00/0.

  20. Dimensioning of multiservice links taking account of soft blocking

    DEFF Research Database (Denmark)

    Iversen, Villy Bæk; Stepanov, S.N.; Kostrov, A.V.

    2006-01-01

    of a multiservice link taking into account the possibility of soft blocking. An approximate algorithm for estimation of main performance measures is constructed. The error of estimation is numerically studied for different types of soft blocking. The optimal procedure of dimensioning is suggested....

  1. Enhancing propagation characteristics of truncated localized waves in silica

    KAUST Repository

    Salem, Mohamed

    2011-07-01

    The spectral characteristics of truncated Localized Waves propagating in dispersive silica are analyzed. Numerical experiments show that the immunity of the truncated Localized Waves propagating in dispersive silica to decay and distortion is enhanced as the non-linearity of the relation between the transverse spatial spectral components and the wave vector gets stronger, in contrast to free-space propagating waves, which suffer from early decay and distortion. © 2011 IEEE.

  2. STACK DECODING OF LINEAR BLOCK CODES FOR DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM

    Directory of Open Access Journals (Sweden)

    H. Prashantha Kumar

    2012-03-01

Full Text Available The boundaries between block and convolutional codes have become diffused after recent advances in the understanding of the trellis structure of block codes and the tail-biting structure of some convolutional codes. Therefore, decoding algorithms traditionally proposed for convolutional codes have been applied to certain classes of block codes. This paper presents the decoding of block codes using a tree structure. Many good block codes are presently known. Several of them have been used in applications ranging from deep space communication to error control in storage systems. But the primary difficulty with applying the Viterbi or BCJR algorithms to the decoding of block codes is that, even though they are optimum decoding methods, the promised bit error rates are not achieved in practice at data rates close to capacity. This is because the decoding effort is fixed and grows with block length, and thus only short block length codes can be used. Therefore, an important practical question is whether a suboptimal realizable soft decision decoding method can be found for block codes. A noteworthy result which provides a partial answer to this question is described in the following sections. This result of near optimum decoding will be used as motivation for the investigation of different soft decision decoding methods for linear block codes, which can lead to the development of efficient decoding algorithms. The code tree can be treated as an expanded version of the trellis, where every path is totally distinct from every other path. We have derived the tree structure for the (8, 4) and (16, 11) extended Hamming codes and have succeeded in implementing the soft decision stack algorithm to decode them. For the discrete memoryless channel, gains in excess of 1.5 dB at a bit error rate of 10⁻⁵ with respect to conventional hard decision decoding are demonstrated for these codes.

  3. Optical image-hiding method with false information disclosure based on the interference principle and partial-phase-truncation in the fractional Fourier domain

    International Nuclear Information System (INIS)

    Dai, Chaoqing; Wang, Xiaogang; Zhou, Guoquan; Chen, Junlang

    2014-01-01

An image-hiding method based on the optical interference principle and partial phase truncation in the fractional Fourier domain is proposed. The primary image is converted into three phase-only masks (POMs) using an analytical algorithm involving partial phase truncation and a fast random pixel exchange process. A procedure of a fake silhouette for a decryption key is suggested to reinforce the encryption and give a hint of the position of the key. The fractional orders of the FrFT effectively enhance the security of the system. In the decryption process, the POM with false information and the other two POMs are, respectively, placed in the input and fractional Fourier planes to recover the primary image. There are no unintended information disclosures or iterative computations involved in the proposed method. Simulation results are presented to verify the validity of the proposed approach. (letters)

  4. Truncation of CPC solar collectors and its effect on energy collection

    Science.gov (United States)

    Carvalho, M. J.; Collares-Pereira, M.; Gordon, J. M.; Rabl, A.

    1985-01-01

    Analytic expressions are derived for the angular acceptance function of two-dimensional compound parabolic concentrator solar collectors (CPC's) of arbitrary degree of truncation. Taking into account the effect of truncation on both optical and thermal losses in real collectors, the increase in monthly and yearly collectible energy is also evaluated. Prior analyses that have ignored the correct behavior of the angular acceptance function at large angles for truncated collectors are shown to be in error by 0-2 percent in calculations of yearly collectible energy for stationary collectors.

  5. Fundamental Parallel Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Nelson, Michael

    2008-01-01

    about the way cores are interconnected, for we assume that all inter-processor communication occurs through the memory hierarchy. We study several fundamental problems, including prefix sums, selection, and sorting, which often form the building blocks of other parallel algorithms. Indeed, we present...... two sorting algorithms, a distribution sort and a mergesort. Our algorithms are asymptotically optimal in terms of parallel cache accesses and space complexity under reasonable assumptions about the relationships between the number of processors, the size of memory, and the size of cache blocks....... In addition, we study sorting lower bounds in a computational model, which we call the parallel external-memory (PEM) model, that formalizes the essential properties of our algorithms for private-cache CMPs....

  6. An enhanced chaotic key-based RC5 block cipher adapted to image encryption

    Science.gov (United States)

    Faragallah, Osama S.

    2012-07-01

    RC5 is a block cipher that has several salient features such as adaptability to process different word lengths with a variable block size, a variable number of rounds and a variable-length secret key. However, RC5 can be broken with various attacks such as correlation attack, timing attack, known plaintext correlation attack and differential attacks, revealing weak security. We aimed to enhance the RC5 block cipher to be more secure and efficient for real-time applications while preserving its advantages. For this purpose, this article introduces a new approach based on strengthening both the confusion and diffusion operations by combining chaos and cryptographic primitive operations to produce round keys with better pseudo-random sequences. Comparative security analysis and performance evaluation of the enhanced RC5 block cipher (ERC5) with RC5, RC6 and chaotic block cipher algorithm (CBCA) are addressed. Several test images are used for inspecting the validity of the encryption and decryption algorithms. The experimental results show the superiority of the suggested enhanced RC5 (ERC5) block cipher to image encryption algorithms such as RC5, RC6 and CBCA from the security analysis and performance evaluation points of view.
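The chaos-and-cryptography combination described above can be illustrated with a generic logistic-map keystream. The sketch below is a hedged illustration of the general chaos-based key generation technique only, not the paper's actual ERC5 key schedule; the function name and parameters (`r`, `skip`) are illustrative assumptions:

```python
def logistic_keystream(x0, r=3.99, n=16, skip=100):
    """Derive n pseudo-random bytes from logistic-map iterates.

    A generic chaos-based key-schedule sketch: the map x -> r*x*(1-x)
    is chaotic for r near 4, so nearby seeds diverge quickly.
    """
    x = x0
    for _ in range(skip):          # discard the transient
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)   # quantize each iterate to a byte
    return bytes(out)
```

Because the map is chaotic, a tiny change in the seed yields a completely different byte sequence, which is the sensitivity property such schemes rely on for round-key generation.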

  7. Space Launch Systems Block 1B Preliminary Navigation System Design

    Science.gov (United States)

    Oliver, T. Emerson; Park, Thomas; Anzalone, Evan; Smith, Austin; Strickland, Dennis; Patrick, Sean

    2018-01-01

NASA is currently building the Space Launch Systems (SLS) Block 1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. In parallel, NASA is also designing the Block 1B launch vehicle. The Block 1B vehicle is an evolution of the Block 1 vehicle and extends the capability of the NASA launch vehicle. This evolution replaces the Interim Cryogenic Propulsive Stage (ICPS) with the Exploration Upper Stage (EUS). As the vehicle evolves to provide greater lift capability, increased robustness for manned missions, and the capability to execute more demanding missions, so must the SLS Integrated Navigation System evolve to support those missions. This paper describes the preliminary navigation system design for the SLS Block 1B vehicle. The evolution of the navigation hardware and algorithms from an inertial-only navigation system for Block 1 ascent flight to a tightly coupled GPS-aided inertial navigation system for Block 1B is described. The Block 1 GN&C system has been designed to meet a LEO insertion target with a specified accuracy. The Block 1B vehicle navigation system is designed to support the Block 1 LEO target accuracy as well as trans-lunar or trans-planetary injection accuracy. Additionally, the Block 1B vehicle is designed to support human exploration and thus is designed to minimize the probability of Loss of Crew (LOC) through high-quality inertial instruments and robust algorithm design, including Fault Detection, Isolation, and Recovery (FDIR) logic.

  8. Transiently truncated and differentially regulated expression of midkine during mouse embryogenesis

    International Nuclear Information System (INIS)

    Chen Qin; Yuan Yuanyang; Lin Shuibin; Chang Youde; Zhuo Xinming; Wei Wei; Tao Ping; Ruan Lingjuan; Li Qifu; Li Zhixing

    2005-01-01

Midkine (MK) is a retinoic acid response cytokine, mostly expressed in embryonic tissues. Aberrant expression of MK was found in numerous cancers. In humans, a truncated MK was expressed specifically in tumor/cancer tissues. Here we report the discovery of a novel truncated form of MK transiently expressed during normal mouse embryonic development. In addition, MK is concentrated at the interface between developing epithelium and mesenchyme as well as in highly proliferating cells. Its expression, which is closely coordinated with angiogenesis and vasculogenesis, is spatiotemporally regulated, with peaks during the period of extensive organogenesis and in undifferentiated cells, tailing off in maturing cells, implying its role in nascent blood vessel (endothelial) signaling of tissue differentiation and stem cell renewal/differentiation. Cloning and sequencing analysis revealed that the embryonic truncated MK, in which the conserved domain is deleted in-frame, presumably producing a novel secreted small peptide, is different from the truncated form in human cancer tissues, whose deletion results in a frame-shift mutation. Our data suggest that MK may play a role in epithelium-mesenchyme interactions, blood vessel signaling, and the decision of proliferation vs differentiation. Detection of the transiently expressed truncated MK reveals its novel function in development and sheds light on its role in carcinogenesis.

  9. Modifications of Geometric Truncation of the Scattering Phase Function

    Science.gov (United States)

    Radkevich, A.

    2017-12-01

The phase function (PF) of light scattering on large atmospheric particles has a very strong peak in the forward direction, constituting a challenge for accurate numerical calculations of radiance. Such accurate (and fast) evaluations are important in problems of remote sensing of the atmosphere. A scaling transformation replaces the original PF with a sum of a delta function and a new regular, smooth PF. A number of methods to construct such a PF have been suggested. The delta-M and delta-fit methods require evaluation of the PF moments, which poses a numerical problem if a strongly anisotropic PF is given as a function of angle. Geometric truncation keeps the original PF unchanged outside the forward peak cone, replacing it with a constant within the cone. This approach is designed to preserve the asymmetry parameter. It has two disadvantages: 1) the PF has a discontinuity at the cone; 2) the choice of the cone is subjective, and no recommendations were provided on the choice of the truncation angle. This choice affects both the truncation fraction and the value of the phase function within the forward cone. Both issues are addressed in this study. A simple functional form of the replacement PF is suggested. This functional form allows for a number of modifications; this study considers three versions providing a continuous PF. Each modification also has one of three properties: it preserves the asymmetry parameter, provides continuity of the first derivative of the PF, or preserves the mean scattering angle. The second problem mentioned above is addressed with a heuristic approach providing an unambiguous criterion for selection of the truncation angle. The approach showed good performance on liquid water and ice clouds with different particle size distributions. The suggested modifications were tested on different cloud PFs using both discrete ordinates and Monte Carlo methods. It was shown that the modifications provide better accuracy of the radiance computation compared to the original geometric truncation.

  10. Link adaptation algorithm for distributed coded transmissions in cooperative OFDMA systems

    DEFF Research Database (Denmark)

    Varga, Mihaly; Badiu, Mihai Alin; Bota, Vasile

    2015-01-01

    This paper proposes a link adaptation algorithm for cooperative transmissions in the down-link connection of an OFDMA-based wireless system. The algorithm aims at maximizing the spectral efficiency of a relay-aided communication link, while satisfying the block error rate constraints at both...... adaptation algorithm has linear complexity with the number of available resource blocks, while still provides a very good performance, as shown by simulation results....

  11. Block-conjugate-gradient method

    International Nuclear Information System (INIS)

    McCarthy, J.F.

    1989-01-01

    It is shown that by using the block-conjugate-gradient method several, say s, columns of the inverse Kogut-Susskind fermion matrix can be found simultaneously, in less time than it would take to run the standard conjugate-gradient algorithm s times. The method improves in efficiency relative to the standard conjugate-gradient algorithm as the fermion mass is decreased and as the value of the coupling is pushed to its limit before the finite-size effects become important. Thus it is potentially useful for measuring propagators in large lattice-gauge-theory calculations of the particle spectrum
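The block iteration described above can be sketched in a few lines of NumPy. This is a generic block-conjugate-gradient solver for s right-hand sides at once (in the spirit of O'Leary's formulation), not the lattice-specific Kogut-Susskind implementation; the function name and tolerances are illustrative:

```python
import numpy as np

def block_cg(A, B, tol=1e-10, max_iter=1000):
    """Block conjugate gradient: solve A X = B for an SPD matrix A and
    a block B of s right-hand-side columns simultaneously."""
    X = np.zeros_like(B)
    R = B - A @ X                  # block residual, shape (n, s)
    P = R.copy()                   # block of search directions
    RtR = R.T @ R                  # s x s Gram matrix of residuals
    for _ in range(max_iter):
        AP = A @ P
        alpha = np.linalg.solve(P.T @ AP, RtR)   # s x s step matrix
        X = X + P @ alpha
        R = R - AP @ alpha
        RtR_new = R.T @ R
        if np.sqrt(np.trace(RtR_new)) < tol:     # Frobenius norm of residual
            break
        beta = np.linalg.solve(RtR, RtR_new)     # s x s direction update
        P = R + P @ beta
        RtR = RtR_new
    return X
```

Each iteration costs one block matrix-vector product `A @ P` shared by all s columns, which is where the saving over s independent CG runs comes from.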

  12. A Support Vector Machine Approach for Truncated Fingerprint Image Detection from Sweeping Fingerprint Sensors

    Science.gov (United States)

    Chen, Chi-Jim; Pai, Tun-Wen; Cheng, Mox

    2015-01-01

A sweeping fingerprint sensor converts fingerprints on a row by row basis through image reconstruction techniques. However, a built fingerprint image might appear truncated and distorted when the finger was swept across the fingerprint sensor at a non-linear speed. If the truncated fingerprint images were enrolled as reference targets and collected by any automated fingerprint identification system (AFIS), successful prediction rates for fingerprint matching applications would be decreased significantly. In this paper, a novel and effective methodology with low computational time complexity was developed for detecting truncated fingerprints in a real-time manner. Several filtering rules were implemented to validate the existence of truncated fingerprints. In addition, a machine learning method, the support vector machine (SVM), based on the principle of structural risk minimization, was applied to reject pseudo-truncated fingerprints containing characteristics similar to those of truncated ones. The experimental result has shown that an accuracy rate of 90.7% was achieved by successfully identifying truncated fingerprint images from testing images before AFIS enrollment procedures. The proposed effective and efficient methodology can be extensively applied to all existing fingerprint matching systems as a preliminary quality control prior to construction of fingerprint templates. PMID:25835186

  13. A Support Vector Machine Approach for Truncated Fingerprint Image Detection from Sweeping Fingerprint Sensors

    Directory of Open Access Journals (Sweden)

    Chi-Jim Chen

    2015-03-01

Full Text Available A sweeping fingerprint sensor converts fingerprints on a row by row basis through image reconstruction techniques. However, a built fingerprint image might appear truncated and distorted when the finger was swept across the fingerprint sensor at a non-linear speed. If the truncated fingerprint images were enrolled as reference targets and collected by any automated fingerprint identification system (AFIS), successful prediction rates for fingerprint matching applications would be decreased significantly. In this paper, a novel and effective methodology with low computational time complexity was developed for detecting truncated fingerprints in a real-time manner. Several filtering rules were implemented to validate the existence of truncated fingerprints. In addition, a machine learning method, the support vector machine (SVM), based on the principle of structural risk minimization, was applied to reject pseudo-truncated fingerprints containing characteristics similar to those of truncated ones. The experimental result has shown that an accuracy rate of 90.7% was achieved by successfully identifying truncated fingerprint images from testing images before AFIS enrollment procedures. The proposed effective and efficient methodology can be extensively applied to all existing fingerprint matching systems as a preliminary quality control prior to construction of fingerprint templates.

  14. Propagation of truncated modified Laguerre-Gaussian beams

    Science.gov (United States)

    Deng, D.; Li, J.; Guo, Q.

    2010-01-01

    By expanding the circ function into a finite sum of complex Gaussian functions and applying the Collins formula, the propagation of hard-edge diffracted modified Laguerre-Gaussian beams (MLGBs) through a paraxial ABCD system is studied, and the approximate closed-form propagation expression of hard-edge diffracted MLGBs is obtained. The transverse intensity distribution of the MLGB carrying finite power can be characterized by a single bright and symmetric ring during propagation when the aperture radius is very large. Starting from the definition of the generalized truncated second-order moments, the beam quality factor of MLGBs through a hard-edged circular aperture is investigated in a cylindrical coordinate system, which turns out to be dependent on the truncated radius and the beam orders.

  15. Truncated Newton-Raphson Methods for Quasicontinuum Simulations

    National Research Council Canada - National Science Library

    Liang, Yu; Kanapady, Ramdev; Chung, Peter W

    2006-01-01

    .... In this research, we report the effectiveness of the truncated Newton-Raphson method and quasi-Newton method with low-rank Hessian update strategy that are evaluated against the full Newton-Raphson...

  16. Synthesis algorithm of VLSI multipliers for ASIC

    Science.gov (United States)

    Chua, O. H.; Eldin, A. G.

    1993-01-01

    Multipliers are critical sub-blocks in ASIC design, especially for digital signal processing and communications applications. A flexible multiplier synthesis tool is developed which is capable of generating multiplier blocks for word size in the range of 4 to 256 bits. A comparison of existing multiplier algorithms is made in terms of speed, silicon area, and suitability for automated synthesis and verification of its VLSI implementation. The algorithm divides the range of supported word sizes into sub-ranges and provides each sub-range with a specific multiplier architecture for optimal speed and area. The algorithm of the synthesis tool and the multiplier architectures are presented. Circuit implementation and the automated synthesis methodology are discussed.

  17. Seismic noise attenuation using an online subspace tracking algorithm

    Science.gov (United States)

    Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang

    2018-02-01

We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations. The algorithm is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be directly applied to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with the state-of-the-art algorithms, the proposed denoising method can obtain better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method, leaving less residual noise while saving half of the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
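The TSVD baseline that the abstract compares against is straightforward to sketch: keep only the leading singular subspace of the data matrix and discard the rest as noise. This is a hedged illustration of the baseline technique, not the paper's online Grassmannian tracker:

```python
import numpy as np

def tsvd_denoise(D, rank):
    """Project noisy data onto its leading-rank singular subspace:
    the classical TSVD low-rank denoising baseline."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    # reconstruct from the top `rank` singular triplets only
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```

An online tracker achieves a similar projection while updating the subspace one observation at a time, which is why it tolerates streaming, highly corrupted data better than recomputing the full SVD.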

  18. The Stars and Gas in Outer Parts of Galaxy Disks : Extended or Truncated, Flat or Warped?

    NARCIS (Netherlands)

    van der Kruit, P. C.; Funes, JG; Corsini, EM

    2008-01-01

    I review observations of truncations of stellar disks and models for their origin, compare observations of truncations in moderately inclined galaxies to those in edge-on systems and discuss the relation between truncations and H I-warps and their systematics and origin. Truncations are a common

  19. On truncated Taylor series and the position of their spurious zeros

    DEFF Research Database (Denmark)

    Christiansen, Søren; Madsen, Per A.

    2006-01-01

    A truncated Taylor series, or a Taylor polynomial, which may appear when treating the motion of gravity water waves, is obtained by truncating an infinite Taylor series for a complex, analytical function. For such a polynomial the position of the complex zeros is considered in case the Taylor...

  20. Quantum Partial Searching Algorithm of a Database with Several Target Items

    International Nuclear Information System (INIS)

    Pu-Cha, Zhong; Wan-Su, Bao; Yun, Wei

    2009-01-01

    Choi and Korepin [Quantum Information Processing 6(2007)243] presented a quantum partial search algorithm of a database with several target items which can find a target block quickly when each target block contains the same number of target items. Actually, the number of target items in each target block is arbitrary. Aiming at this case, we give a condition to guarantee performance of the partial search algorithm to be performed and the number of queries to oracle of the algorithm to be minimized. In addition, by further numerical computing we come to the conclusion that the more uniform the distribution of target items, the smaller the number of queries

  1. Wavelength converter placement for different RWA algorithms in wavelength-routed all-optical networks

    Science.gov (United States)

    Chu, Xiaowen; Li, Bo; Chlamtac, Imrich

    2002-07-01

Sparse wavelength conversion and appropriate routing and wavelength assignment (RWA) algorithms are the two key factors in improving the blocking performance in wavelength-routed all-optical networks. It has been shown that the optimal placement of a limited number of wavelength converters in an arbitrary mesh network is an NP-complete problem. Various heuristic algorithms have been proposed in the literature, most of which assume that a static routing and random wavelength assignment RWA algorithm is employed. However, the existing work shows that fixed-alternate routing and dynamic routing RWA algorithms can achieve much better blocking performance. Our study in this paper further demonstrates that the wavelength converter placement and RWA algorithms are closely related, in the sense that a well designed wavelength converter placement mechanism for a particular RWA algorithm might not work well with a different RWA algorithm. Therefore, wavelength converter placement and RWA have to be considered jointly. The objective of this paper is to investigate the wavelength converter placement problem under the fixed-alternate routing algorithm and the least-loaded routing algorithm. Under the fixed-alternate routing algorithm, we propose a heuristic algorithm called Minimum Blocking Probability First (MBPF) for wavelength converter placement. Under the least-loaded routing algorithm, we propose a heuristic converter placement algorithm called Weighted Maximum Segment Length (WMSL). The objective of the converter placement algorithm is to minimize the overall blocking probability. Extensive simulation studies have been carried out over three typical mesh networks, including the 14-node NSFNET, 19-node EON and 38-node CTNET. We observe that the proposed algorithms not only outperform existing wavelength converter placement algorithms by a large margin, but can also achieve almost the same performance compared with full wavelength

  2. Efficient block processing of long duration biotelemetric brain data for health care monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Soumya, I. [Department of E.I.E, GITAM University, Visakhapatnam (India); Zia Ur Rahman, M., E-mail: mdzr-5@ieee.org [Department of E.C.E, K.L. University, Vaddeswaram, Green Fields, Guntur, Andhra Pradesh (India); Rama Koti Reddy, D. V. [Department of Instrumentation Engineering, College of Engineering, Andhra University, Visakhapatnam (India); Lay-Ekuakille, A. [Department of Innovation Engineering, University of Salento, Lecce (Italy)

    2015-03-15

In a real-time clinical environment, the brain signals which doctors need to analyze are usually very long. Such a scenario can be made simpler by partitioning the input signal into several blocks and applying signal conditioning. This paper presents various block-based adaptive filter structures for obtaining high resolution electroencephalogram (EEG) signals, which estimate the deterministic components of the EEG signal by removing noise. To process these long duration signals, we propose the Time-domain Block Least Mean Square (TDBLMS) algorithm for brain signal enhancement. In order to improve filtering capability, we introduce normalization in the weight update recursion of TDBLMS, which results in TD-B-Normalized-LMS. To increase accuracy and resolution in the proposed noise cancelers, we implement the time-domain cancelers in the frequency domain, which results in frequency-domain TDBLMS and FD-B-Normalized-LMS. Finally, we have applied these algorithms to real EEG signals obtained from humans using the Emotive Epoc EEG recorder and compared their performance with the conventional LMS algorithm. The results show that the performance of the block-based algorithms is superior to their LMS counterparts in terms of signal to noise ratio, convergence rate, excess mean square error, misadjustment, and coherence.

  3. Efficient block processing of long duration biotelemetric brain data for health care monitoring

    International Nuclear Information System (INIS)

    Soumya, I.; Zia Ur Rahman, M.; Rama Koti Reddy, D. V.; Lay-Ekuakille, A.

    2015-01-01

In a real-time clinical environment, the brain signals which doctors need to analyze are usually very long. Such a scenario can be made simpler by partitioning the input signal into several blocks and applying signal conditioning. This paper presents various block-based adaptive filter structures for obtaining high resolution electroencephalogram (EEG) signals, which estimate the deterministic components of the EEG signal by removing noise. To process these long duration signals, we propose the Time-domain Block Least Mean Square (TDBLMS) algorithm for brain signal enhancement. In order to improve filtering capability, we introduce normalization in the weight update recursion of TDBLMS, which results in TD-B-Normalized-LMS. To increase accuracy and resolution in the proposed noise cancelers, we implement the time-domain cancelers in the frequency domain, which results in frequency-domain TDBLMS and FD-B-Normalized-LMS. Finally, we have applied these algorithms to real EEG signals obtained from humans using the Emotive Epoc EEG recorder and compared their performance with the conventional LMS algorithm. The results show that the performance of the block-based algorithms is superior to their LMS counterparts in terms of signal to noise ratio, convergence rate, excess mean square error, misadjustment, and coherence.
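The block LMS idea in the two records above — accumulate the gradient over a block of samples and update the weights once per block — can be sketched as follows. This is a hedged, generic time-domain block LMS, not the paper's TDBLMS implementation; the parameter names (`L`, `n_taps`, `mu`) are illustrative:

```python
import numpy as np

def block_lms(x, d, L=32, n_taps=16, mu=0.05):
    """Time-domain block LMS: the filter weights are updated once per
    block of L samples instead of once per sample."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for start in range(0, len(x) - L + 1, L):
        grad = np.zeros(n_taps)
        for i in range(start, start + L):
            if i < n_taps - 1:
                continue                          # not enough history yet
            u = x[i - n_taps + 1 : i + 1][::-1]   # most recent sample first
            y[i] = w @ u
            e = d[i] - y[i]
            grad += e * u                         # accumulate gradient over block
        w = w + (mu / L) * grad                   # one update per block
    return w, y
```

Updating once per block averages the per-sample gradients, which smooths the adaptation and is what makes the frequency-domain (FFT-based) variants of the recursion efficient.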

  4. A Motion Estimation Algorithm Using DTCWT and ARPS

    Directory of Open Access Journals (Sweden)

    Unan Y. Oktiawati

    2013-09-01

Full Text Available In this paper, a hybrid motion estimation algorithm utilizing the Dual Tree Complex Wavelet Transform (DTCWT) and the Adaptive Rood Pattern Search (ARPS) block search is presented. The proposed algorithm first transforms each video sequence with the DTCWT. Frame n of the video sequence is used as a reference input and frame n+2 is used to find the motion vector. Next, the ARPS block search algorithm is carried out, followed by an inverse DTCWT. Motion compensation is then carried out on each inversed frame n and motion vector. The results show that PSNR can be improved for mobile devices without degrading visual quality. The proposed algorithm also uses less memory than the DCT-based algorithm. The main contribution of this work is a hybrid wavelet-based motion estimation algorithm for mobile devices. Another contribution is the visual quality scoring system as used in section 6.
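ARPS is a fast block-matching search pattern; the matching step it accelerates can be illustrated with the exhaustive full-search baseline, which uses the same sum-of-absolute-differences (SAD) cost over a search window. This is a hedged stand-in for ARPS (same cost, denser search pattern), with illustrative names and parameters:

```python
import numpy as np

def full_search(ref, cur, block=8, radius=4):
    """Exhaustive block-matching motion estimation with a SAD cost.

    For each block in the current frame, try every displacement within
    +/- radius in the reference frame and keep the lowest-SAD vector.
    """
    H, W = ref.shape
    vectors = {}
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            target = cur[by:by + block, bx:bx + block]
            best_sad, best_mv = None, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        sad = np.abs(ref[y:y + block, x:x + block] - target).sum()
                        if best_sad is None or sad < best_sad:
                            best_sad, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors
```

ARPS reaches essentially the same minima while evaluating only a rood-shaped subset of these candidates, which is where its speed advantage on mobile devices comes from.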

  5. Closed-form kinetic parameter estimation solution to the truncated data problem

    International Nuclear Information System (INIS)

    Zeng, Gengsheng L; Kadrmas, Dan J; Gullberg, Grant T

    2010-01-01

    In a dedicated cardiac single photon emission computed tomography (SPECT) system, the detectors are focused on the heart and the background is truncated in the projections. Reconstruction using truncated data results in biased images, leading to inaccurate kinetic parameter estimates. This paper has developed a closed-form kinetic parameter estimation solution to the dynamic emission imaging problem. This solution is insensitive to the bias in the reconstructed images that is caused by the projection data truncation. This paper introduces two new ideas: (1) it includes background bias as an additional parameter to estimate, and (2) it presents a closed-form solution for compartment models. The method is based on the following two assumptions: (i) the amount of the bias is directly proportional to the truncated activities in the projection data, and (ii) the background concentration is directly proportional to the concentration in the myocardium. In other words, the method assumes that the image slice contains only the heart and the background, without other organs, that the heart is not truncated, and that the background radioactivity is directly proportional to the radioactivity in the blood pool. As long as the background activity can be modeled, the proposed method is applicable regardless of the number of compartments in the model. For simplicity, the proposed method is presented and verified using a single compartment model with computer simulations using both noiseless and noisy projections.

  6. A New Algorithm for Determining Ultimate Pit Limits Based on Network Optimization

    Directory of Open Access Journals (Sweden)

    Ali Asghar Khodayari

    2013-12-01

    Full Text Available One of the main concerns of the mining industry is to determine ultimate pit limits. The final pit is a collection of blocks which can be removed with maximum profit while respecting restrictions on the slope of the mine's walls. The size, location and final shape of an open pit are very important in designing the location of waste dumps, stockpiles, processing plants, access roads and other surface facilities as well as in developing a production program. There are numerous methods for designing ultimate pit limits. Some of these methods, such as the floating cone algorithm, are heuristic and do not guarantee optimum pit limits. Other methods, like the Lerchs-Grossmann algorithm, are rigorous and always generate the true optimum pit limits. In this paper, a new rigorous algorithm is introduced. The main logic in this method is that only positive blocks which can pay the costs of their overlying non-positive blocks are able to appear in the final pit. Those costs may be paid either by the positive block itself or jointly with other positive blocks that have the same overlying negative blocks. This logic is formulated using a network model as a Linear Programming (LP) problem. The algorithm can be applied to two- and three-dimensional block models. Since there are many commercial programs available for solving LP problems, pit limits in large block models can be determined easily by using this method.

  7. Low-mode truncation methods in the sine-Gordon equation

    International Nuclear Information System (INIS)

    Xiong Chuyu.

    1991-01-01

    In this dissertation, the author studies the chaotic and coherent motions (i.e., low-dimensional chaotic attractors) in some near-integrable partial differential equations, particularly the sine-Gordon equation and the nonlinear Schroedinger equation. In order to study the motions, he uses low-mode truncation methods to reduce these partial differential equations to truncated models (low-dimensional ordinary differential equations). By applying the many methods available for low-dimensional ordinary differential equations, he can understand the low-dimensional chaotic attractors of PDEs much better. However, there are two important questions one needs to answer: (1) How many modes are enough for the low-mode truncated models to capture the dynamics uniformly? (2) Is the chaotic attractor in a low-mode truncated model close to the chaotic attractor in the original PDE, and how close? He has developed two groups of powerful methods to help answer these two questions: the computational methods of continuation and local bifurcation, and local Lyapunov exponents and Lyapunov exponents. Using these methods, he concludes that the 2N-nls ODE is a good model for the sine-Gordon equation and the nonlinear Schroedinger equation provided one chooses a 'good' basis and uses 'enough' modes (where 'enough' depends on the parameters of the system but is small for the parameters studied here). Therefore, one can use the 2N-nls ODE to study the chaos of PDEs in more depth.

  8. Minimum BER Receiver Filters with Block Memory for Uplink DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Debbah Mérouane

    2008-01-01

    Full Text Available The problem of synchronous multiuser receiver design in the case of direct-sequence single-antenna code division multiple access (DS-CDMA) uplink networks is studied over frequency selective fading channels. An exact expression for the bit error rate (BER) is derived in the case of BPSK signaling. Moreover, an algorithm is proposed for finding the finite impulse response (FIR) receiver filters with block memory such that the exact BER of the active users is minimized. Several properties of the minimum BER FIR filters with block memory are identified. The algorithm performance is found for scenarios with different channel qualities, spreading code lengths, receiver block memory sizes, near-far effects, and channel mismatch. For the BPSK constellation, the proposed FIR receiver structure with block memory has significantly better BER with respect to Eb/N0 and near-far resistance than the corresponding minimum mean square error (MMSE) filters with block memory.

  9. Block spins and chirality in Heisenberg model on Kagome and triangular lattices

    International Nuclear Information System (INIS)

    Subrahmanyam, V.

    1994-01-01

    The spin-1/2 Heisenberg model (HM) is investigated using a block-spin renormalization approach on the Kagome and triangular lattices. In both cases, after coarse-graining the triangles on the original lattice and truncating the Hilbert space to the triangular ground-state subspace, the HM reduces to an effective model on a triangular lattice in terms of the triangular-block degrees of freedom, viz. the spin and chirality quantum numbers. The chirality part of the effective Hamiltonian captures the essential difference between the two lattices. It is seen that simple eigenstates can be constructed for the effective model whose energies serve as upper bounds on the exact ground-state energy of the HM, and chiral ordered variational states have high energies compared to the other variational states. (author). 12 refs, 2 figs

  10. A deblocking algorithm based on color psychology for display quality enhancement

    Science.gov (United States)

    Yeh, Chia-Hung; Tseng, Wen-Yu; Huang, Kai-Lin

    2012-12-01

    This article proposes a post-processing deblocking filter to reduce blocking effects. The proposed algorithm detects blocking effects by fusing the results of Sobel edge detector and wavelet-based edge detector. The filtering stage provides four filter modes to eliminate blocking effects at different color regions according to human color vision and color psychology analysis. Experimental results show that the proposed algorithm has better subjective and objective qualities for H.264/AVC reconstructed videos when compared to several existing methods.
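    The Sobel half of the detection stage can be illustrated as follows. The wavelet-based detector and the fusion rule of the article are not reproduced; the 3x3 kernels and the list-of-lists image format are standard textbook conventions rather than details from the paper.

```python
# Sobel gradient magnitude on a grayscale image (list of lists).
# Border pixels are left at 0 since the 3x3 kernels need a full
# neighbourhood.

KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal Sobel kernel
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical Sobel kernel

def sobel_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(KX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(KY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

    On a vertical step edge the response is large along the step and zero in flat regions, which is the cue a deblocking filter uses to separate real edges from block boundaries.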

  11. Parallel field line and stream line tracing algorithms for space physics applications

    Science.gov (United States)

    Toth, G.; de Zeeuw, D.; Monostori, G.

    2004-05-01

    Field line and stream line tracing is required in various space physics applications, such as the coupling of the global magnetosphere and inner magnetosphere models, the coupling of the solar energetic particle and heliosphere models, or the modeling of comets, where the multispecies chemical equations are solved along stream lines of a steady state solution obtained with a single-fluid MHD model. Tracing a vector field is an inherently serial process, which is difficult to parallelize. This is especially true when the data corresponding to the vector field are distributed over a large number of processors. We designed algorithms for the various applications which scale well to a large number of processors. In the first algorithm the computational domain is divided into blocks, each residing on a single processor. The algorithm follows the vector field inside the blocks and calculates a mapping of the block surfaces. The blocks communicate the values at the coinciding surfaces, and the results are interpolated. Finally all block surfaces are defined and values inside the blocks are obtained. In the second algorithm all processors start integrating along the vector field inside the accessible volume. When a field line leaves the local subdomain, the position and other information is stored in a buffer. Periodically the processors exchange the buffers and continue integration of the field lines until they reach a boundary. At that point the results are sent back to the originating processor. Efficiency is achieved by a careful phasing of computation and communication. In the third algorithm the results of a steady state simulation are stored on a hard drive. The vector field is contained in blocks. All processors read in all the grid and vector field data, and the stream lines are integrated in parallel. If a stream line enters a block that has already been integrated, the results can be interpolated. By a clever ordering of the blocks the execution speed can be

  12. Estimation of Panel Data Regression Models with Two-Sided Censoring or Truncation

    DEFF Research Database (Denmark)

    Alan, Sule; Honore, Bo E.; Hu, Luojia

    2014-01-01

    This paper constructs estimators for panel data regression models with individual-specific heterogeneity and two-sided censoring and truncation. Following Powell (1986) the estimation strategy is based on moment conditions constructed from re-censored or re-truncated residuals. While these moment...

  13. An effective detection algorithm for region duplication forgery in digital images

    Science.gov (United States)

    Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin

    2016-04-01

    Powerful image editing tools are very common and easy to use these days. This situation may cause some forgeries by adding or removing some information on the digital images. In order to detect these types of forgeries such as region duplication, we present an effective algorithm based on fixed-size block computation and discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and then wavelet transform is applied for dimension reduction. Each block is processed by Fourier Transform and represented by circle regions. Four features are extracted from each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are detected according to comparison metric results. The experimental results show that the proposed algorithm presents computational efficiency due to fixed-size circle block architecture.
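    A stripped-down version of the sort-and-compare pipeline can be sketched as follows, using raw block pixels as the feature vector instead of the DWT/Fourier circle-region features of the paper:

```python
# Simplified duplicate-block search: fixed-size blocks, one feature tuple
# per block, lexicographic sort, then comparison of neighbours in the
# sorted list. Identical features flag candidate duplicated regions.

def find_duplicate_blocks(img, bs=2):
    h, w = len(img), len(img[0])
    feats = []
    for y in range(0, h - bs + 1, bs):
        for x in range(0, w - bs + 1, bs):
            block = tuple(img[y + j][x + i] for j in range(bs) for i in range(bs))
            feats.append((block, (x, y)))
    feats.sort()  # lexicographic sort brings identical features together
    pairs = []
    for a, b in zip(feats, feats[1:]):
        if a[0] == b[0]:  # identical features -> candidate duplication
            pairs.append((a[1], b[1]))
    return pairs
```

    The sort makes the comparison cost O(B log B) in the number of blocks B, instead of the O(B^2) of all-pairs matching; in practice the exact-equality test is replaced by a distance threshold on the (dimension-reduced) features.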

  14. A generalized right truncated bivariate Poisson regression model with applications to health data.

    Science.gov (United States)

    Islam, M Ataharul; Chowdhury, Rafiqul I

    2017-01-01

    A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over or under dispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model.
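    The effect of right truncation is easiest to see in the univariate case: the Poisson mass on {0, ..., t} is renormalised. The sketch below is a univariate illustration only; the paper's model is bivariate with regression covariates.

```python
import math

# Right-truncated Poisson pmf: the Poisson(lam) mass restricted to
# {0, 1, ..., t} and renormalised so it sums to one.

def truncated_poisson_pmf(k, lam, t):
    if not 0 <= k <= t:
        return 0.0
    norm = sum(math.exp(-lam) * lam ** j / math.factorial(j)
               for j in range(t + 1))
    return (math.exp(-lam) * lam ** k / math.factorial(k)) / norm
```

    Truncation rescales all probabilities by the same factor, so ratios of successive probabilities (and hence the regression structure) are unchanged on the retained support.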

  15. Algorithms in combinatorial design theory

    CERN Document Server

    Colbourn, CJ

    1985-01-01

    The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.

  16. Diagonal Limit for Conformal Blocks in d Dimensions

    CERN Document Server

    Hogervorst, Matthijs; Rychkov, Slava

    2013-01-01

    Conformal blocks in any number of dimensions depend on two variables z, zbar. Here we study their restrictions to the special "diagonal" kinematics z = zbar, previously found useful as a starting point for the conformal bootstrap analysis. We show that conformal blocks on the diagonal satisfy ordinary differential equations, third-order for spin zero and fourth-order for the general case. These ODEs determine the blocks uniquely and lead to an efficient numerical evaluation algorithm. For equal external operator dimensions, we find closed-form solutions in terms of finite sums of 3F2 functions.

  17. Bounded real and positive real balanced truncation using Σ-normalised coprime factors

    NARCIS (Netherlands)

    Trentelman, H.L.

    2009-01-01

    In this article, we will extend the method of balanced truncation using normalised right coprime factors of the system transfer matrix to balanced truncation with preservation of half line dissipativity. Special cases are preservation of positive realness and bounded realness. We consider a half

  18. Algebraic dynamics solutions and algebraic dynamics algorithm for nonlinear ordinary differential equations

    Institute of Scientific and Technical Information of China (English)

    WANG; Shunjin; ZHANG; Hua

    2006-01-01

    The problem of preserving fidelity in numerical computation of nonlinear ordinary differential equations is studied in terms of preserving the local differential structure and approximating the global integration structure of the dynamical system. The ordinary differential equations are lifted to the corresponding partial differential equations in the framework of algebraic dynamics, and a new algorithm, the algebraic dynamics algorithm, is proposed based on the exact analytical solutions of the ordinary differential equations by the algebraic dynamics method. In the new algorithm, the time evolution of the ordinary differential system is described locally by the time translation operator and globally by the time evolution operator. The exact analytical piecewise solution of the ordinary differential equations is expressed in terms of a Taylor series with a local convergence radius, and its finite-order truncation leads to the new numerical algorithm with a controllable precision better than the Runge-Kutta and symplectic geometric algorithms.
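    A truncated-Taylor step is easy to demonstrate on y' = -y, where every derivative is available in closed form (y^(n) = (-1)^n y). This toy example is ours for illustration; it is not the algebraic dynamics method itself.

```python
import math

# One step of y' = -y using a Taylor expansion truncated at `order`:
# y(t+h) = sum_{n=0}^{order} (-h)^n / n! * y(t), a partial sum of
# exp(-h) * y(t). Truncation order controls the local precision.

def taylor_step(y, h, order):
    step = 0.0
    for n in range(order + 1):
        step += ((-h) ** n / math.factorial(n)) * y
    return step

def integrate(y0, h, steps, order):
    y = y0
    for _ in range(steps):
        y = taylor_step(y, h, order)
    return y
```

    With order 8 and h = 0.1 the local truncation error per step is of order h^9/9!, so ten steps reproduce exp(-1) to well below 1e-9.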

  19. No chiral truncation of quantum log gravity?

    Science.gov (United States)

    Andrade, Tomás; Marolf, Donald

    2010-03-01

    At the classical level, chiral gravity may be constructed as a consistent truncation of a larger theory called log gravity by requiring that left-moving charges vanish. In turn, log gravity is the limit of topologically massive gravity (TMG) at a special value of the coupling (the chiral point). We study the situation at the level of linearized quantum fields, focussing on a unitary quantization. While the TMG Hilbert space is continuous at the chiral point, the left-moving Virasoro generators become ill-defined and cannot be used to define a chiral truncation. In a sense, the left-moving asymptotic symmetries are spontaneously broken at the chiral point. In contrast, in a non-unitary quantization of TMG, both the Hilbert space and charges are continuous at the chiral point and define a unitary theory of chiral gravity at the linearized level.

  20. WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations

    Science.gov (United States)

    Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi

    We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.
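    The Dijkstra-based baseline that such tree construction algorithms are compared against can be sketched as follows: shortest paths from the source to each destination are merged into a tree. The link re-weighting that encourages path sharing in the WST algorithm is omitted, and the graph format and names are illustrative.

```python
import heapq

# Dijkstra shortest paths, then merge source-to-destination paths into
# a multicast tree. `graph` maps node -> list of (neighbour, weight).

def dijkstra(graph, src):
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def multicast_tree(graph, src, dests):
    _, prev = dijkstra(graph, src)
    edges = set()
    for d in dests:
        node = d
        while node != src:  # walk back along the shortest-path tree
            edges.add((prev[node], node))
            node = prev[node]
    return edges
```

    Because destinations reuse shortest-path prefixes, overlapping paths are stored once; Steiner-style weighting goes further by biasing new paths toward links already on the tree.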

  1. Reduction of variable-truncation artifacts from beam occlusion during in situ x-ray tomography

    Science.gov (United States)

    Borg, Leise; Jørgensen, Jakob S.; Frikel, Jürgen; Sporring, Jon

    2017-12-01

    Many in situ x-ray tomography studies require experimental rigs which may partially occlude the beam and cause parts of the projection data to be missing. In a study of fluid flow in porous chalk using a percolation cell with four metal bars drastic streak artifacts arise in the filtered backprojection (FBP) reconstruction at certain orientations. Projections with non-trivial variable truncation caused by the metal bars are the source of these variable-truncation artifacts. To understand the artifacts a mathematical model of variable-truncation data as a function of metal bar radius and distance to sample is derived and verified numerically and with experimental data. The model accurately describes the arising variable-truncation artifacts across simulated variations of the experimental setup. Three variable-truncation artifact-reduction methods are proposed, all aimed at addressing sinogram discontinuities that are shown to be the source of the streaks. The ‘reduction to limited angle’ (RLA) method simply keeps only non-truncated projections; the ‘detector-directed smoothing’ (DDS) method smooths the discontinuities; while the ‘reflexive boundary condition’ (RBC) method enforces a zero derivative at the discontinuities. Experimental results using both simulated and real data show that the proposed methods effectively reduce variable-truncation artifacts. The RBC method is found to provide the best artifact reduction and preservation of image features using both visual and quantitative assessment. The analysis and artifact-reduction methods are designed in context of FBP reconstruction motivated by computational efficiency practical for large, real synchrotron data. While a specific variable-truncation case is considered, the proposed methods can be applied to general data cut-offs arising in different in situ x-ray tomography experiments.

  2. Amplitude reconstruction from complete photoproduction experiments and truncated partial-wave expansions

    International Nuclear Information System (INIS)

    Workman, R. L.; Tiator, L.; Wunderlich, Y.; Doring, M.; Haberzettl, H.

    2017-01-01

    Here, we compare the methods of amplitude reconstruction, for a complete experiment and a truncated partial-wave analysis, applied to the photoproduction of pseudoscalar mesons. The approach is pedagogical, showing in detail how the amplitude reconstruction (observables measured at a single energy and angle) is related to a truncated partial-wave analysis (observables measured at a single energy and a number of angles).

  3. Minimum BER Receiver Filters with Block Memory for Uplink DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Mérouane Debbah

    2008-05-01

    Full Text Available The problem of synchronous multiuser receiver design in the case of direct-sequence single-antenna code division multiple access (DS-CDMA) uplink networks is studied over frequency selective fading channels. An exact expression for the bit error rate (BER) is derived in the case of BPSK signaling. Moreover, an algorithm is proposed for finding the finite impulse response (FIR) receiver filters with block memory such that the exact BER of the active users is minimized. Several properties of the minimum BER FIR filters with block memory are identified. The algorithm performance is found for scenarios with different channel qualities, spreading code lengths, receiver block memory sizes, near-far effects, and channel mismatch. For the BPSK constellation, the proposed FIR receiver structure with block memory has significantly better BER with respect to Eb/N0 and near-far resistance than the corresponding minimum mean square error (MMSE) filters with block memory.

  4. Modified Truncated Multiplicity Analysis to Improve Verification of Uranium Fuel Cycle Materials

    International Nuclear Information System (INIS)

    LaFleur, A.; Miller, K.; Swinhoe, M.; Belian, A.; Croft, S.

    2015-01-01

    Accurate verification of 235U enrichment and mass in UF6 storage cylinders and the UO2F2 holdup contained in the process equipment is needed to improve international safeguards and nuclear material accountancy at uranium enrichment plants. Small UF6 cylinders (1.5'' and 5'' diameter) are used to store the full range of enrichments from depleted to highly-enriched UF6. For independent verification of these materials, it is essential that the 235U mass and enrichment measurements do not rely on facility operator declarations. Furthermore, in order to be deployed by IAEA inspectors to detect undeclared activities (e.g., during complementary access), it is also imperative that the measurement technique is quick, portable, and sensitive to a broad range of 235U masses. Truncated multiplicity analysis is a technique that reduces the variance in the measured count rates by only considering moments 1, 2, and 3 of the multiplicity distribution. This is especially important for reducing the uncertainty in the measured doubles and triples rates in environments with a high cosmic ray background relative to the uranium signal strength. However, we believe that the existing truncated multiplicity analysis throws away too much useful data by truncating the distribution after the third moment. This paper describes a modified truncated multiplicity analysis method that determines the optimal moment to truncate the multiplicity distribution based on the measured data. Experimental measurements of small UF6 cylinders and UO2F2 working reference materials were performed at Los Alamos National Laboratory (LANL). The data were analyzed using traditional and modified truncated multiplicity analysis to determine the optimal moment to truncate the multiplicity distribution to minimize the uncertainty in the measured count rates. The results from this analysis directly support nuclear safeguards at enrichment plants and provide a more accurate verification method for UF6
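    The quantities kept by a truncated multiplicity analysis can be illustrated with the reduced factorial moments of a measured multiplicity distribution p(n); the example distribution below is illustrative, not data from the paper, and the modified truncation-point selection is not reproduced.

```python
import math

# Reduced factorial moment of order r of a count distribution p(n):
# E[C(n, r)] = sum_n C(n, r) * p(n). Orders 1, 2, 3 correspond to the
# singles-, doubles- and triples-like moments kept by truncated
# multiplicity analysis.

def factorial_moment(p, order):
    total = 0.0
    for n, pn in enumerate(p):
        if n >= order:
            total += math.comb(n, order) * pn
    return total
```

    Higher-order moments weight the noisy tail of the distribution heavily, which is why truncating after low orders reduces the variance of the estimated rates.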

  5. Modeling the Effect of APC Truncation on Destruction Complex Function in Colorectal Cancer Cells

    Science.gov (United States)

    Barua, Dipak; Hlavacek, William S.

    2013-01-01

    In colorectal cancer cells, APC, a tumor suppressor protein, is commonly expressed in truncated form. Truncation of APC is believed to disrupt degradation of β-catenin, which is regulated by a multiprotein complex called the destruction complex. The destruction complex comprises APC, Axin, β-catenin, serine/threonine kinases, and other proteins. The kinases recruited by Axin mediate phosphorylation of β-catenin, which initiates its ubiquitination and proteasomal degradation. The mechanism of regulation of β-catenin degradation by the destruction complex and the role of truncation of APC in colorectal cancer are not entirely understood. Through formulation and analysis of a rule-based computational model, we investigated the regulation of β-catenin phosphorylation and degradation by APC and the effect of APC truncation on function of the destruction complex. The model integrates available mechanistic knowledge about site-specific interactions and phosphorylation of destruction complex components and is consistent with an array of published data. We find that the phosphorylated truncated form of APC can outcompete Axin for binding to β-catenin, provided that Axin is limiting, and thereby sequester β-catenin away from Axin and the Axin-recruited kinases. Full-length APC also competes with Axin for binding to β-catenin; however, full-length APC is able, through its SAMP repeats, which bind Axin and which are missing in truncated oncogenic forms of APC, to bring β-catenin into indirect association with Axin and the Axin-recruited kinases. Because our model indicates that the positive effects of truncated APC on β-catenin levels depend on phosphorylation of APC at the first 20-amino acid repeat, and because phosphorylation of this site is mediated by a specific kinase, we suggest that this kinase is a potential target for therapeutic intervention in colorectal cancer. Specific inhibition of this kinase is predicted to limit binding of β-catenin to truncated

  6. A Lynden-Bell integral estimator for the tail index of right-truncated ...

    African Journals Online (AJOL)

    By means of a Lynden-Bell integral with deterministic threshold, Worms and Worms [A Lynden-Bell integral estimator for extremes of randomly truncated data. Statist. Probab. Lett. 2016; 109: 106-117] recently introduced an asymptotically normal estimator of the tail index for randomly right-truncated Pareto-type data.

  7. Truncated States Obtained by Iteration

    International Nuclear Information System (INIS)

    Cardoso, W. B.; Almeida, N. G. de

    2008-01-01

    We introduce the concept of truncated states obtained via iterative processes (TSI) and study its statistical features, making an analogy with dynamical systems theory (DST). As a specific example, we have studied TSI for the doubling and the logistic functions, which are standard functions in studying chaos. TSI for both the doubling and logistic functions exhibit certain similar patterns when their statistical features are compared from the point of view of DST

  8. Adaptive Watermarking Algorithm in DCT Domain Based on Chaos

    Directory of Open Access Journals (Sweden)

    Wenhao Wang

    2013-05-01

    Full Text Available In order to improve the security, robustness and invisibility of digital watermarking, a new adaptive watermarking algorithm is proposed in this paper. First, the algorithm encrypts the watermark image with a chaotic sequence produced by the logistic chaotic map. The original image is then divided into sub-blocks and the discrete cosine transform (DCT) is applied to each. The watermark information is embedded into the mid-frequency coefficients of the sub-blocks. With the features of the Human Visual System (HVS) and image texture taken into account during embedding, the embedding intensity of the watermark adaptively adjusts according to the HVS and texture characteristics. Experimental results have shown that the proposed algorithm is robust against common image processing attacks, such as noise, cropping, filtering and JPEG compression, achieves a good tradeoff between invisibility and robustness, and offers better security.
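    The chaotic encryption step can be sketched with a logistic-map keystream XORed onto the watermark bytes; the parameters below (r, x0, byte quantisation) are illustrative choices, not the paper's.

```python
# Chaotic keystream from the logistic map x <- r*x*(1-x) with r = 3.99
# (in the chaotic regime), used to XOR-scramble bytes. Decryption is the
# same operation with the same seed x0, which acts as the secret key.

def logistic_keystream(x0, n, r=3.99):
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)  # quantise the state to a byte
    return out

def xor_cipher(data, x0):
    ks = logistic_keystream(x0, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

    Sensitivity to the seed is what makes the scrambling useful: a slightly different x0 yields an entirely different keystream, so the watermark cannot be recovered without the key.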

  9. The Dynamics of Truncated Black Hole Accretion Disks. I. Viscous Hydrodynamic Case

    Energy Technology Data Exchange (ETDEWEB)

    Hogg, J. Drew; Reynolds, Christopher S. [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States)

    2017-07-10

    Truncated accretion disks are commonly invoked to explain the spectro-temporal variability in accreting black holes in both small systems, i.e., state transitions in galactic black hole binaries (GBHBs), and large systems, i.e., low-luminosity active galactic nuclei (LLAGNs). In the canonical truncated disk model of moderately low accretion rate systems, gas in the inner region of the accretion disk occupies a hot, radiatively inefficient phase, which leads to a geometrically thick disk, while the gas in the outer region occupies a cooler, radiatively efficient phase that resides in the standard geometrically thin disk. Observationally, there is strong empirical evidence to support this phenomenological model, but a detailed understanding of the dynamics of truncated disks is lacking. We present a well-resolved viscous, hydrodynamic simulation that uses an ad hoc cooling prescription to drive a thermal instability and, hence, produce the first sustained truncated accretion disk. With this simulation, we perform a study of the dynamics, angular momentum transport, and energetics of a truncated disk. We find that the time variability introduced by the quasi-periodic transition of gas from efficient cooling to inefficient cooling impacts the evolution of the simulated disk. A consequence of the thermal instability is that an outflow is launched from the hot/cold gas interface, which drives large, sub-Keplerian convective cells into the disk atmosphere. The convective cells introduce a viscous θ − ϕ stress that is less than the generic r − ϕ viscous stress component, but greatly influences the evolution of the disk. In the truncated disk, we find that the bulk of the accreted gas is in the hot phase.

  10. A Multistep Extending Truncation Method towards Model Construction of Infinite-State Markov Chains

    Directory of Open Access Journals (Sweden)

    Kemin Wang

    2014-01-01

    Full Text Available The model checking of infinite-state continuous-time Markov chains (CTMCs) inevitably encounters the state explosion problem when constructing the model. Our method obtains a truncated model of the infinite one. To get a truncated model sufficient for model checking of properties expressed in Continuous Stochastic Logic, we propose a multistep extending truncation method for model construction of CTMCs and implement it in the INFAMY model checker. The experimental results show that our method is effective.
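    The extend-and-check idea can be illustrated on an infinite birth-death chain (an M/M/1 queue with lambda < mu), growing the truncation level until the stationary mass at the boundary is negligible. This is an illustrative criterion; the CSL-driven criterion used by INFAMY is more involved.

```python
# Multistep extension of a truncation level for an infinite birth-death
# chain. For an M/M/1 queue the unnormalised stationary weights are
# pi_k ~ rho^k with rho = lam/mu, so the truncated distribution is easy
# to compute and the boundary mass tells us when to stop extending.

def truncated_stationary(lam, mu, n):
    """Stationary distribution of the chain truncated to states 0..n."""
    rho = lam / mu
    w = [rho ** k for k in range(n + 1)]
    z = sum(w)
    return [x / z for x in w]

def extend_until(lam, mu, tol, step=8):
    n = step
    while True:
        pi = truncated_stationary(lam, mu, n)
        if pi[-1] < tol:   # negligible mass at the truncation boundary
            return n, pi
        n += step          # extend the truncation by one more step
```

    Each extension round reuses nothing but the truncation level, mirroring the multistep construction: build, check the boundary, and extend only if the check fails.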

  11. Fast algorithms for transport models. Final report

    International Nuclear Information System (INIS)

    Manteuffel, T.A.

    1994-01-01

    This project has developed a multigrid-in-space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space, which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid-in-space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors, all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state of the art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel-in-angle algorithm was developed. A parallel version of the multilevel-in-angle algorithm has also been developed. Upon first glance, the shifted transport sweep has limited parallelism. Once the right-hand side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial-value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel-in-angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M)).

  12. Family Therapy for the "Truncated" Nuclear Family.

    Science.gov (United States)

    Zuk, Gerald H.

    1980-01-01

    The truncated nuclear family consists of a two-generation group in which conflict has produced a polarization of values. The single-parent family is at special risk. Go-between process enables the therapist to depolarize sharply conflicted values and reduce pathogenic relating. (Author)

  13. Molecular dynamics simulations of lipid bilayers : major artifacts due to truncating electrostatic interactions

    NARCIS (Netherlands)

    Patra, M.; Karttunen, M.E.J.; Hyvönen, M.T.; Falck, E.; Lindqvist, P.; Vattulainen, I.

    2003-01-01

    We study the influence of truncating the electrostatic interactions in a fully hydrated pure dipalmitoylphosphatidylcholine (DPPC) bilayer through 20 ns molecular dynamics simulations. The computations in which the electrostatic interactions were truncated are compared to similar simulations using

  14. A machine learning approach to create blocking criteria for record linkage.

    Science.gov (United States)

    Giang, Phan H

    2015-03-01

Record linkage, a part of data cleaning, is recognized as one of the most expensive steps in data warehousing. Most record linkage (RL) systems employ a strategy of using blocking filters to reduce the number of pairs to be matched. A blocking filter consists of a number of blocking criteria. Until recently, blocking criteria were selected manually by domain experts. This paper proposes a new method to automatically learn efficient blocking criteria for record linkage. Our method addresses the lack of sufficient labeled data for training. Unlike previous works, we do not consider a blocking filter in isolation but in the context of an accompanying matcher which is employed after the blocking filter. We show that given such a matcher, the labels (assigned to record pairs) that are relevant for learning are the labels assigned by the matcher (link/nonlink), not the labels assigned objectively (match/unmatch). This conclusion allows us to generate an unlimited amount of labeled data for training. We formulate the problem of learning a blocking filter as a Disjunctive Normal Form (DNF) learning problem and use the Probably Approximately Correct (PAC) learning theory to guide the development of an algorithm to search for blocking filters. We test the algorithm on a real patient master file of 2.18 million records. The experimental results show that compared with filters obtained by educated guess, the optimal learned filters have comparable recall but reduce runtime by an order of magnitude.
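The blocking idea the abstract describes can be illustrated with a toy sketch (the records, field names, and blocking key below are hypothetical, not taken from the paper's patient master file): a blocking criterion maps each record to a key, and only pairs sharing a key are passed on to the matcher, shrinking the comparison space.

```python
from itertools import combinations

# Toy patient records; all fields and values are illustrative.
records = [
    {"id": 1, "last": "Smith", "birth_year": 1970},
    {"id": 2, "last": "Smyth", "birth_year": 1970},   # likely the same person as id 1
    {"id": 3, "last": "Jones", "birth_year": 1970},
    {"id": 4, "last": "Smith", "birth_year": 1985},
    {"id": 5, "last": "Brown", "birth_year": 1985},
]

def blocking_key(r):
    # One blocking criterion: first letter of last name + birth year.
    return (r["last"][0], r["birth_year"])

# Full comparison space: every pair of records.
all_pairs = list(combinations(records, 2))

# Blocked comparison space: only pairs that share a blocking key.
candidate_pairs = [(a, b) for a, b in all_pairs
                   if blocking_key(a) == blocking_key(b)]
```

Here the filter cuts 10 candidate pairs down to 1 while still retaining the likely match (records 1 and 2); learning which criteria achieve this trade-off at scale is the subject of the paper.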

  15. On the viability of the truncated Israel–Stewart theory in cosmology

    International Nuclear Information System (INIS)

    Shogin, Dmitry; Amundsen, Per Amund; Hervik, Sigbjørn

    2015-01-01

    We apply the causal Israel–Stewart theory of irreversible thermodynamics to model the matter content of the Universe as a dissipative fluid with bulk and shear viscosity. Along with the full transport equations we consider their widely used truncated version. By implementing a dynamical systems approach to Bianchi type IV and V cosmological models with and without cosmological constant, we determine the future asymptotic states of such Universes and show that the truncated Israel–Stewart theory leads to solutions essentially different from the full theory. The solutions of the truncated theory may also manifest unphysical properties. Finally, we find that in the full theory shear viscosity can give a substantial rise to dissipative fluxes, driving the fluid extremely far from equilibrium, where the linear Israel–Stewart theory ceases to be valid. (paper)

  16. Algorithm for Compressing Time-Series Data

    Science.gov (United States)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
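The scheme described above is easy to sketch with NumPy's Chebyshev routines (the signal, block length, and polynomial degree below are illustrative choices, not the instrument's actual parameters): fit one fitting interval of samples with a Chebyshev series and transmit only the coefficients.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# One "fitting interval": 256 samples of a smooth signal.
t = np.linspace(0.0, 2.0 * np.pi, 256)
y = np.sin(t) + 0.5 * np.cos(3.0 * t)

# Map the interval onto [-1, 1], the natural Chebyshev domain, and fit.
u = 2.0 * (t - t[0]) / (t[-1] - t[0]) - 1.0
deg = 20
coeffs = C.chebfit(u, y, deg)      # 21 coefficients replace 256 samples

# Decompression: evaluate the fitted series on the same grid.
y_rec = C.chebval(u, coeffs)
max_err = np.max(np.abs(y - y_rec))     # near-uniform residual over the interval
ratio = len(y) / len(coeffs)            # ~12x compression for this block
```

For a smooth block the residual is tiny and spread nearly uniformly over the interval, which is the "equal error property" the abstract cites.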

  17. Trilateral market coupling. Algorithm appendix

    International Nuclear Information System (INIS)

    2006-03-01

Market Coupling is both a mechanism for matching orders on the exchange and an implicit cross-border capacity allocation mechanism. Market Coupling improves the economic surplus of the coupled markets: the highest purchase orders and the lowest sale orders of the coupled power exchanges are matched, regardless of the area where they have been submitted; matching results depend however on the Available Transfer Capacity (ATC) between the coupled hubs. Market prices and schedules of the day-ahead power exchanges of the several connected markets are simultaneously determined with the use of the Available Transfer Capacity defined by the relevant Transmission System Operators. The transmission capacity is thereby implicitly auctioned and the implicit cost of the transmission capacity from one market to the other is the price difference between the two markets. In particular, if the transmission capacity between two markets is not fully used, there is no price difference between the markets and the implicit cost of the transmission capacity is null. Market coupling relies on the principle that the market with the lowest price exports electricity to the market with the highest price. Two situations may appear: either the Available Transfer Capacity (ATC) is large enough and the prices of both markets are equalized (price convergence), or the ATC is too small and the prices cannot be equalized. The Market Coupling algorithm takes as an input: 1 - The Available Transfer Capacity (ATC) between each area for each flow direction and each Settlement Period of the following day (i.e. for each hour of following day); 2 - The (Block Free) Net Export Curves (NEC) of each market for each hour of the following day, i.e., the difference between the total quantity of Divisible Hourly Bids and the total quantity of Divisible Hourly Offers for each price level. The NEC reflects a market's import or export volume sensitivity to price. 3 - The Block Orders submitted by the participants in
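The two coupling outcomes (price convergence vs. a binding ATC) can be sketched for two hubs with linear Net Export Curves. The function and all numbers below are an illustrative simplification with hypothetical parameters, not the actual market-coupling algorithm, which handles many hubs, hourly curves, and block orders.

```python
def couple_two_markets(slope_a, p0_a, slope_b, p0_b, atc):
    """Couple two hubs with linear net-export curves NEC_i(p) = slope_i * (p - p0_i).

    p0_i is hub i's isolated clearing price; slope_i > 0 means the hub exports
    more as its price rises. Returns (price_a, price_b, flow_a_to_b).
    """
    # Unconstrained common price: NEC_a(p) + NEC_b(p) = 0.
    p = (slope_a * p0_a + slope_b * p0_b) / (slope_a + slope_b)
    flow = slope_a * (p - p0_a)          # export of hub A (negative = import)
    if abs(flow) <= atc:
        return p, p, flow                # ATC not binding: prices converge
    # ATC binding: pin the flow at the limit and clear each hub separately.
    flow = atc if flow > 0 else -atc
    price_a = p0_a + flow / slope_a
    price_b = p0_b - flow / slope_b
    return price_a, price_b, flow
```

With a large ATC the cheap hub exports until prices equalize; with a small ATC the flow saturates at the limit and a price spread remains, which is the implicit cost of the transmission capacity.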

  18. Trilateral market coupling. Algorithm appendix

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-03-15

    Market Coupling is both a mechanism for matching orders on the exchange and an implicit cross-border capacity allocation mechanism. Market Coupling improves the economic surplus of the coupled markets: the highest purchase orders and the lowest sale orders of the coupled power exchanges are matched, regardless of the area where they have been submitted; matching results depend however on the Available Transfer Capacity (ATC) between the coupled hubs. Market prices and schedules of the day-ahead power exchanges of the several connected markets are simultaneously determined with the use of the Available Transfer Capacity defined by the relevant Transmission System Operators. The transmission capacity is thereby implicitly auctioned and the implicit cost of the transmission capacity from one market to the other is the price difference between the two markets. In particular, if the transmission capacity between two markets is not fully used, there is no price difference between the markets and the implicit cost of the transmission capacity is null. Market coupling relies on the principle that the market with the lowest price exports electricity to the market with the highest price. Two situations may appear: either the Available Transfer Capacity (ATC) is large enough and the prices of both markets are equalized (price convergence), or the ATC is too small and the prices cannot be equalized. The Market Coupling algorithm takes as an input: 1 - The Available Transfer Capacity (ATC) between each area for each flow direction and each Settlement Period of the following day (i.e. for each hour of following day); 2 - The (Block Free) Net Export Curves (NEC) of each market for each hour of the following day, i.e., the difference between the total quantity of Divisible Hourly Bids and the total quantity of Divisible Hourly Offers for each price level. The NEC reflects a market's import or export volume sensitivity to price. 3 - The Block Orders submitted by the

  19. Experimental Study on GFRP Surface Cracks Detection Using Truncated-Correlation Photothermal Coherence Tomography

    Science.gov (United States)

    Wang, Fei; Liu, Junyan; Mohummad, Oliullah; Wang, Yang

    2018-04-01

In this paper, truncated-correlation photothermal coherence tomography (TC-PCT) was used as a nondestructive inspection technique to evaluate surface cracks in glass-fiber reinforced polymer (GFRP) composites. A chirped-pulsed signal combining linear frequency modulation and pulse excitation was proposed to detect GFRP composite surface cracks. The basic principle of TC-PCT and the algorithm for extracting thermal-wave signal features are described. Comparison experiments between lock-in thermography, thermal wave radar imaging, and chirped-pulsed photothermal radar for detecting artificial surface cracks in GFRP were carried out. Experimental results illustrate that chirped-pulsed photothermal radar offers a high signal-to-noise ratio in detecting GFRP composite surface cracks. TC-PCT, as a depth-resolved photothermal imaging modality, was employed to enable three-dimensional visualization of GFRP composite surface cracks. The results show that TC-PCT can effectively evaluate the depth of cracks in GFRP composites.

  20. The mixing evolutionary algorithm: independent selection and allocation of trials

    NARCIS (Netherlands)

    C.H.M. van Kemenade

    1997-01-01

    textabstractWhen using an evolutionary algorithm to solve a problem involving building blocks, we have to grow the building blocks and then mix these building blocks to obtain the (optimal) solution. Finding a good balance between the growing and the mixing process is a prerequisite to get a reliable

  1. Modeling of genetic algorithms with a finite population

    NARCIS (Netherlands)

    C.H.M. van Kemenade

    1997-01-01

    textabstractCross-competition between non-overlapping building blocks can strongly influence the performance of evolutionary algorithms. The choice of the selection scheme can have a strong influence on the performance of a genetic algorithm. This paper describes a number of different genetic

  2. Riesz Representation Theorem on Bilinear Spaces of Truncated Laurent Series

    Directory of Open Access Journals (Sweden)

    Sabarinsyah

    2017-06-01

    Full Text Available In this study a generalization of the Riesz representation theorem on non-degenerate bilinear spaces, particularly on spaces of truncated Laurent series, was developed. It was shown that any linear functional on a non-degenerate bilinear space is representable by a unique element of the space if and only if its kernel is closed. Moreover an explicit equivalent condition can be identified for the closedness property of the kernel when the bilinear space is a space of truncated Laurent series.

  3. Adaptive Step Size Gradient Ascent ICA Algorithm for Wireless MIMO Systems

    Directory of Open Access Journals (Sweden)

    Zahoor Uddin

    2018-01-01

    Full Text Available Independent component analysis (ICA) is a technique of blind source separation (BSS) used for separating mixed received signals. ICA algorithms are classified into adaptive and batch algorithms. Adaptive algorithms perform well in time-varying scenarios but have high computational complexity, while batch algorithms have better separation performance in quasistatic channels with low computational complexity. Among batch algorithms, the gradient-based ICA algorithms perform well, but step-size selection is critical in these algorithms. In this paper, an adaptive step size gradient ascent ICA (ASS-GAICA) algorithm is presented. The proposed algorithm is free from selection of the step size parameter, with improved convergence and separation performance. Different performance evaluation criteria are used to verify the effectiveness of the proposed algorithm. Performance of the proposed algorithm is compared with the FastICA and optimum block adaptive ICA (OBAICA) algorithms for quasistatic and time-varying wireless channels. Simulation is performed over quadrature amplitude modulation (QAM) and binary phase shift keying (BPSK) signals. Results show that the proposed algorithm outperforms the FastICA and OBAICA algorithms for a wide range of signal-to-noise ratios (SNR) and input data block lengths.
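As a reference point for the gradient family the abstract discusses, here is a minimal fixed-step gradient-ascent kurtosis ICA on synthetic whitened mixtures. It is not the paper's ASS-GAICA (the adaptive step-size rule is not reproduced here), and all signals, the mixing matrix, and the step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Two super-Gaussian (Laplacian) sources mixed by a fixed matrix.
s = rng.laplace(size=(2, n))
A = np.array([[0.8, 0.4],
              [0.3, 0.9]])
x = A @ s                                   # observed mixtures

# Whitening: decorrelate and normalize the mixtures.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(x @ x.T / n)
z = E @ np.diag(d ** -0.5) @ E.T @ x

# Fixed-step gradient ascent on the kurtosis of w^T z, with w kept unit-norm.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
mu = 0.1                                    # fixed step size (the paper adapts this)
for _ in range(500):
    y = w @ z
    grad = (z * y ** 3).mean(axis=1) - 3.0 * w   # kurtosis gradient for whitened data
    w += mu * grad
    w /= np.linalg.norm(w)

y = w @ z                                   # recovered source (up to sign and scale)
```

After convergence the projection w extracts one source almost exactly; the sensitivity of this loop to `mu` is precisely the motivation for adaptive step-size schemes.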

  4. Modified Three-Step Search Block Matching Motion Estimation and Weighted Finite Automata based Fractal Video Compression

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2017-08-01

    Full Text Available The major challenge with fractal image/video coding techniques is that they require long encoding times, so reducing the encoding time remains the key research problem in fractal coding. Block matching motion estimation algorithms are used to reduce the computations performed in the process of encoding. The objective of the proposed work is to develop an approach for video coding using a modified three step search (MTSS) block matching algorithm and weighted finite automata (WFA) coding, with a specific focus on reducing the encoding time. The MTSS block matching algorithm is used for computing motion vectors between two frames, i.e., the displacement of pixels, and WFA is used for the coding as it behaves like Fractal Coding (FC). WFA represents an image (frame) or motion compensated prediction error based on the fractal idea that the image has self-similarity in itself. The self-similarity is sought from the symmetry of an image, so the encoding algorithm divides an image into multi-levels of quad-tree segmentations and creates an automaton from the sub-images. The proposed MTSS block matching algorithm is based on a combination of rectangular and hexagonal search patterns and is compared with the existing New Three-Step Search (NTSS), Three-Step Search (TSS), and Efficient Three-Step Search (ETSS) block matching estimation algorithms. The performance of the proposed MTSS block matching algorithm is evaluated on the basis of performance evaluation parameters, i.e., mean absolute difference (MAD) and average search points required per frame; MAD is used as the block distortion measure (BDM). Finally, the developed approaches, namely, MTSS and WFA, MTSS and FC, and plane FC (applied to every frame), are compared with each other. The experiments are carried out on standard uncompressed video databases, namely, akiyo, bus, mobile, suzie, traffic, football, soccer, ice etc. Developed

  5. Pressure-sensitive paint on a truncated cone in hypersonic flow at incidences

    International Nuclear Information System (INIS)

    Yang, L.; Erdem, E.; Zare-Behtash, H.; Kontis, K.; Saravanan, S.

    2012-01-01

    Highlights: ► Global pressure map over the truncated cone is obtained at various incidence angles in Mach 5 flow. ► Successful application of AA-PSP in hypersonic flow expands the operation area of this technique. ► AA-PSP reveals a complex three-dimensional pattern which is difficult for transducers to obtain. ► Quantitative data provides strong correlation with colour Schlieren and oil flow results. ► High spatial resolution pressure mappings identify small scale vortices and flow separation. - Abstract: The flow over a truncated cone is a classical and fundamental problem for aerodynamic research due to its three-dimensional and complicated characteristics. The flow is made more complex when examining high angles of incidence. Recently these types of flows have drawn more attention for the purposes of drag reduction in supersonic/hypersonic flows. In the present study the flow over a truncated cone at various incidences was experimentally investigated in a Mach 5 flow with a unit Reynolds number of 13.5 × 10⁶ m⁻¹. The cone semi-apex angle is 15° and the truncation ratio (truncated length/cone length) is 0.5. The incidence of the model varied from −12° to 12° with 3° intervals relative to the freestream direction. The external flow around the truncated cone was visualised by colour Schlieren photography, while the surface flow pattern was revealed using the oil flow method. The surface pressure distribution was measured using the anodized aluminium pressure-sensitive paint (AA-PSP) technique. Both top and side views of the pressure distribution on the model surface were acquired at various incidences. AA-PSP showed high pressure sensitivity and captured the complicated flow structures, which correlated well with the colour Schlieren and oil flow visualisation results.

  6. Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.

    Science.gov (United States)

    Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J

    2018-02-15

    Electrically active brain regions can be located by applying MUltiple SIgnal Classification (MUSIC) to magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm, which prevents accurate estimation of the true number of brain-signal sources. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time-courses, and initial estimate for the dimension of the signal space. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. A convergence analysis for a sweeping preconditioner for block tridiagonal systems of linear equations

    KAUST Repository

    Bagci, Hakan; Pasciak, Joseph E.; Sirenko, Kostyantyn

    2014-01-01

    We study sweeping preconditioners for symmetric and positive definite block tridiagonal systems of linear equations. The algorithm provides an approximate inverse that can be used directly or in a preconditioned iterative scheme. These algorithms are based on replacing the Schur complements appearing in a block Gaussian elimination direct solve by hierarchical matrix approximations with reduced off-diagonal ranks. This involves developing low rank hierarchical approximations to inverses. We first provide a convergence analysis for the algorithm for reduced rank hierarchical inverse approximation. These results are then used to prove convergence and preconditioning estimates for the resulting sweeping preconditioner.
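The block Gaussian elimination mentioned above can be sketched with exact Schur complements (small dense NumPy blocks). The sweeping preconditioner replaces these exact complements with low-rank hierarchical approximations, which this illustrative sketch does not attempt; the function name and test sizes are ours.

```python
import numpy as np

def block_thomas(A, B, C, d):
    """Solve a block tridiagonal system by block Gaussian elimination.

    B[i]: diagonal blocks, A[i]: sub-diagonal blocks (A[0] unused),
    C[i]: super-diagonal blocks (C[-1] unused), d[i]: right-hand-side blocks.
    The exact Schur complements S[i] formed here are what a sweeping
    preconditioner approximates with reduced off-diagonal ranks.
    """
    n = len(B)
    S = [B[0]]                                   # running Schur complements
    g = [d[0]]
    for i in range(1, n):                        # forward elimination
        F = A[i] @ np.linalg.inv(S[i - 1])
        S.append(B[i] - F @ C[i - 1])
        g.append(d[i] - F @ g[i - 1])
    x = [None] * n                               # back substitution
    x[-1] = np.linalg.solve(S[-1], g[-1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(S[i], g[i] - C[i] @ x[i + 1])
    return np.concatenate(x)
```

For well-conditioned (e.g., diagonally dominant) blocks the result matches a dense solve of the assembled system.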

  8. A convergence analysis for a sweeping preconditioner for block tridiagonal systems of linear equations

    KAUST Repository

    Bagci, Hakan

    2014-11-11

    We study sweeping preconditioners for symmetric and positive definite block tridiagonal systems of linear equations. The algorithm provides an approximate inverse that can be used directly or in a preconditioned iterative scheme. These algorithms are based on replacing the Schur complements appearing in a block Gaussian elimination direct solve by hierarchical matrix approximations with reduced off-diagonal ranks. This involves developing low rank hierarchical approximations to inverses. We first provide a convergence analysis for the algorithm for reduced rank hierarchical inverse approximation. These results are then used to prove convergence and preconditioning estimates for the resulting sweeping preconditioner.

  9. Nearest Neighbour Corner Points Matching Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Changlong

    2015-01-01

    Full Text Available Accurate corner detection plays an important role in camera calibration. To address the instability and inaccuracy of existing corner detection algorithms, a nearest-neighbour corner-matching detection algorithm is proposed. First, it dilates the binarized image of the photographed pictures, then searches for and retains the quadrilateral outlines in the image. Second, blocks consistent with chessboard corners are grouped into a class; if a class contains too many blocks, it is deleted, otherwise the block is added, and the midpoint of the two vertex coordinates is taken as the rough position of the corner. Finally, the positions of the corners are located precisely. Experimental results show that the algorithm has clear advantages in the accuracy and validity of corner detection, and it can provide a reliable basis for camera calibration in traffic-accident measurement.

  10. On truncations of the exact renormalization group

    CERN Document Server

    Morris, T R

    1994-01-01

    We investigate the Exact Renormalization Group (ERG) description of (Z_2-invariant) one-component scalar field theory, in the approximation in which all momentum dependence is discarded in the effective vertices. In this context we show how one can perform a systematic search for non-perturbative continuum limits without making any assumption about the form of the lagrangian. Concentrating on the non-perturbative three dimensional Wilson fixed point, we then show that the sequence of truncations n = 2, 3, …, obtained by expanding about the field φ = 0 and discarding all powers φ^{2n+2} and higher, yields solutions that at first converge to the answer obtained without truncation, but then cease to further converge beyond a certain point. No completely reliable method exists to reject the many spurious solutions that are also found. These properties are explained in terms of the analytic behaviour of the untruncated solutions -- which we describe in some detail.

  11. Resonant Excitation of a Truncated Metamaterial Cylindrical Shell by a Thin Wire Monopole

    DEFF Research Database (Denmark)

    Kim, Oleksiy S.; Erentok, Aycan; Breinbjerg, Olav

    2009-01-01

    A truncated metamaterial cylindrical shell excited by a thin wire monopole is investigated using the integral equation technique as well as the finite element method. Simulations reveal a strong field singularity at the edge of the truncated cylindrical shell, which critically affects the matching...

  12. Squeezing in multi-mode nonlinear optical state truncation

    International Nuclear Information System (INIS)

    Said, R.S.; Wahiddin, M.R.B.; Umarov, B.A.

    2007-01-01

    In this Letter, we show that multi-mode qubit states produced via nonlinear optical state truncation driven by classical external pumpings exhibit squeezing. We restrict our discussion to the two- and three-mode cases

  13. Robust and Adaptive Block Tracking Method Based on Particle Filter

    Directory of Open Access Journals (Sweden)

    Bin Sun

    2015-10-01

    Full Text Available In the field of video analysis and processing, object tracking is attracting more and more attention, especially in traffic management, digital surveillance and so on. However, problems such as abrupt object motion, occlusion and complex target structures bring difficulties to academic study and engineering application. In this paper, a fragments-based tracking method using a block relationship coefficient is proposed. In this method, we use a particle filter algorithm, and the object region is initially divided into blocks. The contribution of this method is that object features are not extracted from a single block alone; the relationship between the current block and its neighbor blocks is extracted to describe the variation of the block. Each block is weighted according to the block relationship coefficient when the block votes for the best-matched region in the next frame. This method makes full use of the relationship between blocks. The experimental results demonstrate that our method provides good performance under occlusion and abrupt posture variation.

  14. BCYCLIC: A parallel block tridiagonal matrix cyclic solver

    Science.gov (United States)

    Hirshman, S. P.; Perumalla, K. S.; Lynch, V. E.; Sanchez, R.

    2010-09-01

    A block tridiagonal matrix is factored with minimal fill-in using a cyclic reduction algorithm that is easily parallelized. Storage of the factored blocks allows the application of the inverse to multiple right-hand sides which may not be known at factorization time. Scalability with the number of block rows is achieved with cyclic reduction, while scalability with the block size is achieved using multithreaded routines (OpenMP, GotoBLAS) for block matrix manipulation. This dual scalability is a noteworthy feature of this new solver, as well as its ability to efficiently handle arbitrary (non-powers-of-2) block row and processor numbers. Comparison with a state-of-the-art parallel sparse solver is presented. It is expected that this new solver will allow many physical applications to optimally use the parallel resources on current supercomputers. Example usage of the solver in magneto-hydrodynamic (MHD), three-dimensional equilibrium solvers for high-temperature fusion plasmas is cited.
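A scalar sketch of the cyclic reduction at the heart of BCYCLIC may help fix ideas; the real solver applies the same recurrences to blocks and handles arbitrary row counts, whereas this illustrative version uses scalar entries and restricts n to 2**k − 1 for simplicity.

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by cyclic reduction (n must be 2**k - 1).

    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal (c[-1] unused),
    d: right-hand side. Each forward level eliminates every other remaining
    unknown; all eliminations within a level are independent, which is what
    makes the method parallel.
    """
    a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
    n = len(b)
    a[0] = 0.0
    c[-1] = 0.0
    levels = []
    h = 1
    while h < n:                                   # forward reduction
        for i in range(2 * h - 1, n, 2 * h):
            al = a[i] / b[i - h]
            a_new = -al * a[i - h]
            b_new = b[i] - al * c[i - h]
            d_new = d[i] - al * d[i - h]
            c_new = 0.0
            if i + h < n:
                ga = c[i] / b[i + h]
                b_new -= ga * a[i + h]
                d_new -= ga * d[i + h]
                c_new = -ga * c[i + h]
            a[i], b[i], c[i], d[i] = a_new, b_new, c_new, d_new
        levels.append(h)
        h *= 2
    x = np.zeros(n)
    for h in reversed(levels):                     # back substitution, level by level
        for i in range(h - 1, n, 2 * h):
            xi = d[i]
            if i - h >= 0:
                xi -= a[i] * x[i - h]
            if i + h < n:
                xi -= c[i] * x[i + h]
            x[i] = xi / b[i]
    return x
```

The forward and backward passes each take O(log n) levels of independent updates, matching the parallel depth the abstract describes.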

  15. truncSP: An R Package for Estimation of Semi-Parametric Truncated Linear Regression Models

    Directory of Open Access Journals (Sweden)

    Maria Karlsson

    2014-05-01

    Full Text Available Problems with truncated data occur in many areas, complicating estimation and inference. Regarding linear regression models, the ordinary least squares estimator is inconsistent and biased for these types of data and is therefore unsuitable for use. Alternative estimators, designed for the estimation of truncated regression models, have been developed. This paper presents the R package truncSP. The package contains functions for the estimation of semi-parametric truncated linear regression models using three different estimators: the symmetrically trimmed least squares, quadratic mode, and left truncated estimators, all of which have been shown to have good asymptotic and finite sample properties. The package also provides functions for the analysis of the estimated models. Data from the environmental sciences are used to illustrate the functions in the package.
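The bias the abstract refers to is easy to demonstrate by simulation (all numbers below are illustrative): fitting OLS to a sample truncated from below attenuates the estimated slope, because at low x only observations with unusually large errors survive the truncation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# True model: y = 1 + 2x + e, with e ~ N(0, 4^2).
x = rng.uniform(0.0, 10.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 4.0, n)

slope_full = np.polyfit(x, y, 1)[0]       # close to the true slope of 2

# Truncated sampling: (x, y) is observed only when y exceeds a threshold.
keep = y > 12.0
slope_trunc = np.polyfit(x[keep], y[keep], 1)[0]   # attenuated toward 0
```

This is the inconsistency the specialized estimators in truncSP are designed to correct.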

  16. Motion of isolated open vortex filaments evolving under the truncated local induction approximation

    Science.gov (United States)

    Van Gorder, Robert A.

    2017-11-01

    The study of nonlinear waves along open vortex filaments continues to be an area of active research. While the local induction approximation (LIA) is attractive due to locality compared with the non-local Biot-Savart formulation, it has been argued that LIA appears too simple to model some relevant features of Kelvin wave dynamics, such as Kelvin wave energy transfer. Such transfer of energy is not feasible under the LIA due to integrability, so in order to obtain a non-integrable model, a truncated LIA, which breaks the integrability of the classical LIA, has been proposed as a candidate model with which to study such dynamics. Recently Laurie et al. ["Interaction of Kelvin waves and nonlocality of energy transfer in superfluids," Phys. Rev. B 81, 104526 (2010)] derived truncated LIA systematically from Biot-Savart dynamics. The focus of the present paper is to study the dynamics of a section of common open vortex filaments under the truncated LIA dynamics. We obtain the analog of helical, planar, and more general filaments which rotate without a change in form in the classical LIA, demonstrating that while quantitative differences do exist, qualitatively such solutions still exist under the truncated LIA. Conversely, solitons and breather solutions found under the LIA should not be expected under the truncated LIA, as the existence of such solutions relies on the existence of an infinite number of conservation laws which is violated due to loss of integrability. On the other hand, similarity solutions under the truncated LIA can be quite different to their counterparts found for the classical LIA, as they must obey a t^{1/3}-type scaling rather than the t^{1/2}-type scaling commonly found in the LIA and Biot-Savart dynamics. This change in similarity scaling means that Kelvin waves are radiated at a slower rate from vortex kinks formed after reconnection events. The loss of soliton solutions and the difference in similarity scaling indicate that dynamics emergent under

  17. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

    Frigo, Leiserson, Prokop and Ramachandran in 1999 introduced the ideal-cache model as a formal model of computation for developing algorithms in environments with multiple levels of caching, and coined the terminology of cache-oblivious algorithms. Cache-oblivious algorithms are described...... as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The result is algorithms that automatically...... apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al....

  18. The combination of i-leader truncation and gemcitabine improves oncolytic adenovirus efficacy in an immunocompetent model.

    Science.gov (United States)

    Puig-Saus, C; Laborda, E; Rodríguez-García, A; Cascalló, M; Moreno, R; Alemany, R

    2014-02-01

    Adenovirus (Ad) i-leader protein is a small protein of unknown function. The C-terminus truncation of the i-leader protein increases Ad release from infected cells and cytotoxicity. In the current study, we use the i-leader truncation to enhance the potency of an oncolytic Ad. In vitro, an i-leader truncated oncolytic Ad is released faster into the supernatant of infected cells, generates larger plaques, and is more cytotoxic in both human and Syrian hamster cell lines. In mice bearing human tumor xenografts, the i-leader truncation enhances oncolytic efficacy. However, in a Syrian hamster pancreatic tumor model, which is immunocompetent and less permissive to human Ad, antitumor efficacy is only observed when the i-leader truncated oncolytic Ad, but not the non-truncated version, is combined with gemcitabine. This synergistic effect observed in the Syrian hamster model was not seen in vitro or in immunodeficient mice bearing the same pancreatic hamster tumors, suggesting a role of the immune system in this synergism. These results highlight the interest of the i-leader C-terminus truncation because it enhances the antitumor potency of an oncolytic Ad and provides synergistic effects with gemcitabine in the presence of an immune competent system.

  19. Local and accumulated truncation errors in a class of perturbative numerical methods

    International Nuclear Information System (INIS)

    Adam, G.; Adam, S.; Corciovei, A.

    1980-01-01

    The approach to the solution of the radial Schroedinger equation using piecewise perturbation theory with a step-function reference potential leads to a class of powerful numerical methods, conveniently abridged as SF-PNM(K), where K denotes the order at which the perturbation series was truncated. In the present paper rigorous results are given for the local truncation errors, and bounds are derived for the accumulated truncation errors associated with SF-PNM(K), K = 0, 1, 2. They allow us to establish the smoothness conditions which have to be fulfilled by the potential in order to ensure safe use of SF-PNM(K), and to understand the experimentally observed behaviour of the numerical results with the step size h. (author)

  20. LSB Based Quantum Image Steganography Algorithm

    Science.gov (United States)

    Jiang, Nan; Zhao, Na; Wang, Luo

    2016-01-01

    Quantum steganography is the technique of hiding a secret message in quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed based on the novel enhanced quantum representation (NEQR) of quantum images. One algorithm is plain LSB, which uses the message bits to substitute for the pixels' LSBs directly. The other is block LSB, which embeds a message bit into a number of pixels that belong to one image block. The extraction circuits can recover the secret message using only the stego cover. Analysis and simulation-based experimental results demonstrate that the invisibility is good, and that the balance between capacity and robustness can be adjusted according to the needs of applications.
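
    The LSB substitution at the heart of both algorithms has a simple classical analogue. The sketch below is a hypothetical plain-Python illustration of LSB embedding and blind extraction on classical pixel values; it is not the quantum NEQR circuit construction described in the paper.

```python
def embed_lsb(pixels, bits):
    """Embed message bits by replacing each pixel's least significant bit."""
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b  # clear the LSB, then set it to b
    return stego

def extract_lsb(pixels, n_bits):
    """Blind extraction: read the LSBs of the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [200, 13, 77, 254, 9, 128]
message = [1, 0, 1, 1]
stego = embed_lsb(cover, message)
```

    Because only the lowest bit of each pixel can change, the stego image differs from the cover by at most one grey level per pixel, which is what gives LSB methods their invisibility.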

  1. Frequency interval balanced truncation of discrete-time bilinear systems

    DEFF Research Database (Denmark)

    Jazlan, Ahmad; Sreeram, Victor; Shaker, Hamid Reza

    2016-01-01

    This paper presents the development of a new model reduction method for discrete-time bilinear systems based on the balanced truncation framework. In many model reduction applications, it is advantageous to analyze the characteristics of the system with emphasis on particular frequency intervals...... are the solution to a pair of new generalized Lyapunov equations. The conditions for solvability of these new generalized Lyapunov equations are derived and a numerical solution method for solving these generalized Lyapunov equations is presented. Numerical examples which illustrate the usage of the new...... generalized frequency interval controllability and observability gramians as part of the balanced truncation framework are provided to demonstrate the performance of the proposed method....

  2. Quantum Computations: Fundamentals and Algorithms

    International Nuclear Information System (INIS)

    Duplij, S.A.; Shapoval, I.I.

    2007-01-01

    Basic concepts of quantum information theory, the principles of quantum computation, and the possibility of building on this basis a device unique in its computational power and operating principle, the quantum computer, are considered. The main blocks of quantum logic and schemes for implementing quantum computations are presented, along with some effective quantum algorithms known today that are intended to realize the advantages of quantum computation over classical computation. Among them a special place is taken by Shor's algorithm for number factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on quantum computer stability, and methods of quantum error correction are described.

  3. Clinically oriented device programming in bradycardia patients: part 2 (atrioventricular blocks and neurally mediated syncope). Proposals from AIAC (Italian Association of Arrhythmology and Cardiac Pacing).

    Science.gov (United States)

    Palmisano, Pietro; Ziacchi, Matteo; Biffi, Mauro; Ricci, Renato P; Landolina, Maurizio; Zoni-Berisso, Massimo; Occhetta, Eraldo; Maglia, Giampiero; Botto, Gianluca; Padeletti, Luigi; Boriani, Giuseppe

    2018-04-01

    The purpose of this two-part consensus document is to provide specific suggestions (based on an extensive literature review) on appropriate pacemaker settings in relation to patients' clinical features. In part 2, criteria for pacemaker choice and programming in atrioventricular blocks and neurally mediated syncope are proposed. Atrioventricular blocks can be paroxysmal or persistent, isolated or associated with sinus node disease. Neurally mediated syncope can be related to carotid sinus syndrome or cardioinhibitory vasovagal syncope. In sinus rhythm with persistent atrioventricular block, we consider it appropriate to activate mode-switch algorithms and algorithms for auto-adaptive management of the ventricular pacing output. If the atrioventricular block is paroxysmal, in addition to the algorithms mentioned above, algorithms to maximize intrinsic atrioventricular conduction should be activated. When sinus node disease is associated with atrioventricular block, activation of the rate-responsive function is appropriate in patients with chronotropic incompetence. In permanent atrial fibrillation with atrioventricular block, algorithms for auto-adaptive management of the ventricular pacing output should be activated; if the atrioventricular block is persistent, activation of the rate-responsive function is appropriate. In carotid sinus syndrome, adequate rate hysteresis should be programmed. In vasovagal syncope, specialized sensing and pacing algorithms designed for reflex syncope prevention should be activated.

  4. Computational strategies for the automated design of RNA nanoscale structures from building blocks using NanoTiler.

    Science.gov (United States)

    Bindewald, Eckart; Grunewald, Calvin; Boyle, Brett; O'Connor, Mary; Shapiro, Bruce A

    2008-10-01

    One approach to designing RNA nanoscale structures is to use known RNA structural motifs such as junctions, kissing loops or bulges and to construct a molecular model by connecting these building blocks with helical struts. We previously developed an algorithm for detecting internal loops, junctions and kissing loops in RNA structures. Here we present algorithms for automating or assisting many of the steps that are involved in creating RNA structures from building blocks: (1) assembling building blocks into nanostructures using either a combinatorial search or constraint satisfaction; (2) optimizing RNA 3D ring structures to improve ring closure; (3) sequence optimisation; (4) creating a unique non-degenerate RNA topology descriptor. This effectively creates a computational pipeline for generating molecular models of RNA nanostructures and more specifically RNA ring structures with optimized sequences from RNA building blocks. We show several examples of how the algorithms can be utilized to generate RNA tecto-shapes.

  5. Computational strategies for the automated design of RNA nanoscale structures from building blocks using NanoTiler☆

    Science.gov (United States)

    Bindewald, Eckart; Grunewald, Calvin; Boyle, Brett; O’Connor, Mary; Shapiro, Bruce A.

    2013-01-01

    One approach to designing RNA nanoscale structures is to use known RNA structural motifs such as junctions, kissing loops or bulges and to construct a molecular model by connecting these building blocks with helical struts. We previously developed an algorithm for detecting internal loops, junctions and kissing loops in RNA structures. Here we present algorithms for automating or assisting many of the steps that are involved in creating RNA structures from building blocks: (1) assembling building blocks into nanostructures using either a combinatorial search or constraint satisfaction; (2) optimizing RNA 3D ring structures to improve ring closure; (3) sequence optimisation; (4) creating a unique non-degenerate RNA topology descriptor. This effectively creates a computational pipeline for generating molecular models of RNA nanostructures and more specifically RNA ring structures with optimized sequences from RNA building blocks. We show several examples of how the algorithms can be utilized to generate RNA tecto-shapes. PMID:18838281

  6. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran Kumar; Mai, Paul Martin

    2016-01-01

    Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.
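
    To make the truncated exponential law concrete, the following sketch evaluates its density: an exponential law renormalized so that all probability mass falls below a physical maximum. The mean and truncation point used here are illustrative values, not parameters fitted to SRCMOD.

```python
import math

def truncated_exp_pdf(x, lam, x_max):
    """Density of an exponential law (mean lam) truncated to [0, x_max].
    lam and x_max here are illustrative, not values fitted to SRCMOD."""
    if not 0.0 <= x <= x_max:
        return 0.0
    # renormalize so the density integrates to 1 over [0, x_max]
    norm = 1.0 - math.exp(-x_max / lam)
    return math.exp(-x / lam) / (lam * norm)
```

    The truncation raises the density everywhere below x_max relative to the untruncated exponential, reflecting the constraint that slip cannot exceed the physical maximum.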

  7. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran K. S.

    2016-07-13

    Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes.We show the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserverd.

  8. High-yield water-based synthesis of truncated silver nanocubes

    International Nuclear Information System (INIS)

    Chang, Yun-Min; Lu, I-Te; Chen, Chih-Yuan; Hsieh, Yu-Chi; Wu, Pu-Wei

    2014-01-01

    Highlights: • Development of a water-based formula to fabricate truncated Ag nanocubes. • The sample exhibits (1 0 0), (1 1 0), and (1 1 1) on the facets, edges, and corners. • The sample shows three characteristic absorption peaks due to plasma resonance. -- Abstract: A high-yield water-based hydrothermal synthesis was developed using silver nitrate, ammonia, glucose, and cetyltrimethylammonium bromide (CTAB) as precursors to synthesize truncated silver nanocubes with uniform sizes and in large quantities. With a fixed CTAB concentration, truncated silver nanocubes with sizes of 49.3 ± 4.1 nm were produced when the molar ratio of glucose/silver cation was maintained at 0.1. The sample exhibited (1 0 0), (1 1 0), and (1 1 1) planes on the facets, edges, and corners, respectively. In contrast, with a slightly larger glucose/silver cation ratio of 0.35, well-defined nanocubes with sizes of 70.9 ± 3.8 nm were observed with the (1 0 0) plane on six facets. When the ratio was further increased to 1.5, excess reduction of silver cations facilitated the simultaneous formation of nanoparticles with cubic, spherical, and irregular shapes. Consistent results were obtained from transmission electron microscopy, scanning electron microscopy, X-ray diffraction, and UV–visible absorption measurements.

  9. High-yield water-based synthesis of truncated silver nanocubes

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Yun-Min; Lu, I-Te; Chen, Chih-Yuan; Hsieh, Yu-Chi; Wu, Pu-Wei, E-mail: ppwu@mail.nctu.edu.tw

    2014-02-15

    Highlights: • Development of a water-based formula to fabricate truncated Ag nanocubes. • The sample exhibits (1 0 0), (1 1 0), and (1 1 1) on the facets, edges, and corners. • The sample shows three characteristic absorption peaks due to plasma resonance. -- Abstract: A high-yield water-based hydrothermal synthesis was developed using silver nitrate, ammonia, glucose, and cetyltrimethylammonium bromide (CTAB) as precursors to synthesize truncated silver nanocubes with uniform sizes and in large quantities. With a fixed CTAB concentration, truncated silver nanocubes with sizes of 49.3 ± 4.1 nm were produced when the molar ratio of glucose/silver cation was maintained at 0.1. The sample exhibited (1 0 0), (1 1 0), and (1 1 1) planes on the facets, edges, and corners, respectively. In contrast, with a slightly larger glucose/silver cation ratio of 0.35, well-defined nanocubes with sizes of 70.9 ± 3.8 nm were observed with the (1 0 0) plane on six facets. When the ratio was further increased to 1.5, excess reduction of silver cations facilitated the simultaneous formation of nanoparticles with cubic, spherical, and irregular shapes. Consistent results were obtained from transmission electron microscopy, scanning electron microscopy, X-ray diffraction, and UV–visible absorption measurements.

  10. Intersection spaces, spatial homology truncation, and string theory

    CERN Document Server

    Banagl, Markus

    2010-01-01

    Intersection cohomology assigns groups which satisfy a generalized form of Poincaré duality over the rationals to a stratified singular space. The present monograph introduces a method that assigns to certain classes of stratified spaces cell complexes, called intersection spaces, whose ordinary rational homology satisfies generalized Poincaré duality. The cornerstone of the method is a process of spatial homology truncation, whose functoriality properties are analyzed in detail. The material on truncation is autonomous and may be of independent interest to homotopy theorists. The cohomology of intersection spaces is not isomorphic to intersection cohomology and possesses algebraic features such as perversity-internal cup-products and cohomology operations that are not generally available for intersection cohomology. A mirror-symmetric interpretation, as well as applications to string theory concerning massless D-branes arising in type IIB theory during a Calabi-Yau conifold transition, are discussed.

  11. The Apparent Lack of Lorentz Invariance in Zero-Point Fields with Truncated Spectra

    Directory of Open Access Journals (Sweden)

    Daywitt W. C.

    2009-01-01

    The integrals that describe the expectation values of the zero-point quantum-field-theoretic vacuum state are semi-infinite, as are the integrals for the stochastic electrodynamic vacuum. The unbounded upper limit to these integrals leads in turn to infinite energy densities and renormalization masses. A number of models have been put forward to truncate the integrals so that these densities and masses are finite. Unfortunately the truncation apparently destroys the Lorentz invariance of the integrals. This note argues that the integrals are naturally truncated by the graininess of the negative-energy Planck vacuum state from which the zero-point vacuum arises, and are thus automatically Lorentz invariant.

  12. Evaluation of simplified two source model for relative electron output factor of irregular block shape

    International Nuclear Information System (INIS)

    Lo, Y. E.; Yi, B. Y.; Ahn, S. D.; Kim, J. H.; Lee, S. W.; Choi, E. K.

    2002-01-01

    A practical calculation algorithm which calculates the relative output factor (ROF) for irregularly shaped electron fields has been developed, and its accuracy and effectiveness were evaluated by comparing calculations with measurements for irregular fields used in the clinic. The algorithm assumes that the electron dose can be expressed as the sum of a primary source component and a component scattered from the shielding block. The primary source is assumed to have a Gaussian distribution, while the scattered component follows the inverse square law. The depth and angular dependence of the primary and scattered components are ignored to maximize practicality by reducing the number of parameters. The electron dose can then be calculated with three parameters: the effective source distance, the variance of the primary source, and the scattering power of the block. The coefficients are obtained from square-block measurements and confirmed on rectangular and irregularly shaped fields. The results showed less than 1.5% difference between calculation and measurement. The algorithm proved practical, since one can acquire the full parameter set with minimal measurements and generate accurate results within the clinically acceptable range.
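
    A minimal numerical sketch of such a two-source model is given below, assuming a Gaussian primary source integrated over a square opening plus a single inverse-square scatter term. The function, parameter names, and the 100 cm reference distance are all illustrative; in the paper the coefficients are fitted from square-block measurements.

```python
import math

def relative_output(half_width, eff_ssd, sigma, scatter_power):
    """Toy two-source electron output model.

    primary: fraction of a 2D Gaussian source (std dev sigma, in cm at the
             block plane) seen through a square opening of half_width cm.
    scatter: block-scatter term weighted by the inverse square of the
             effective source distance eff_ssd (cm), relative to 100 cm.
    """
    primary = math.erf(half_width / (math.sqrt(2.0) * sigma)) ** 2
    scatter = scatter_power * (100.0 / eff_ssd) ** 2
    return primary + scatter
```

    As the opening grows, the primary term saturates at 1 and the output approaches 1 plus the scatter contribution, mirroring the large-field limit of such models.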

  13. Improved 3-D turbomachinery CFD algorithm

    Science.gov (United States)

    Janus, J. Mark; Whitfield, David L.

    1988-01-01

    The building blocks of a computer algorithm developed for the time-accurate flow analysis of rotating machines are described. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. This block LU implicit numerical scheme possesses apparent unconditional stability. Multi-block composite gridding is used to partition the field into a specified arrangement in an orderly fashion. Block interfaces, including dynamic interfaces, are treated so as to mimic interior block communication. Special attention is given to the reduction of in-core memory requirements by placing the burden on secondary storage media. Broad applicability is implied, although the results presented are restricted to an even blade count configuration. Several other configurations are presently under investigation, the results of which will appear in subsequent publications.

  14. Block Tridiagonal Matrices in Electronic Structure Calculations

    DEFF Research Database (Denmark)

    Petersen, Dan Erik

    in the Landauer–Büttiker ballistic transport regime. These calculations concentrate on determining the so– called Green’s function matrix, or portions thereof, which is the inverse of a block tridiagonal general complex matrix. To this end, a sequential algorithm based on Gaussian elimination named Sweeps...

  15. Reduction of variable-truncation artifacts from beam occlusion during in situ x-ray tomography

    DEFF Research Database (Denmark)

    Borg, Leise; Jørgensen, Jakob Sauer; Frikel, Jürgen

    2017-01-01

    Many in situ x-ray tomography studies require experimental rigs which may partially occlude the beam and cause parts of the projection data to be missing. In a study of fluid flow in porous chalk using a percolation cell with four metal bars drastic streak artifacts arise in the filtered...... and artifact-reduction methods are designed in context of FBP reconstruction motivated by computational efficiency practical for large, real synchrotron data. While a specific variable-truncation case is considered, the proposed methods can be applied to general data cut-offs arising in different in situ x-ray...... backprojection (FBP) reconstruction at certain orientations. Projections with non-trivial variable truncation caused by the metal bars are the source of these variable-truncation artifacts. To understand the artifacts a mathematical model of variable-truncation data as a function of metal bar radius and distance...

  16. Propagation of a general-type beam through a truncated fractional Fourier transform optical system.

    Science.gov (United States)

    Zhao, Chengliang; Cai, Yangjian

    2010-03-01

    Paraxial propagation of a general-type beam through a truncated fractional Fourier transform (FRT) optical system is investigated. Analytical formulas for the electric field and effective beam width of a general-type beam in the FRT plane are derived based on the Collins formula. Our formulas can be used to study the propagation of a variety of laser beams--such as Gaussian, cos-Gaussian, cosh-Gaussian, sine-Gaussian, sinh-Gaussian, flat-topped, Hermite-cosh-Gaussian, Hermite-sine-Gaussian, higher-order annular Gaussian, Hermite-sinh-Gaussian and Hermite-cos-Gaussian beams--through an FRT optical system with or without truncation. The propagation properties of a Hermite-cos-Gaussian beam passing through a rectangularly truncated FRT optical system are studied as a numerical example. Our results clearly show that the truncated FRT optical system provides a convenient way of shaping laser beams.

  17. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Science.gov (United States)

    Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia

    2016-06-01

    Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. Then, candidate sets are determined by merging procedures, and the optimal matching pairs are obtained by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented, and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  18. Functional analysis of Rift Valley fever virus NSs encoding a partial truncation.

    Science.gov (United States)

    Head, Jennifer A; Kalveram, Birte; Ikegami, Tetsuro

    2012-01-01

    Rift Valley fever virus (RVFV), which belongs to the genus Phlebovirus of the family Bunyaviridae, causes high rates of abortion and fetal malformation in infected ruminants, as well as neurological disorders, blindness, or lethal hemorrhagic fever in humans. RVFV is classified as a category A priority pathogen and a select agent in the U.S., and currently there are no therapeutics available for RVF patients. The NSs protein, a major virulence factor of RVFV, inhibits host transcription, including interferon (IFN)-β mRNA synthesis, and promotes degradation of dsRNA-dependent protein kinase (PKR). NSs self-associates at the C-terminal 17 aa., while NSs at aa. 210-230 binds to Sin3A-associated protein (SAP30) to inhibit the activation of the IFN-β promoter. Thus, we hypothesized that NSs function(s) can be abolished by truncation of specific domains, and that co-expression of nonfunctional NSs with intact NSs would result in attenuation of NSs function by a dominant-negative effect. Unexpectedly, we found that RVFV NSs truncated at aa. 6-30, 31-55, 56-80, 81-105, 106-130, 131-155, 156-180, 181-205, 206-230, 231-248 or 249-265 lack the functions of IFN-β mRNA synthesis inhibition and PKR degradation. Truncated NSs were less stable in infected cells, while nuclear localization was inhibited in NSs lacking any of aa. 81-105, 106-130, 131-155, 156-180, 181-205, 206-230 or 231-248. Furthermore, none of the truncated NSs exhibited significant dominant-negative activity for NSs-mediated IFN-β suppression or PKR degradation upon co-expression in cells infected with RVFV. We also found that the truncated NSs, unlike intact NSs, do not interact with RVFV NSs even in the presence of the intact C-terminal self-association domain. Our results suggest that the conformational integrity of NSs is important for the stability, cellular localization and biological functions of RVFV NSs, and that co-expression of truncated NSs does not exhibit a dominant-negative phenotype.

  19. Functional analysis of Rift Valley fever virus NSs encoding a partial truncation.

    Directory of Open Access Journals (Sweden)

    Jennifer A Head

    Rift Valley fever virus (RVFV), which belongs to the genus Phlebovirus of the family Bunyaviridae, causes high rates of abortion and fetal malformation in infected ruminants, as well as neurological disorders, blindness, or lethal hemorrhagic fever in humans. RVFV is classified as a category A priority pathogen and a select agent in the U.S., and currently there are no therapeutics available for RVF patients. The NSs protein, a major virulence factor of RVFV, inhibits host transcription, including interferon (IFN)-β mRNA synthesis, and promotes degradation of dsRNA-dependent protein kinase (PKR). NSs self-associates at the C-terminal 17 aa., while NSs at aa. 210-230 binds to Sin3A-associated protein (SAP30) to inhibit the activation of the IFN-β promoter. Thus, we hypothesized that NSs function(s) can be abolished by truncation of specific domains, and that co-expression of nonfunctional NSs with intact NSs would result in attenuation of NSs function by a dominant-negative effect. Unexpectedly, we found that RVFV NSs truncated at aa. 6-30, 31-55, 56-80, 81-105, 106-130, 131-155, 156-180, 181-205, 206-230, 231-248 or 249-265 lack the functions of IFN-β mRNA synthesis inhibition and PKR degradation. Truncated NSs were less stable in infected cells, while nuclear localization was inhibited in NSs lacking any of aa. 81-105, 106-130, 131-155, 156-180, 181-205, 206-230 or 231-248. Furthermore, none of the truncated NSs exhibited significant dominant-negative activity for NSs-mediated IFN-β suppression or PKR degradation upon co-expression in cells infected with RVFV. We also found that the truncated NSs, unlike intact NSs, do not interact with RVFV NSs even in the presence of the intact C-terminal self-association domain. Our results suggest that the conformational integrity of NSs is important for the stability, cellular localization and biological functions of RVFV NSs, and that co-expression of truncated NSs does not exhibit a dominant-negative phenotype.

  20. Truncated Dual-Cap Nucleation Site Development

    Science.gov (United States)

    Matson, Douglas M.; Sander, Paul J.

    2012-01-01

    During heterogeneous nucleation within a metastable mushy-zone, several geometries for nucleation site development must be considered. Traditional spherical dual-cap and crevice models are compared to a truncated dual cap to determine the activation energy and critical cluster growth kinetics in ternary Fe-Cr-Ni steel alloys. Activation energy results indicate that nucleation is more probable at grain boundaries within the solid than at the solid-liquid interface.

  1. Automatic block-matching registration to improve lung tumor localization during image-guided radiotherapy

    Science.gov (United States)

    Robertson, Scott Patrick

    To improve relatively poor outcomes for locally-advanced lung cancer patients, many current efforts are dedicated to minimizing uncertainties in radiotherapy. This enables the isotoxic delivery of escalated tumor doses, leading to better local tumor control. The current dissertation specifically addresses inter-fractional uncertainties resulting from patient setup variability. An automatic block-matching registration (BMR) algorithm is implemented and evaluated for the purpose of directly localizing advanced-stage lung tumors during image-guided radiation therapy. In this algorithm, small image sub-volumes, termed "blocks", are automatically identified on the tumor surface in an initial planning computed tomography (CT) image. Each block is independently and automatically registered to daily images acquired immediately prior to each treatment fraction. To improve the accuracy and robustness of BMR, this algorithm incorporates multi-resolution pyramid registration, regularization with a median filter, and a new multiple-candidate-registrations technique. The result of block-matching is a sparse displacement vector field that models local tissue deformations near the tumor surface. The distribution of displacement vectors is aggregated to obtain the final tumor registration, corresponding to the treatment couch shift for patient setup correction. Compared to existing rigid and deformable registration algorithms, the final BMR algorithm significantly improves the overlap between target volumes from the planning CT and registered daily images. Furthermore, BMR results in the smallest treatment margins for the given study population. However, despite these improvements, large residual target localization errors were noted, indicating that purely rigid couch shifts cannot correct for all sources of inter-fractional variability. Further reductions in treatment uncertainties may require the combination of high-quality target localization and adaptive radiotherapy.
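
    The core block-matching step can be sketched in a few lines: exhaustively slide one block over a search image and keep the offset with the smallest sum of squared differences. This is a minimal single-block, single-resolution illustration, not the dissertation's multi-resolution, median-regularized, multiple-candidate pipeline.

```python
def match_block(block, search):
    """Return the (row, col) offset in `search` that minimises the sum of
    squared differences against `block` (exhaustive search)."""
    bh, bw = len(block), len(block[0])
    best_ssd, best_off = float("inf"), (0, 0)
    for r in range(len(search) - bh + 1):
        for c in range(len(search[0]) - bw + 1):
            ssd = sum((search[r + i][c + j] - block[i][j]) ** 2
                      for i in range(bh) for j in range(bw))
            if ssd < best_ssd:
                best_ssd, best_off = ssd, (r, c)
    return best_off

# A patch cut from offset (3, 2) of a synthetic ramp image is found there again.
image = [[8 * r + c for c in range(8)] for r in range(8)]
patch = [row[2:5] for row in image[3:6]]
```

    In the registration setting, many such per-block offsets on the tumor surface form a sparse displacement field, which is then aggregated into a single couch shift.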

  2. Sparse Canonical Correlation Analysis via Truncated ℓ1-norm with Application to Brain Imaging Genetics.

    Science.gov (United States)

    Du, Lei; Zhang, Tuo; Liu, Kefei; Yao, Xiaohui; Yan, Jingwen; Risacher, Shannon L; Guo, Lei; Saykin, Andrew J; Shen, Li

    2016-01-01

    Discovering bi-multivariate associations between genetic markers and neuroimaging quantitative traits is a major task in brain imaging genetics. Sparse Canonical Correlation Analysis (SCCA) is a popular technique in this area for its powerful capability in identifying bi-multivariate relationships coupled with feature selection. The existing SCCA methods impose either the ℓ1-norm or its variants. The ℓ0-norm is more desirable, but it remains unexplored since ℓ0-norm minimization is NP-hard. In this paper, we impose the truncated ℓ1-norm to improve the performance of ℓ1-norm based SCCA methods. In addition, we propose two efficient optimization algorithms and prove their convergence. The experimental results, compared with two benchmark methods, show that our method identifies better and more meaningful canonical loading patterns in both simulated and real imaging genetics analyses.
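
    One common form of the truncated ℓ1-norm, shown below, caps each coefficient's contribution at a threshold τ, so large weights are counted (as in ℓ0) rather than shrunk further (as in ℓ1). This is an illustrative surrogate only; the paper's exact formulation and optimization algorithms may differ.

```python
def l1(w):
    """Ordinary l1-norm."""
    return sum(abs(x) for x in w)

def truncated_l1(w, tau):
    """sum_i min(|w_i|, tau): every weight above tau pays the same fixed
    cost, approximating the l0 count of large coefficients."""
    return sum(min(abs(x), tau) for x in w)

weights = [0.05, -2.0, 3.5, 0.0]
```

    As τ grows the penalty recovers the plain ℓ1-norm, while for small τ it behaves like τ times the number of nonzero-ish coefficients.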

  3. Varying coefficient subdistribution regression for left-truncated semi-competing risks data.

    Science.gov (United States)

    Li, Ruosha; Peng, Limin

    2014-10-01

    Semi-competing risks data frequently arise in biomedical studies when time to a disease landmark event is subject to dependent censoring by death, the observation of which, however, is not precluded by the occurrence of the landmark event. In observational studies, the analysis of such data can be further complicated by left truncation. In this work, we study a varying-coefficient subdistribution regression model for left-truncated semi-competing risks data. Our method appropriately accounts for the specific truncation and censoring features of the data, and moreover has the flexibility to accommodate potentially varying covariate effects. The proposed method can be easily implemented, and the resulting estimators are shown to have nice asymptotic properties. We also present inference procedures, such as Kolmogorov-Smirnov type and Cramér-von Mises type hypothesis tests for the covariate effects. Simulation studies and an application to the Denmark diabetes registry demonstrate good finite-sample performance and the practical utility of the proposed method.

  4. A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm

    Science.gov (United States)

    Thirer, Nonel

    2013-05-01

    With the evolution of digital data storage and exchange, it is essential to protect confidential information from unauthorized access. High-performance encryption algorithms have been developed and implemented in software and hardware. Many methods of attacking ciphertexts have also been developed. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of ciphertexts and also in encryption ciphers. This paper analyzes the possibility of using the genetic algorithm as a multiple-key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and of using a three-stage pipeline (with four main blocks: Input data, AES Core, Key generator, Output data) to provide fast encryption and storage/transmission of a large amount of data.

  5. A Fast DCT Algorithm for Watermarking in Digital Signal Processor

    Directory of Open Access Journals (Sweden)

    S. E. Tsai

    2017-01-01

    Full Text Available Discrete cosine transform (DCT) has been an international standard in the Joint Photographic Experts Group (JPEG) format to reduce the blocking effect in digital image compression. This paper proposes a fast discrete cosine transform (FDCT) algorithm that utilizes the energy compactness and matrix sparseness properties in the frequency domain to achieve higher computational performance. For a JPEG image of 8×8 block size in the spatial domain, the algorithm decomposes the two-dimensional (2D) DCT into one pair of one-dimensional (1D) DCTs, with the transform computed in only 24 multiplications. The 2D spatial data are a linear combination of the basis images obtained by the outer product of the column and row vectors of cosine functions, so the inverse DCT is equally efficient. Implementation of the FDCT algorithm shows that embedding a watermark image of 32×32 block pixel size in a 256×256 digital image can be completed in only 0.24 seconds, and extraction of the watermark by the inverse transform takes under 0.21 seconds. The proposed FDCT algorithm is shown to be more computationally efficient than many previous works.
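
    The separability the abstract relies on (a 2D DCT computed as two passes of 1D DCTs) can be sketched as follows; this is a plain orthonormal DCT-II in matrix form, not the paper's 24-multiplication fast factorization:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II matrix: C[k, i] = s_k * cos(pi * (2i + 1) * k / (2n))
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)   # DC row gets the smaller scale factor
    return c

def dct2(block):
    # Separability: 2D DCT = 1D DCT on the rows, then on the columns (C X C^T)
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

rng = np.random.default_rng(0)
block = rng.random((8, 8))
coeffs = dct2(block)
c = dct_matrix(8)
restored = c.T @ coeffs @ c     # inverse transform, since C is orthonormal
print(np.allclose(restored, block))  # True
```

    Because C is orthonormal, the inverse DCT is just the transposed passes, which mirrors the abstract's claim that the inverse transform is equally efficient.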

  6. Symmetric truncations of the shallow-water equations

    International Nuclear Information System (INIS)

    Rouhi, A.; Abarbanel, H.D.I.

    1993-01-01

    Conservation of potential vorticity in Eulerian fluids reflects particle-interchange symmetry in the Lagrangian version of the same theory. The algebra associated with this symmetry in the shallow-water equations is studied here, and we give a method for truncating the degrees of freedom of the theory that preserves a maximal number of invariants associated with this algebra. The finite-dimensional symmetry associated with keeping only N modes of the shallow-water flow is SU(N). In the limit where the number of modes goes to infinity (N→∞), all the conservation laws connected with potential vorticity conservation are recovered. We also present a Hamiltonian which is invariant under this truncated symmetry and which reduces to the familiar shallow-water Hamiltonian when N→∞. All this provides a finite-dimensional framework for numerical work with the shallow-water equations which preserves not only energy and enstrophy but all other known conserved quantities consistent with the finite number of degrees of freedom. The extension of these ideas to other nearly two-dimensional flows is discussed.

  7. Automated synthesis and verification of configurable DRAM blocks for ASIC's

    Science.gov (United States)

    Pakkurti, M.; Eldin, A. G.; Kwatra, S. C.; Jamali, M.

    1993-01-01

    A highly flexible embedded DRAM compiler is developed which can generate DRAM blocks in the range of 256 bits to 256 Kbits. The compiler is capable of automatically verifying the functionality of the generated DRAM modules. The fully automated verification capability is a key feature that ensures the reliability of the generated blocks. The compiler's architecture, algorithms, verification techniques and the implementation methodology are presented.

  8. Astrocyte truncated-TrkB mediates BDNF antiapoptotic effect leading to neuroprotection.

    Science.gov (United States)

    Saba, Julieta; Turati, Juan; Ramírez, Delia; Carniglia, Lila; Durand, Daniela; Lasaga, Mercedes; Caruso, Carla

    2018-05-31

    Astrocytes are glial cells that help maintain brain homeostasis and become reactive in neurodegenerative processes releasing both harmful and beneficial factors. We have demonstrated that brain-derived neurotrophic factor (BDNF) expression is induced by melanocortins in astrocytes but BDNF actions in astrocytes are largely unknown. We hypothesize that BDNF may prevent astrocyte death resulting in neuroprotection. We found that BDNF increased astrocyte viability, preventing apoptosis induced by serum deprivation by decreasing active caspase-3 and p53 expression. The antiapoptotic action of BDNF was abolished by ANA-12 (a specific TrkB antagonist) and by K252a (a general Trk antagonist). Astrocytes only express the BDNF receptor TrkB truncated isoform 1, TrkB-T1. BDNF induced ERK, Akt and Src (a non-receptor tyrosine kinase) activation in astrocytes. Blocking ERK and Akt pathways abolished BDNF protection in serum deprivation-induced cell death. Moreover, BDNF protected astrocytes from death by 3-nitropropionic acid (3-NP), an effect also blocked by ANA-12, K252a, and inhibitors of ERK, calcium and Src. BDNF reduced reactive oxygen species (ROS) levels induced in astrocytes by 3-NP and increased xCT expression and glutathione levels. Astrocyte conditioned media (ACM) from untreated astrocytes partially protected PC12 neurons whereas ACM from BDNF-treated astrocytes completely protected PC12 neurons from 3-NP-induced apoptosis. Both ACM from control and BDNF-treated astrocytes markedly reduced ROS levels induced by 3-NP in PC12 cells. Our results demonstrate that BDNF protects astrocytes from cell death through TrkB-T1 signaling, exerts an antioxidant action, and induces release of neuroprotective factors from astrocytes. This article is protected by copyright. All rights reserved.

  9. The irace package: Iterated racing for automatic algorithm configuration

    Directory of Open Access Journals (Sweden)

    Manuel López-Ibáñez

    2016-01-01

    Full Text Available Modern optimization algorithms typically require the setting of a large number of parameters to optimize their performance. The immediate goal of automatic algorithm configuration is to find, automatically, the best parameter settings of an optimizer. Ultimately, automatic algorithm configuration has the potential to lead to new design paradigms for optimization software. The irace package is a software package that implements a number of automatic configuration procedures. In particular, it offers iterated racing procedures, which have been used successfully to automatically configure various state-of-the-art algorithms. The iterated racing procedures implemented in irace include the iterated F-race algorithm and several extensions and improvements over it. In this paper, we describe the rationale underlying the iterated racing procedures and introduce a number of recent extensions. Among these, we introduce a restart mechanism to avoid premature convergence, the use of truncated sampling distributions to correctly handle parameter bounds, and an elitist racing procedure that ensures that the best configurations returned are also those evaluated on the highest number of training instances. We experimentally evaluate the most recent version of irace and demonstrate with a number of example applications the use and potential of irace in particular, and automatic algorithm configuration in general.
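
    The truncated sampling distributions mentioned for handling parameter bounds can be illustrated by a simple rejection-sampling sketch (illustrative only; irace's actual implementation may differ):

```python
import random

rng = random.Random(0)

def sample_truncated(mu, sigma, lo, hi):
    """Rejection sampling from a normal restricted to [lo, hi]:
    candidates outside the bounds are redrawn, so no probability
    mass piles up at the edges (as plain clamping would cause)."""
    while True:
        x = rng.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

# A parameter centered near its upper bound still respects [0, 1].
samples = [sample_truncated(0.9, 0.3, 0.0, 1.0) for _ in range(1000)]
print(min(samples) >= 0.0 and max(samples) <= 1.0)  # True
```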

  10. Autocorrelation as a source of truncated Lévy flights in foreign exchange rates

    Science.gov (United States)

    Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Da Silva, Sergio

    2003-05-01

    We suggest that the ultraslow speed of convergence associated with truncated Lévy flights (Phys. Rev. Lett. 73 (1994) 2946) may well be explained by autocorrelations in data. We show how a particular type of autocorrelation generates power laws consistent with a truncated Lévy flight. Stock exchanges have been suggested to be modeled by a truncated Lévy flight (Nature 376 (1995) 46; Physica A 297 (2001) 509; Econom. Bull. 7 (2002) 1). Here foreign exchange rate data are taken instead. Scaling power laws in the “probability of return to the origin” are shown to emerge for most currencies. A novel approach to measure how distant a process is from a Gaussian regime is presented.

  11. A unified framework of descent algorithms for nonlinear programs and variational inequalities

    International Nuclear Information System (INIS)

    Patriksson, M.

    1993-01-01

    We present a framework of algorithms for the solution of continuous optimization and variational inequality problems. In the general algorithm, a search direction is found by solving an auxiliary problem obtained by replacing the original cost function with an approximating monotone cost function. The proposed framework encompasses algorithm classes presented earlier by Cohen, Dafermos, Migdalas, and Tseng, and includes numerous descent and successive-approximation type methods, such as Newton methods, Jacobi and Gauss-Seidel type decomposition methods for problems defined over Cartesian product sets, and proximal point methods, among others. The auxiliary problem of the general algorithm also induces equivalent optimization reformulations and descent methods for asymmetric variational inequalities. We study the convergence properties of the general algorithm when applied to unconstrained optimization, nondifferentiable optimization, constrained differentiable optimization, and variational inequalities; the emphasis of the convergence analyses is placed on basic convergence results, convergence using different line search strategies and truncated subproblem solutions, and convergence rate results. This analysis offers a unification of known results; moreover, it provides strengthenings of convergence results for many existing algorithms and indicates possible improvements of their realizations. 482 refs

  12. A result-driven minimum blocking method for PageRank parallel computing

    Science.gov (United States)

    Tao, Wan; Liu, Tao; Yu, Wei; Huang, Gan

    2017-01-01

    Matrix blocking is a common method for improving the computational efficiency of PageRank, but the blocking rules are hard to determine and the subsequent calculation is complicated. To tackle these problems, we propose a minimum blocking method driven by result needs to accomplish a parallel implementation of the PageRank algorithm. The minimum blocking stores only the elements necessary for the result matrix. In return, the subsequent calculation becomes simple and the cost of I/O transmission is cut down. We ran experiments on several matrices of different data sizes and sparsity degrees. The results show that the proposed method has better computational efficiency than traditional blocking methods.
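
    For context, the underlying computation that such blocking schemes accelerate is the PageRank power iteration, sketched here in dense form (a generic textbook version, not the paper's parallel blocked implementation):

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10):
    """Power iteration on a column-stochastic link matrix.
    adj[i, j] = 1 means page j links to page i."""
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0          # dangling nodes: avoid division by zero
    m = adj / col_sums                     # normalize each column to sum to 1
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * m @ r    # damped random-surfer update
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 1, 0]], dtype=float)
r = pagerank(adj)
print(round(r.sum(), 6))  # 1.0 — the ranks form a probability distribution
```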

  13. Cloud Computing Security Model with Combination of Data Encryption Standard Algorithm (DES) and Least Significant Bit (LSB)

    Science.gov (United States)

    Basri, M.; Mawengkang, H.; Zamzami, E. M.

    2018-03-01

    Limited storage resources are one reason to switch to cloud storage. The confidentiality and security of data stored in the cloud are very important, and one way to maintain them is to use cryptographic techniques. The Data Encryption Standard (DES) is one of the block cipher algorithms used as a standard symmetric encryption algorithm. DES produces 8 cipher blocks that are combined into one ciphertext, but this ciphertext is weak against brute-force attacks. Therefore, the 8 cipher blocks are hidden in 8 random images using the Least Significant Bit (LSB) algorithm, which embeds the result of the DES cipher before the images are merged into one.
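
    The LSB embedding step can be sketched generically as follows (byte arrays stand in for image pixel data; this is an illustration of the technique, not the paper's exact scheme):

```python
def embed_lsb(carrier, payload):
    """Hide payload bytes in the least-significant bits of carrier bytes.
    Each payload byte consumes 8 carrier bytes (one bit per byte), so the
    carrier is visually almost unchanged."""
    bits = [(b >> i) & 1 for b in payload for i in range(8)]
    assert len(bits) <= len(carrier), "carrier too small"
    out = bytearray(carrier)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit   # overwrite only the lowest bit
    return bytes(out)

def extract_lsb(carrier, n_bytes):
    """Reassemble n_bytes payload bytes from the carrier's lowest bits."""
    bits = [carrier[i] & 1 for i in range(8 * n_bytes)]
    return bytes(sum(bits[8 * j + i] << i for i in range(8))
                 for j in range(n_bytes))

cipher = b"\x3a\x7f"                       # stand-in for DES cipher output
stego = embed_lsb(bytes(range(64)), cipher)
print(extract_lsb(stego, 2) == cipher)     # True — round-trip recovers the cipher
```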

  14. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Directory of Open Access Journals (Sweden)

    H. Yue

    2016-06-01

    Full Text Available Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating, and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. Then, it ascertains candidate sets by merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented, and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  15. A Derandomized Algorithm for RP-ADMM with Symmetric Gauss-Seidel Method

    OpenAIRE

    Xu, Jinchao; Xu, Kailai; Ye, Yinyu

    2017-01-01

    For the multi-block alternating direction method of multipliers (ADMM), where the objective function can be decomposed into multiple block components, we show that with a block symmetric Gauss-Seidel iteration the algorithm converges quickly. The method applies a block symmetric Gauss-Seidel iteration in the primal update and a linear correction that can be derived in view of Richardson iteration. We also establish the linear convergence rate for linear systems.
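
    A block symmetric Gauss-Seidel sweep of the kind described, applied here to a plain linear system rather than the ADMM primal update, can be sketched as:

```python
import numpy as np

def block_sym_gauss_seidel(A, b, blocks, sweeps=200):
    """Symmetric block Gauss-Seidel: each sweep is a forward pass over the
    blocks followed by a backward pass; every block sub-system is solved
    exactly while the remaining blocks are held fixed."""
    x = np.zeros_like(b)
    for _ in range(sweeps):
        for order in (blocks, list(reversed(blocks))):
            for idx in order:
                Aii = A[np.ix_(idx, idx)]
                # block residual, excluding the block's own contribution
                rhs = b[idx] - A[idx] @ x + Aii @ x[idx]
                x[idx] = np.linalg.solve(Aii, rhs)
    return x

n = 6
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M @ M.T + 2 * n * np.eye(n)        # SPD with a strong diagonal: sweep converges
b = rng.standard_normal(n)
blocks = [[0, 1], [2, 3], [4, 5]]
x = block_sym_gauss_seidel(A, b, blocks)
print(np.allclose(A @ x, b))           # True
```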

  16. Probabilistic Decision Based Block Partitioning for Future Video Coding

    KAUST Repository

    Wang, Zhao; Wang, Shiqi; Zhang, Jian; Wang, Shanshe; Ma, Siwei

    2017-01-01

    …the mode decision problem is cast into a probabilistic framework to select the final partition based on the confidence interval decision strategy. Experimental results show that the proposed CIET algorithm can speed up the QTBT block partitioning structure…

  17. A Slicing Tree Representation and QCP-Model-Based Heuristic Algorithm for the Unequal-Area Block Facility Layout Problem

    Directory of Open Access Journals (Sweden)

    Mei-Shiang Chang

    2013-01-01

    Full Text Available The facility layout problem is a typical combinatorial optimization problem. In this research, a slicing tree representation and a quadratically constrained program model are combined with harmony search to develop a heuristic method for solving the unequal-area block layout problem. Because of the characteristics of the slicing tree structure, we propose a regional structure of harmony memory to memorize facility layout solutions and two kinds of harmony improvisation to enhance the global search ability of the proposed heuristic method. The proposed harmony-search-based heuristic is tested on 10 well-known unequal-area facility layout problems from the literature. The results are compared with the previously best-known solutions obtained by genetic algorithms, tabu search, and ant systems, as well as exact methods. For problems O7, O9, vC10Ra, M11*, and Nug12, new best solutions are found. For the other problems, the proposed approach finds solutions that are very close to the previous best-known solutions.
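
    The harmony search metaheuristic used by the method can be sketched on a toy continuous problem (generic harmony search with the standard HMCR/PAR parameters, not the paper's layout-specific variant with regional memory):

```python
import random

rng = random.Random(0)

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000):
    """Generic harmony search: improvise each variable either from the
    harmony memory (rate hmcr), optionally pitch-adjusted (rate par),
    or at random; the new harmony replaces the worst member if better."""
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                x = rng.choice(hm)[d]               # recall from memory
                if rng.random() < par:
                    x += rng.uniform(-0.05, 0.05) * (hi - lo)  # pitch adjust
            else:
                x = rng.uniform(lo, hi)             # random improvisation
            new.append(min(max(x, lo), hi))
        worst = max(hm, key=f)
        if f(new) < f(worst):
            hm[hm.index(worst)] = new
    return min(hm, key=f)

best = harmony_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                      [(-5.0, 5.0), (-5.0, 5.0)])
print(best)   # should land near the optimum (1, -2)
```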

  18. An intelligent allocation algorithm for parallel processing

    Science.gov (United States)

    Carroll, Chester C.; Homaifar, Abdollah; Ananthram, Kishan G.

    1988-01-01

    The problem of allocating nodes of a program graph to processors in a parallel processing architecture is considered. The algorithm is based on critical path analysis, some allocation heuristics, and the execution granularity of nodes in a program graph. These factors, and the structure of the interprocessor communication network, influence the allocation. To achieve realistic estimates of the execution durations of allocations, the algorithm considers the fact that nodes in a program graph have to communicate through varying numbers of tokens. Coarse and fine granularities have been implemented, with interprocessor token-communication durations varying from zero up to values comparable to the execution durations of individual nodes. The effect of communication network structures on allocation is demonstrated by performing allocations for crossbar (non-blocking) and star (blocking) networks. The algorithm assumes the availability of as many processors as it needs for the optimal allocation of any program graph; hence, the focus of allocation has been on varying token-communication durations rather than varying the number of processors. The algorithm always utilizes as many processors as necessary for the optimal allocation of any program graph, depending upon the granularity and characteristics of the interprocessor communication network.
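
    The critical path analysis underlying the allocation can be sketched as a longest-path computation over a task DAG (an illustrative toy, not the paper's allocator; task names and durations are invented):

```python
def critical_path(durations, deps):
    """Length of the longest execution-time path through a task DAG.
    Tasks on this path bound the schedule no matter how many
    processors are available."""
    finish = {}
    def t(node):
        # earliest finish time = own duration + latest predecessor finish
        if node not in finish:
            finish[node] = durations[node] + max(
                (t(p) for p in deps.get(node, [])), default=0)
        return finish[node]
    return max(t(n) for n in durations)

durations = {"a": 3, "b": 2, "c": 4, "d": 1}
deps = {"c": ["a", "b"], "d": ["c"]}   # c waits for a and b; d waits for c
print(critical_path(durations, deps))  # 8  (a -> c -> d: 3 + 4 + 1)
```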

  19. Using an Augmented Lagrangian Method and block fracturing in the DDA method

    International Nuclear Information System (INIS)

    Lin, C.T.; Amadei, B.; Sture, S.

    1994-01-01

    This paper presents two extensions to the Discontinuous Deformation Analysis (DDA) method originally proposed by Shi for modeling the response of blocky rock masses to mechanical loading. The first extension improves the block contact algorithm: an Augmented Lagrangian Method replaces the Penalty Method originally proposed. It allows Lagrange multipliers to be introduced without increasing the number of equations that need to be solved, and thus block contact forces can be calculated more accurately. A block fracturing capability based on a three-parameter Mohr-Coulomb criterion constitutes the second extension. It allows for shear or tensile fracturing of intact blocks and the formation of smaller blocks.

  20. Rotating D0-branes and consistent truncations of supergravity

    International Nuclear Information System (INIS)

    Anabalón, Andrés; Ortiz, Thomas; Samtleben, Henning

    2013-01-01

    The fluctuations around the D0-brane near-horizon geometry are described by two-dimensional SO(9) gauged maximal supergravity. We work out the U(1)^4 truncation of this theory, whose scalar sector consists of five dilaton and four axion fields. We construct the full non-linear Kaluza-Klein ansatz for the embedding of the dilaton sector into type IIA supergravity. This yields a consistent truncation around a geometry which is the warped product of a two-dimensional domain wall and the sphere S^8. As an application, we consider the solutions corresponding to rotating D0-branes, which in the near-horizon limit approach AdS_2 × M_8 geometries, and discuss their thermodynamic properties. More generally, we study the appearance of such solutions in the presence of non-vanishing axion fields.

  1. Scalable force directed graph layout algorithms using fast multipole methods

    KAUST Repository

    Yunis, Enas Abdulrahman

    2012-06-01

    We present an extension to ExaFMM, a Fast Multipole Method library, as a generalized approach for fast and scalable execution of the Force-Directed Graph Layout algorithm. The Force-Directed Graph Layout algorithm is a physics-based approach to graph layout that treats the vertices V as repelling charged particles with the edges E connecting them acting as springs. Traditionally, the amount of work required in applying the Force-Directed Graph Layout algorithm is O(|V|^2 + |E|) using direct calculations and O(|V| log |V| + |E|) using truncation, filtering, and/or multi-level techniques. Correct application of the Fast Multipole Method allows us to maintain a lower complexity of O(|V| + |E|) while regaining most of the precision lost in other techniques. Solving layout problems for truly large graphs with millions of vertices still requires a scalable algorithm and implementation. We have been able to leverage the scalability and architectural adaptability of the ExaFMM library to create a Force-Directed Graph Layout implementation that runs efficiently on distributed multicore and multi-GPU architectures. © 2012 IEEE.
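
    The direct O(|V|^2 + |E|) force computation that the FMM accelerates can be sketched as a naive dense step (purely for illustration; the force constants and the explicit-Euler update are assumptions, not the paper's scheme):

```python
import numpy as np

def layout_step(pos, edges, dt=0.05, k=1.0):
    """One explicit step of the all-pairs force model: Coulomb-like
    repulsion between every vertex pair (the O(|V|^2) part) plus spring
    attraction along edges (the O(|E|) part)."""
    n = len(pos)
    diff = pos[:, None, :] - pos[None, :, :]            # pairwise displacements
    dist2 = (diff ** 2).sum(-1) + np.eye(n)             # eye avoids self-division
    force = (k * diff / dist2[..., None]).sum(axis=1)   # repulsion ~ 1/r
    for i, j in edges:                                  # springs pull endpoints together
        force[i] -= pos[i] - pos[j]
        force[j] -= pos[j] - pos[i]
    return pos + dt * force

rng = np.random.default_rng(1)
pos = rng.standard_normal((4, 2))
pos = layout_step(pos, edges=[(0, 1), (1, 2), (2, 3)])
print(pos.shape)  # (4, 2)
```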

  2. Development of Data Processing Algorithms for the Upgraded LHCb Vertex Locator

    CERN Document Server

    AUTHOR|(CDS)2101352

    The LHCb detector will see a major upgrade during LHC Long Shutdown II, which is planned for 2019/20. The silicon Vertex Locator subdetector will be upgraded for operation under the new run conditions. The detector will be read out using a data acquisition board based on an FPGA. The work presented in this thesis is concerned with the development of the data processing algorithms to be used in this data acquisition board. In particular, work in three different areas of the FPGA is covered: the data processing block, the low level interface, and the post router block. The algorithms produced have been simulated and tested, and shown to provide the required performance. Errors in the initial implementation of the Gigabit Wireline Transmitter serialized data in the low level interface were discovered and corrected. The data scrambling algorithm and the post router block have been incorporated in the front end readout chip.

  3. A protein-truncating R179X variant in RNF186 confers protection against ulcerative colitis

    NARCIS (Netherlands)

    Rivas, Manuel A.; Graham, Daniel; Sulem, Patrick; Stevens, Christine; Desch, A. Nicole; Goyette, Philippe; Gudbjartsson, Daniel; Jonsdottir, Ingileif; Thorsteinsdottir, Unnur; Degenhardt, Frauke; Mucha, Soeren; Kurki, Mitja I.; Li, Dalin; D'Amato, Mauro; Annese, Vito; Vermeire, Severine; Weersma, Rinse K.; Halfvarson, Jonas; Paavola-Sakki, Paulina; Lappalainen, Maarit; Lek, Monkol; Cummings, Beryl; Tukiainen, Taru; Haritunians, Talin; Halme, Leena; Koskinen, Lotta L. E.; Ananthakrishnan, Ashwin N.; Luo, Yang; Heap, Graham A.; Visschedijk, Marijn C.; MacArthur, Daniel G.; Neale, Benjamin M.; Ahmad, Tariq; Anderson, Carl A.; Brant, Steven R.; Duerr, Richard H.; Silverberg, Mark S.; Cho, Judy H.; Palotie, Aarno; Saavalainen, Paivi; Kontula, Kimmo; Farkkila, Martti; McGovern, Dermot P. B.; Franke, Andre; Stefansson, Kari; Rioux, John D.; Xavier, Ramnik J.; Daly, Mark J.

    Protein-truncating variants protective against human disease provide in vivo validation of therapeutic targets. Here we used targeted sequencing to conduct a search for protein-truncating variants conferring protection against inflammatory bowel disease exploiting knowledge of common variants

  4. A min cut-set-wise truncation procedure for importance measures computation in probabilistic safety assessment

    Energy Technology Data Exchange (ETDEWEB)

    Duflot, Nicolas [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: nicolas.duflot@areva.com; Berenguer, Christophe [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: christophe.berenguer@utt.fr; Dieulle, Laurence [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: laurence.dieulle@utt.fr; Vasseur, Dominique [EPSNA Group (Nuclear PSA and Application), EDF Research and Development, 1, avenue du Gal de Gaulle, 92141 Clamart cedex (France)], E-mail: dominique.vasseur@edf.fr

    2009-11-15

    A truncation process aims to determine which of the minimal cut-sets (MCS) produced by a probabilistic safety assessment (PSA) model are significant. Several truncation processes have been proposed for evaluating the probability of core damage while ensuring a fixed accuracy level. However, the evaluation of new risk indicators such as importance measures requires re-examining the truncation process in order to ensure that the produced estimates will be accurate enough. In this paper, a new truncation process is developed that permits estimating, from a single set of MCS, the importance measure of any basic event with the desired accuracy level. The main contribution of this new method is an MCS-wise truncation criterion involving two thresholds: an absolute threshold in addition to a new relative threshold concerning the potential probability of the MCS of interest. The method has been tested on a complete level 1 PSA model of a 900 MWe NPP developed by 'Electricite de France' (EDF), and the results presented in this paper indicate that, to reach the same accuracy level, the proposed method produces a set of MCS whose size is significantly reduced.
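
    The two-threshold idea can be illustrated with a generic sketch (cut-sets given as lists of basic-event probabilities, independence assumed; the paper's notion of "potential probability" is more refined than this plain product):

```python
def truncate_mcs(cut_sets, abs_threshold=1e-12, rel_threshold=1e-4):
    """Keep a minimal cut-set if its probability (product of its
    basic-event probabilities) exceeds the absolute threshold, or
    exceeds rel_threshold relative to the most probable cut-set."""
    def prob(cs):
        p = 1.0
        for basic_event_p in cs:
            p *= basic_event_p
        return p
    probs = [prob(cs) for cs in cut_sets]
    p_max = max(probs)
    return [cs for cs, p in zip(cut_sets, probs)
            if p >= abs_threshold or p >= rel_threshold * p_max]

mcs = [[1e-3, 1e-2],   # p = 1e-5: kept by both thresholds
       [1e-4, 1e-5],   # p = 1e-9: kept
       [1e-6, 1e-8]]   # p = 1e-14: dropped by both thresholds
print(len(truncate_mcs(mcs)))  # 2
```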

  5. A min cut-set-wise truncation procedure for importance measures computation in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Duflot, Nicolas; Berenguer, Christophe; Dieulle, Laurence; Vasseur, Dominique

    2009-01-01

    A truncation process aims to determine which of the minimal cut-sets (MCS) produced by a probabilistic safety assessment (PSA) model are significant. Several truncation processes have been proposed for evaluating the probability of core damage while ensuring a fixed accuracy level. However, the evaluation of new risk indicators such as importance measures requires re-examining the truncation process in order to ensure that the produced estimates will be accurate enough. In this paper, a new truncation process is developed that permits estimating, from a single set of MCS, the importance measure of any basic event with the desired accuracy level. The main contribution of this new method is an MCS-wise truncation criterion involving two thresholds: an absolute threshold in addition to a new relative threshold concerning the potential probability of the MCS of interest. The method has been tested on a complete level 1 PSA model of a 900 MWe NPP developed by 'Electricite de France' (EDF), and the results presented in this paper indicate that, to reach the same accuracy level, the proposed method produces a set of MCS whose size is significantly reduced.

  6. Lymphoscintigraphy for sentinel lymph node detection in breast cancer: usefulness of image truncation

    International Nuclear Information System (INIS)

    Carrier, P.; Remp, H.J.; Chaborel, J.P.; Lallement, M.; Bussiere, F.; Darcourt, J.; Lallement, M.; Leblanc-Talent, P.; Machiavello, J.C.; Ettore, F.

    2004-01-01

    The sentinel lymph node (SNL) detection in breast cancer has recently been validated. It allows a reduction of the number of axillary dissections and their corresponding side effects. We tested a simple method of image truncation in order to improve the sensitivity of lymphoscintigraphy. This approach is justified by the magnitude of the uptake difference between the injection site and the SNL. We prospectively investigated SNL detection using a triple method (lymphoscintigraphy, blue dye, and surgical radio-detection) in 130 patients. The SNL was identified in 104 of the 130 patients (80%) using the standard images and in 126 of them (96.9%) using the truncated images. Blue dye detection and surgical radio-detection had sensitivities of 76.9% and 98.5%, respectively. The false-negative rate was 10.3%. 288 SNL were dissected, 31 of which were metastatic. Among the 19 patients with metastatic SNL and more than one SNL detected, the metastatic SNL was not the hottest in 9 of them. 28 metastatic SNL were detected on truncated images versus only 19 on standard images. Truncation, which dramatically increases the sensitivity of lymphoscintigraphy, allows more SNL to be dissected and probably reduces the false-negative rate. (author)
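
    The kind of image truncation described, capping the intensity scale so the faint sentinel node is not drowned out by the very hot injection site, can be sketched generically (a simple intensity clip; the clip fraction and array values are assumptions, not the study's protocol):

```python
import numpy as np

def truncate_display(img, fraction=0.05):
    """Clip intensities at a fraction of the maximum so that structures
    far dimmer than the hottest pixel remain visible on the same scale."""
    return np.minimum(img, fraction * img.max())

img = np.zeros((64, 64))
img[10, 10] = 1000.0   # very hot injection site
img[40, 40] = 3.0      # faint node, invisible on a 0-1000 display scale
out = truncate_display(img)
print(out.max())       # 50.0 — the injection site no longer dominates the scale
```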

  7. Designing algorithms using CAD technologies

    Directory of Open Access Journals (Sweden)

    Alin IORDACHE

    2008-01-01

    Full Text Available 'Logical diagrams', a representative example of a modular eLearning-platform application, is intended to be a useful learning and testing tool for beginner programmers, but also for more experienced ones. The problem this application tries to solve concerns young programmers who forget the fundamentals of the domain: algorithmics. Logical diagrams are a graphic representation of an algorithm, which uses different geometrical figures (parallelograms, rectangles, rhombuses, circles) with particular meanings, called blocks, connected between them to reveal the flow of the algorithm. The role of this application is to help the user build the diagram for the algorithm and then automatically generate the C code and test it.

  8. Accelerating nuclear configuration interaction calculations through a preconditioned block iterative eigensolver

    Science.gov (United States)

    Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; Ng, Esmond G.; Maris, Pieter; Vary, James P.

    2018-01-01

    We describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high-performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses for the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of the special structure of the nuclear configuration interaction problem, which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. We also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos-based algorithm for problems of moderate size on a Cray XC30 system.
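
    A preconditioned block iterative eigensolver of the kind described can be sketched with SciPy's LOBPCG on a small sparse symmetric stand-in matrix (illustrative only; the paper's solver, matrix, and preconditioner are problem-specific):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

n = 100
# Sparse symmetric tridiagonal stand-in for a configuration-interaction matrix
A = diags([np.arange(1.0, n + 1), -np.ones(n - 1), -np.ones(n - 1)],
          [0, -1, 1]).tocsr()
M = diags(1.0 / A.diagonal()).tocsr()    # Jacobi (diagonal) preconditioner
X = np.random.default_rng(0).standard_normal((n, 4))  # block of starting guesses

# Preconditioned block iteration for the 4 lowest eigenpairs
w, v = lobpcg(A, X, M=M, largest=False, tol=1e-9, maxiter=500)
exact = np.linalg.eigvalsh(A.toarray())[:4]
print(np.allclose(np.sort(w), exact, atol=1e-4))
```

    The block of starting vectors and the preconditioner play the same roles the abstract highlights: good guesses plus an effective preconditioner drive the rapid convergence, and the block structure raises arithmetic intensity.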

  9. Truncated forms of viral VP2 proteins fused to EGFP assemble into fluorescent parvovirus-like particles

    Directory of Open Access Journals (Sweden)

    Vuento Matti

    2006-12-01

    Full Text Available Abstract Fluorescence correlation spectroscopy (FCS) monitors random movements of fluorescent molecules in solution, giving information about, for example, the number and size of nanoparticles. The canine parvovirus VP2 structural protein, as well as N-terminal deletion mutants of VP2 (−14, −23, and −40 amino acids), were fused to the C-terminus of the enhanced green fluorescent protein (EGFP). The proteins were produced in insect cells, purified, and analyzed by western blotting, confocal and electron microscopy, as well as FCS. The non-truncated form, EGFP-VP2, diffused with a hydrodynamic radius of 17 nm, whereas the fluorescent mutants truncated by 14, 23, and 40 amino acids showed hydrodynamic radii of 7, 20, and 14 nm, respectively. These results show that the non-truncated EGFP-VP2 fusion protein and the EGFP-VP2 constructs truncated by 23 and by as much as 40 amino acids were able to form virus-like particles (VLPs). The fluorescent VLP harbouring VP2 truncated by 23 amino acids showed a somewhat larger hydrodynamic radius than the non-truncated EGFP-VP2. In contrast, the construct containing EGFP-VP2 truncated by 14 amino acids was not able to assemble into VLP-resembling structures. The formation of capsid structures was confirmed by confocal and electron microscopy. The number of fluorescent fusion protein molecules present within the different VLPs was determined by FCS. In conclusion, FCS provides a novel strategy to analyze virus assembly and gives valuable structural information for the strategic development of parvovirus-like particles.

  10. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT

    International Nuclear Information System (INIS)

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.; Pan Xiaochuan

    2010-01-01

    Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: The authors developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories.

  11. Fluorometric graphene oxide-based detection of Salmonella enteritis using a truncated DNA aptamer.

    Science.gov (United States)

    Chinnappan, Raja; AlAmer, Saleh; Eissa, Shimaa; Rahamn, Anas Abdel; Abu Salah, Khalid M; Zourob, Mohammed

    2017-12-18

    The work describes a fluorescence-based study for mapping the highest-affinity truncated aptamer from the full-length sequence and its integration in a graphene oxide platform for the detection of Salmonella Enteritidis. To identify the best truncated sequence, molecular beacons and a displacement assay design are applied. In the fluorescence displacement assay, the truncated aptamer was hybridized with fluorescein- and quencher-labeled complementary sequences to form a fluorescence/quencher pair. In the presence of S. Enteritidis, the aptamer dissociates from the complementary labeled oligonucleotides and thus the fluorescein/quencher pair becomes physically separated. This leads to an increase in fluorescence intensity. One of the truncated aptamers identified has a 2-fold lower dissociation constant (3.2 nM) compared to the full-length aptamer (6.3 nM). The truncated aptamer selected in this process was used to develop a fluorometric graphene oxide (GO) based assay. If the fluorescein-labeled aptamer is adsorbed on GO via π-stacking interaction, fluorescence is quenched. However, in the presence of the target (S. Enteritidis), the labeled aptamer is released from the surface to form a stable complex with the bacteria, and fluorescence is restored in proportion to the quantity of bacteria present. The resulting assay has a detection limit as low as 25 cfu·mL⁻¹. The cross-reactivity to Salmonella Typhimurium, Staphylococcus aureus and Escherichia coli is negligible. The assay was applied to analyze doped milk samples and gave good recovery. Thus, we believe that the truncated aptamer/graphene oxide platform is a potential tool for the detection of S. Enteritidis. Graphical abstract Fluorescently labelled aptamer against Salmonella Enteritidis was adsorbed on the surface of graphene oxide by π-stacking interaction. This results in quenching of the fluorescence of the label. Addition of Salmonella Enteritidis restores fluorescence, and this

  12. ANN Synthesis Model of Single-Feed Corner-Truncated Circularly Polarized Microstrip Antenna with an Air Gap for Wideband Applications

    Directory of Open Access Journals (Sweden)

    Zhongbao Wang

    2014-01-01

    Full Text Available A computer-aided design model based on the artificial neural network (ANN) is proposed to directly obtain the patch physical dimensions of the single-feed corner-truncated circularly polarized microstrip antenna (CPMA) with an air gap for wideband applications. To take account of the effect of the air gap, an equivalent relative permittivity is introduced and adopted to calculate the resonant frequency and Q-factor of square microstrip antennas for obtaining the training data sets. ANN architectures using multilayered perceptrons (MLPs) and radial basis function networks (RBFNs) are compared. Also, six learning algorithms are used to train the MLPs for comparison. It is found that MLPs trained with the Levenberg-Marquardt (LM) algorithm are better than RBFNs for the synthesis of the CPMA. An accurate model is achieved by using an MLP with three hidden layers. The model is validated by electromagnetic simulation and measurements. It is highly useful to antenna engineers for facilitating the design of the single-feed CPMA with an air gap.
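The synthesis idea, training an MLP to map electrical requirements to patch dimensions, can be sketched as below. Everything here is assumed for illustration: the training data come from a made-up closed-form relation standing in for EM-simulated samples, and scikit-learn's `lbfgs` solver stands in for Levenberg-Marquardt, which scikit-learn does not provide.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Hypothetical training set: inputs (resonant frequency in GHz, equivalent
# relative permittivity), target = patch dimension in mm from a toy smooth
# relation, standing in for the paper's simulated data sets.
f = rng.uniform(2.0, 6.0, 400)
eps = rng.uniform(1.5, 4.0, 400)
L = 150.0 / (f * np.sqrt(eps))  # toy closed-form stand-in, not a real antenna formula
X = np.column_stack([f, eps])

# Three hidden layers, as in the paper's best MLP; solver='lbfgs' is a
# stand-in for the Levenberg-Marquardt training the paper uses.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 20, 20), activation="tanh",
                 solver="lbfgs", max_iter=5000, random_state=0),
)
model.fit(X, L)
```

Once trained, the network gives the patch dimension directly from the design targets, avoiding the iterative simulate-and-adjust loop.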

  13. Design of Packet-Based Block Codes with Shift Operators

    Directory of Open Access Journals (Sweden)

    Ilow Jacek

    2010-01-01

    Full Text Available This paper introduces packet-oriented block codes for the recovery of lost packets and the correction of an erroneous single packet. Specifically, a family of systematic codes is proposed, based on a Vandermonde matrix applied to a group of information packets to construct redundant packets, where the elements of the Vandermonde matrix are bit-level right arithmetic shift operators. The code design is applicable to packets of any size, provided that the packets within a block of information packets are of uniform length. In order to decrease the overhead associated with packet padding using shift operators, non-Vandermonde matrices are also proposed for designing packet-oriented block codes. An efficient matrix inversion procedure for the off-line design of the decoding algorithm is presented to recover lost packets. The error correction capability of the design is investigated as well. The decoding algorithm, based on syndrome decoding, to correct a single erroneous packet in a group of received packets is presented. The paper is equipped with examples of codes using different parameters. The code designs and their performance are tested using Monte Carlo simulations; the results obtained exhibit good agreement with the corresponding theoretical results.
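The encoding side of such a scheme can be sketched as follows. This is a hedged illustration of the general construction only: packets are modeled as Python integers, the "shift operator" is a plain bit-level right shift, the exponents `i*j` are an assumed Vandermonde-like layout, and the paper's matrix-inversion decoding is omitted (only the trivial single-erasure case of the plain parity row is shown in the usage note).

```python
# Sketch: systematic redundant packets built by applying shift operators
# arranged in a Vandermonde-like pattern to a group of information packets.
def shift(packet_int, bits):
    """Bit-level right shift, standing in for the paper's shift operator."""
    return packet_int >> bits

def encode(info_packets, n_redundant):
    """r_j = XOR_i shift(p_i, i*j) for j = 0..n_redundant-1.
    Row j = 0 degenerates to plain XOR parity of the block."""
    redundant = []
    for j in range(n_redundant):
        acc = 0
        for i, p in enumerate(info_packets):
            acc ^= shift(p, i * j)
        redundant.append(acc)
    return redundant

# Example: four 32-bit "packets" (toy data) and two redundant packets.
info = [0xDEADBEEF, 0x01234567, 0x89ABCDEF, 0x0F0F0F0F]
parity = encode(info, 2)
```

Because row j = 0 is plain XOR parity, a single lost packet can be recovered by XOR-ing the parity with the surviving packets; recovering multiple losses requires inverting the shift-operator matrix as the paper describes.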

  14. A novel block cryptosystem based on iterating a chaotic map

    International Nuclear Information System (INIS)

    Xiang Tao; Liao Xiaofeng; Tang Guoping; Chen Yong; Wong, Kwok-wo

    2006-01-01

    A block cryptographic scheme based on iterating a chaotic map is proposed. With random binary sequences generated from the real-valued chaotic map, the plaintext block is permuted by a key-dependent shift approach and then encrypted by the classical chaotic masking technique. Simulation results show that the performance and security of the proposed cryptographic scheme are better than those of existing algorithms. Advantages and security of our scheme are also discussed in detail.
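The permute-then-mask pattern can be illustrated with a small sketch. This is not the paper's cipher: the logistic map, the byte-extraction rule, and the cyclic-shift permutation are generic assumptions chosen to show the structure (chaotic key stream, key-dependent shift, then masking), and the construction is for illustration only, not for real cryptographic use.

```python
def logistic_bytes(x0, n, r=3.99, burn=100):
    """Key-stream bytes from a real-valued logistic map x -> r*x*(1-x)."""
    x = x0
    for _ in range(burn):           # discard the transient
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def encrypt_block(block, x0):
    """Key-dependent cyclic shift of the block, then XOR masking."""
    ks = logistic_bytes(x0, len(block) + 1)
    s = ks[0] % len(block)                     # shift amount from the key stream
    shifted = block[s:] + block[:s]            # permutation step
    return bytes(b ^ k for b, k in zip(shifted, ks[1:]))  # masking step

def decrypt_block(ct, x0):
    """Undo the mask, then rotate back by the same key-dependent shift."""
    ks = logistic_bytes(x0, len(ct) + 1)
    s = ks[0] % len(ct)
    shifted = bytes(b ^ k for b, k in zip(ct, ks[1:]))
    return shifted[-s:] + shifted[:-s] if s else shifted
```

The key is the initial condition x0 of the map; both sides regenerate the same key stream from it, so decryption simply runs the two steps in reverse.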

  15. A parallel attractor-finding algorithm based on Boolean satisfiability for genetic regulatory networks.

    Directory of Open Access Journals (Sweden)

    Wensheng Guo

    Full Text Available In biological systems, the dynamic analysis method has gained increasing attention in the past decade. The Boolean network is the most common model of a genetic regulatory network. The interactions of activation and inhibition in the genetic regulatory network are modeled as a set of functions of the Boolean network, while the state transitions in the Boolean network reflect the dynamic properties of the genetic regulatory network. A difficult problem for state transition analysis is the finding of attractors. In this paper, we modeled the genetic regulatory network as a Boolean network and proposed a solving algorithm to tackle the attractor-finding problem. In the proposed algorithm, we partitioned the Boolean network into several blocks consisting of the strongly connected components according to their gradients, and defined the connections between blocks as decision nodes. Based on the solutions calculated on the decision nodes and using a satisfiability solving algorithm, we identified the attractors in the state transition graph of each block. The proposed algorithm is benchmarked on a variety of genetic regulatory networks. Compared with existing algorithms, it achieved similar performance on small test cases, and outperformed them on larger and more complex ones, which matches the trend of modern genetic regulatory networks. Furthermore, while the existing satisfiability-based algorithms cannot be parallelized due to their inherent algorithm design, the proposed algorithm exhibits good scalability on parallel computing architectures.
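For intuition about what "finding attractors" means, here is a brute-force baseline on a tiny synchronous Boolean network, not the paper's SAT-based block-partition method. The three-gene update rules are hypothetical, and exhaustive state enumeration only works for small networks, which is exactly why scalable SAT-based methods are needed.

```python
from itertools import product

# Toy 3-gene network (hypothetical rules), synchronous update:
# A follows B, B follows A, C needs both A and B.
def update(state):
    a, b, c = state
    return (b, a, a and b)

def find_attractors(n_genes, update_fn):
    """Exhaustive search: follow each state until some state repeats;
    the repeated suffix of the trajectory is the attractor cycle."""
    attractors = set()
    for start in product([False, True], repeat=n_genes):
        seen = {}                      # state -> position in the trajectory
        s = start
        while s not in seen:
            seen[s] = len(seen)
            s = update_fn(s)
        cycle_start = seen[s]          # first occurrence of the repeated state
        trajectory = sorted(seen.items(), key=lambda kv: kv[1])
        cycle = [st for st, i in trajectory[cycle_start:]]
        attractors.add(frozenset(cycle))
    return attractors

atts = find_attractors(3, update)
```

This toy network has two fixed points (all-off and all-on) and one length-2 cycle in which A and B oscillate; every one of the 8 states flows into one of these three attractors.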

  16. Column generation algorithms for virtual network embedding in flexi-grid optical networks.

    Science.gov (United States)

    Lin, Rongping; Luo, Shan; Zhou, Jingwei; Wang, Sheng; Chen, Bin; Zhang, Xiaoning; Cai, Anliang; Zhong, Wen-De; Zukerman, Moshe

    2018-04-16

    Network virtualization provides means for efficient management of network resources by embedding multiple virtual networks (VNs) to share efficiently the same substrate network. Such virtual network embedding (VNE) gives rise to a challenging problem of how to optimize resource allocation to VNs and to guarantee their performance requirements. In this paper, we provide VNE algorithms for efficient management of flexi-grid optical networks. We provide an exact algorithm aiming to minimize the total embedding cost in terms of spectrum cost and computation cost for a single VN request. Then, to achieve scalability, we also develop a heuristic algorithm for the same problem. We apply these two algorithms to a dynamic traffic scenario where many VN requests arrive one by one. We first demonstrate by simulations, for the case of a six-node network, that the heuristic algorithm obtains blocking probabilities very close to those of the exact algorithm (about 0.2% higher). Then, for a network of realistic size (namely, USnet), we demonstrate that the blocking probability of our new heuristic algorithm is about one order of magnitude lower than that of a simpler heuristic algorithm, which was a component of an earlier published algorithm.

  17. A Self-embedding Robust Digital Watermarking Algorithm with Blind Detection

    Directory of Open Access Journals (Sweden)

    Gong Yunfeng

    2014-08-01

    Full Text Available In order to achieve perfectly blind detection for a robust watermarking algorithm, a novel self-embedding robust digital watermarking algorithm with blind detection is proposed in this paper. Firstly, the original image is divided into non-overlapping image blocks, and decomposable coefficients are obtained by a lifting-based wavelet transform (LWT) in every image block. Secondly, the low-frequency coefficients of the block images are selected and then approximately represented as the product of a base matrix and a coefficient matrix using NMF. The feature vector representing the original image is then obtained by quantizing the coefficient matrix, and finally the adaptively quantized robust watermark is embedded in the low-frequency coefficients of the LWT. Experimental results show that the scheme is robust against common signal processing attacks, while perfectly blind detection is achieved.

  18. Analytic Method for Pressure Recovery in Truncated Diffusers ...

    African Journals Online (AJOL)

    A prediction method is presented for the static pressure recovery in subsonic axisymmetric truncated conical diffusers. In the analysis, a turbulent boundary layer is assumed at the diffuser inlet and a potential core exists throughout the flow. When flow separation occurs, this approach cannot be used to predict the maximum ...

  19. Rotating D0-branes and consistent truncations of supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Anabalón, Andrés [Departamento de Ciencias, Facultad de Artes Liberales, Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Av. Padre Hurtado 750, Viña del Mar (Chile); Université de Lyon, Laboratoire de Physique, UMR 5672, CNRS École Normale Supérieure de Lyon 46, allée d' Italie, F-69364 Lyon cedex 07 (France); Ortiz, Thomas; Samtleben, Henning [Université de Lyon, Laboratoire de Physique, UMR 5672, CNRS École Normale Supérieure de Lyon 46, allée d' Italie, F-69364 Lyon cedex 07 (France)

    2013-12-18

    The fluctuations around the D0-brane near-horizon geometry are described by two-dimensional SO(9) gauged maximal supergravity. We work out the U(1){sup 4} truncation of this theory whose scalar sector consists of five dilaton and four axion fields. We construct the full non-linear Kaluza–Klein ansatz for the embedding of the dilaton sector into type IIA supergravity. This yields a consistent truncation around a geometry which is the warped product of a two-dimensional domain wall and the sphere S{sup 8}. As an application, we consider the solutions corresponding to rotating D0-branes which in the near-horizon limit approach AdS{sub 2}×M{sub 8} geometries, and discuss their thermodynamical properties. More generally, we study the appearance of such solutions in the presence of non-vanishing axion fields.

  20. Cipher block based authentication module: A hardware design perspective

    NARCIS (Netherlands)

    Michail, H.E.; Schinianakis, D.; Goutis, C.E.; Kakarountas, A.P.; Selimis, G.

    2011-01-01

    Message Authentication Codes (MACs) are widely used in order to authenticate data packets, which are transmitted thought networks. Typically MACs are implemented using modules like hash functions and in conjunction with encryption algorithms (like Block Ciphers), which are used to encrypt the

  1. Generation of truncated recombinant form of tumor necrosis factor ...

    African Journals Online (AJOL)

    Generation of truncated recombinant form of tumor necrosis factor ... as 6×His-tagged protein using the E. coli BL21 (DE3) expression system. The protein was ... proapoptotic signaling cascade through TNFR1 [5], which is ...

  2. A new block cipher based on chaotic map and group theory

    International Nuclear Information System (INIS)

    Yang Huaqian; Liao Xiaofeng; Wong Kwokwo; Zhang Wei; Wei Pengcheng

    2009-01-01

    Based on the study of some existing chaotic encryption algorithms, a new block cipher is proposed. In the proposed cipher, two sequences of decimal numbers individually generated by two chaotic piecewise linear maps are used to determine the noise vectors by comparing the elements of the two sequences. Then a sequence of decimal numbers is used to define a bijection map. The modular multiplication operation in the group Z*_(2^8+1) and permutations are alternately applied on plaintext with a block length of multiples of 64 bits to produce ciphertext blocks of the same length. Analyses show that the proposed block cipher does not suffer from the flaws of pure chaotic cryptosystems.

  3. Zlib: A numerical library for optimal design of truncated power series algebra and map parameterization routines

    International Nuclear Information System (INIS)

    Yan, Y.T.

    1996-11-01

    A brief review of the Zlib development is given. Emphasized is the Zlib nerve system, which uses One-Step Index Pointers (OSIPs) for efficient computation and flexible use of the Truncated Power Series Algebra (TPSA). Also emphasized is the treatment of parameterized maps with an object-oriented language (e.g. C++). A parameterized map can be a Vector Power Series (Vps) or a Lie generator represented by an exponent of a Truncated Power Series (Tps), each coefficient of which is itself a truncated power series object.
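The core TPSA operation, multiplying series while discarding all terms above a fixed order, can be shown in a minimal one-variable sketch. Zlib itself is a multivariate C++ library with OSIP indexing; the single-variable Python class below is only a toy analogue of the truncation idea.

```python
class Tps:
    """Minimal one-variable truncated power series (order <= nmax),
    a toy analogue of the multivariate TPSA that Zlib implements."""
    def __init__(self, coeffs, nmax):
        self.nmax = nmax
        # pad/trim the coefficient list to exactly nmax+1 entries
        self.c = (list(coeffs) + [0.0] * (nmax + 1))[: nmax + 1]

    def __add__(self, other):
        return Tps([a + b for a, b in zip(self.c, other.c)], self.nmax)

    def __mul__(self, other):
        out = [0.0] * (self.nmax + 1)
        for i, a in enumerate(self.c):
            if a == 0.0:
                continue
            # truncation: keep only products whose total order fits
            for j in range(self.nmax + 1 - i):
                out[i + j] += a * other.c[j]
        return Tps(out, self.nmax)

# The variable x, carried to 4th order; (1 + x)^3 is truncated automatically.
x = Tps([0.0, 1.0], 4)
one = Tps([1.0], 4)
p = (one + x) * (one + x) * (one + x)
```

Because truncation happens inside every multiply, the cost per operation stays bounded by the order, which is what makes high-order map composition tractable; Zlib's OSIPs play the role of the simple `i + j` index arithmetic here, but for many variables.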

  4. A review on "A Novel Technique for Image Steganography Based on Block-DCT and Huffman Encoding"

    Science.gov (United States)

    Das, Rig; Tuithung, Themrichon

    2013-03-01

    This paper reviews the embedding and extraction algorithm proposed by A. Nag, S. Biswas, D. Sarkar and P. P. Sarkar in "A Novel Technique for Image Steganography based on Block-DCT and Huffman Encoding", International Journal of Computer Science and Information Technology, Volume 2, Number 3, June 2010 [3], and shows that extraction of the secret image is not possible for the algorithm proposed in [3]. An 8-bit cover image of size is divided into disjoint blocks and a two-dimensional Discrete Cosine Transform (2-D DCT) is performed on each of the blocks. Huffman encoding is performed on an 8-bit secret image of size, and each bit of the Huffman-encoded bit stream is embedded in the frequency domain by altering the LSB of the DCT coefficients of the cover image blocks. The Huffman Encoded Bit Stream and Huffman Table

  5. Measuring a truncated disk in Aquila X-1

    DEFF Research Database (Denmark)

    King, Ashley L.; Tomsick, John A.; Miller, Jon M.

    2016-01-01

    We present NuSTAR and Swift observations of the neutron star Aquila X-1 during the peak of its 2014 July outburst. The spectrum is soft with strong evidence for a broad Fe Kα line. Modeled with a relativistically broadened reflection model, we find that the inner disk is truncated with an inner r...

  6. A scalable community detection algorithm for large graphs using stochastic block models

    KAUST Repository

    Peng, Chengbin

    2017-11-24

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of

  8. Performance of multi-service system with retrials due to blocking and called-party-busy

    DEFF Research Database (Denmark)

    Stepanov, S.N.; Kokina, O.A.; Iversen, Villy Bæk

    2008-01-01

    In this paper we construct a model of a multi-service system with an arbitrary number of bandwidth flow demands, taking into account retrials due to both blocking along the route and to called-party-busy. An approximate algorithm for estimation of key performance measures is proposed, and the pro...

  9. A Bee Colony Optimization Approach for Mixed Blocking Constraints Flow Shop Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Mostafa Khorramizadeh

    2015-01-01

    Full Text Available Flow shop scheduling problems with mixed blocking constraints and makespan minimization are investigated. Taguchi orthogonal arrays and path relinking, along with some efficient local search methods, are used to develop a metaheuristic algorithm based on bee colony optimization. In order to compare the performance of the proposed algorithm, two well-known test problems are considered. Computational results show that the presented algorithm has comparable performance to well-known algorithms from the literature, especially for large-sized problems.
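Inside any such metaheuristic, each candidate permutation must be evaluated under the blocking constraints. As a hedged illustration, the sketch below evaluates the makespan under the classical no-buffer blocking constraint (one of the constraint types the mixed-blocking model generalizes), using the standard departure-time recursion; the processing-time data in the test are made up.

```python
def blocking_makespan(seq, p):
    """Makespan of a job permutation in a flow shop with no intermediate
    buffers (classical blocking): a job stays on its machine after finishing
    until the next machine is free.  p[job][machine] = processing time;
    d[j] = departure time of the current job from machine j (d[0] = entry)."""
    m = len(p[0])
    d_prev = [0.0] * (m + 1)          # departure times of the previous job
    for job in seq:
        d = [0.0] * (m + 1)
        d[0] = d_prev[1]              # may enter machine 1 once predecessor left it
        for j in range(1, m):
            # finish processing, but stay (blocked) until machine j+1 is free
            d[j] = max(d[j - 1] + p[job][j - 1], d_prev[j + 1])
        d[m] = d[m - 1] + p[job][m - 1]   # last machine never blocks
        d_prev = d
    return d_prev[m]
```

A bee colony (or any other) metaheuristic then simply searches over permutations, calling this evaluation for each candidate; the mixed-blocking variants of the paper replace the inner recursion with the appropriate constraint type per machine pair.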

  10. Exploring atmospheric blocking with GPS radio occultation observations

    Directory of Open Access Journals (Sweden)

    L. Brunner

    2016-04-01

    Full Text Available Atmospheric blocking has been closely investigated in recent years due to its impact on weather and climate, such as heat waves, droughts, and flooding. We use, for the first time, satellite-based observations from Global Positioning System (GPS) radio occultation (RO) and explore their ability to resolve blocking in order to potentially open up new avenues complementing models and reanalyses. RO delivers globally available and vertically highly resolved profiles of atmospheric variables such as temperature and geopotential height (GPH). Applying a standard blocking detection algorithm, we find that RO data robustly capture blocking, as demonstrated for two well-known blocking events over Russia in summer 2010 and over Greenland in late winter 2013. During blocking episodes, vertically resolved GPH gradients show a distinct anomalous behavior compared to climatological conditions up to 300 hPa and sometimes even further up into the tropopause. The accompanying increase in GPH of up to 300 m in the upper troposphere yields a pronounced tropopause height increase. Corresponding temperatures rise by up to 10 K in the middle and lower troposphere. These results demonstrate the feasibility and potential of RO to detect and resolve blocking and in particular to explore the vertical structure of the atmosphere during blocking episodes. This new observation-based view is available globally at the same quality, so that blocking in the Southern Hemisphere can also be studied with the same reliability as in the Northern Hemisphere.
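A standard blocking detection algorithm of the kind mentioned above tests meridional GPH gradients. The sketch below implements a Tibaldi-Molteni-style index; the reference latitudes (40/60/80°N), the -10 m/deg northern threshold, and the synthetic ridge field are typical illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def blocked_longitudes(z500, lats, lons, phi0=60.0, phi_s=40.0, phi_n=80.0):
    """Tibaldi-Molteni-style index: a longitude is 'blocked' when the
    meridional 500 hPa GPH gradient reverses south of phi0 (GHGS > 0)
    while staying strongly negative to the north (GHGN < -10 m/deg).
    z500[lat, lon] is geopotential height in meters."""
    i0 = np.argmin(np.abs(lats - phi0))
    i_s = np.argmin(np.abs(lats - phi_s))
    i_n = np.argmin(np.abs(lats - phi_n))
    ghgs = (z500[i0] - z500[i_s]) / (lats[i0] - lats[i_s])   # m per deg latitude
    ghgn = (z500[i_n] - z500[i0]) / (lats[i_n] - lats[i0])
    return (ghgs > 0.0) & (ghgn < -10.0)

# Toy field: zonal-mean GPH decreasing poleward, plus a ridge ("block") near 0 deg E.
lats = np.arange(30.0, 85.0, 5.0)
lons = np.arange(-180.0, 180.0, 5.0)
base = 5800.0 - 8.0 * (lats[:, None] - 30.0)          # poleward decrease
ridge = (400.0 * np.exp(-(lons[None, :] / 15.0) ** 2)
         * np.exp(-((lats[:, None] - 62.5) / 8.0) ** 2))
z500 = base + ridge
mask = blocked_longitudes(z500, lats, lons)
```

Applied to RO-derived GPH at successive pressure levels, the same index gives the vertically resolved picture of the blocking anticyclone that the record describes.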

  11. Causal analysis of ordinal treatments and binary outcomes under truncation by death.

    Science.gov (United States)

    Wang, Linbo; Richardson, Thomas S; Zhou, Xiao-Hua

    2017-06-01

    It is common that in multi-arm randomized trials, the outcome of interest is "truncated by death," meaning that it is only observed or well-defined conditioning on an intermediate outcome. In this case, in addition to pairwise contrasts, the joint inference for all treatment arms is also of interest. Under a monotonicity assumption we present methods for both pairwise and joint causal analyses of ordinal treatments and binary outcomes in presence of truncation by death. We illustrate via examples the appropriateness of our assumptions in different scientific contexts.

  12. Fusion events lead to truncation of FOS in epithelioid hemangioma of bone

    DEFF Research Database (Denmark)

    van IJzendoorn, David G P; de Jong, Danielle; Romagosa, Cleofe

    2015-01-01

    in exon 4 of the FOS gene and the fusion event led to the introduction of a stop codon. In all instances, the truncation of the FOS gene would result in the loss of the transactivation domain (TAD). Using FISH probes we found a break in the FOS gene in two additional cases, in none of these cases...... differential diagnosis of vascular tumors of bone. Our data suggest that the translocation causes truncation of the FOS protein, with loss of the TAD, which is thereby a novel mechanism involved in tumorigenesis....

  13. Full cycle rapid scan EPR deconvolution algorithm.

    Science.gov (United States)

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution, because the magnetic field passes the same EPR line twice per sinusoidal scan, during up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and postprocessed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment: the EPR signal must completely decay by the end of each half-scan so that it is not truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs equals the sum of the responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit; the transient spin system response must decay within the scan

  14. Automatic registration of remote sensing images based on SIFT and fuzzy block matching for change detection

    Directory of Open Access Journals (Sweden)

    Cai Guo-Rong

    2011-10-01

    Full Text Available This paper presents an automated image registration approach to detecting changes in multi-temporal remote sensing images. The proposed algorithm is based on the scale-invariant feature transform (SIFT) and has two phases. The first phase focuses on SIFT feature extraction and on estimation of the image transformation. In the second phase, the Structured Local Binary Haar Pattern (SLBHP) combined with a fuzzy similarity measure is then used to build a new and effective block similarity measure for change detection. Experimental results obtained on multi-temporal data sets show that, compared with three mainstream block matching algorithms, the proposed algorithm is more effective in dealing with scale, rotation and illumination changes.

  15. The truncated Wigner method for Bose-condensed gases: limits of validity and applications

    International Nuclear Information System (INIS)

    Sinatra, Alice; Lobo, Carlos; Castin, Yvan

    2002-01-01

    We study the truncated Wigner method applied to a weakly interacting spinless Bose-condensed gas which is perturbed away from thermal equilibrium by a time-dependent external potential. The principle of the method is to generate an ensemble of classical fields ψ(r) which samples the Wigner quasi-distribution function of the initial thermal equilibrium density operator of the gas, and then to evolve each classical field with the Gross-Pitaevskii equation. In the first part of the paper we improve the sampling technique over our previous work (Sinatra et al 2000 J. Mod. Opt. 47 2629-44) and we test its accuracy against the exactly solvable model of the ideal Bose gas. In the second part of the paper we investigate the conditions of validity of the truncated Wigner method. For short evolution times it is known that the time-dependent Bogoliubov approximation is valid for almost pure condensates. The requirement that the truncated Wigner method reproduces the Bogoliubov prediction leads to the constraint that the number of field modes in the Wigner simulation must be smaller than the number of particles in the gas. For longer evolution times the nonlinear dynamics of the noncondensed modes of the field plays an important role. To demonstrate this we analyse the case of a three-dimensional spatially homogeneous Bose-condensed gas and we test the ability of the truncated Wigner method to correctly reproduce the Beliaev-Landau damping of an excitation of the condensate. We have identified the mechanism which limits the validity of the truncated Wigner method: the initial ensemble of classical fields, driven by the time-dependent Gross-Pitaevskii equation, thermalizes to a classical field distribution at a temperature T_class which is larger than the initial temperature T of the quantum gas. When T_class significantly exceeds T a spurious damping is observed in the Wigner simulation. This leads to the second validity condition for the truncated Wigner method, T_class - T

  16. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to perform motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
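The block matching step can be sketched with a full-search sum-of-absolute-differences (SAD) matcher; this is a generic textbook formulation, with block size, search range, and the synthetic test frames assumed for illustration rather than taken from the paper.

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Full-search block matching: for each block of the current frame,
    find the motion vector (dy, dx) into the reference frame minimizing
    the sum of absolute differences (SAD)."""
    H, W = cur.shape
    vectors = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            blk = cur[by:by + block, bx:bx + block].astype(int)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue  # candidate block falls outside the frame
                    cand = ref[y:y + block, x:x + block].astype(int)
                    sad = np.abs(blk - cand).sum()
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            vectors[by // block, bx // block] = best_mv
    return vectors
```

In a multi-frame scheme, the same matcher is run against several previously recovered frames and the best match per block is kept, trading extra computation (mitigated here by the block search) for better motion compensation.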

  17. A SUZAKU OBSERVATION OF NGC 4593: ILLUMINATING THE TRUNCATED DISK

    International Nuclear Information System (INIS)

    Markowitz, A. G.; Reeves, J. N.

    2009-01-01

    We report results from a 2007 Suzaku observation of the Seyfert 1 AGN NGC 4593. The narrow Fe Kα emission line has a FWHM width ∼4000 km s⁻¹, indicating emission from ≳5000 R_g. There is no evidence for a relativistically broadened Fe K line, consistent with the presence of a radiatively efficient outer disk which is truncated or transitions to an interior radiatively inefficient flow. The Suzaku observation caught the source in a low-flux state; comparison to a 2002 XMM-Newton observation indicates that the hard X-ray flux decreased by a factor of 3.6, while the Fe Kα line intensity and width σ each roughly halved. Two model-dependent explanations for the changes in the Fe Kα line profile are explored. In one, the Fe Kα line width has decreased from ∼10,000 to ∼4000 km s⁻¹ from 2002 to 2007, suggesting that the thin-disk truncation/transition radius has increased from 1000-2000 to ≳5000 R_g. However, there are indications from other compact accreting systems that such truncation radii tend to be associated only with accretion rates relative to Eddington much lower than that of NGC 4593. In the second model, the line profile in the XMM-Newton observation consists of a time-invariant narrow component plus a broad component originating from the inner part of the truncated disk (∼300 R_g) which has responded to the drop in continuum flux. The Compton reflection component strength R is ∼1.1, consistent with the measured Fe Kα line total equivalent width with an Fe abundance 1.7 times the solar value. The modest soft excess, modeled well by either thermal bremsstrahlung emission or by Comptonization of soft seed photons in an optically thin plasma, has fallen by a factor of ∼20 from 2002 to 2007, ruling out emission from a region 5 lt-yr in size.

  18. Observation of the dispersion of wedge waves propagating along cylinder wedge with different truncations by laser ultrasound technique

    Science.gov (United States)

    Jia, Jing; Zhang, Yu; Han, Qingbang; Jing, Xueping

    2017-10-01

    This research studies the influence of truncations on the dispersion of wedge waves propagating along a cylinder wedge, using the laser ultrasound technique. Wedge waveguide models with different truncations were built using the finite element method (FEM). The dispersion curves were obtained using a 2D Fourier transformation method. Multiple wedge wave modes were observed, which agreed well with the results estimated from Lagasse's empirical formula. We established cylinder wedges with a radius of 3 mm, angles of 20° and 60°, and truncations of 0 μm, 5 μm, 10 μm, 20 μm, 30 μm, 40 μm, and 50 μm, respectively. It was found that a non-ideal wedge tip causes abnormal dispersion of the cylinder wedge modes: the modes of the 20° cylinder wedge present the characteristics of guided waves propagating along a hollow cylinder as the truncation increases. Meanwhile, the modes of the 60° cylinder wedge with truncations also exhibit the characteristics of guided waves propagating along a hollow cylinder, and its modes are observed clearly. The study can be used to evaluate and detect wedge structures.

  19. On the Truncated Pareto Distribution with applications

    OpenAIRE

    Zaninetti, Lorenzo; Ferraro, Mario

    2008-01-01

    The Pareto probability distribution is widely applied in different fields such as finance, physics, hydrology, geology and astronomy. This note deals with an application of the Pareto distribution to astrophysics, and more precisely to the statistical analysis of the masses of stars and of the diameters of asteroids. In particular, a comparison between the usual Pareto distribution and its truncated version is presented. Finally a possible physical mechanism that produces Pareto tails for the distributio...
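As a toy illustration of the truncated Pareto distribution discussed in this record, the sketch below implements its pdf, cdf, and an inverse-CDF sampler on an interval [lo, hi]. These are the standard textbook formulas; the function and parameter names are assumptions.

```python
import random

def trunc_pareto_pdf(x, alpha, lo, hi):
    """pdf of a Pareto(alpha, scale=lo) distribution truncated to [lo, hi]."""
    if not lo <= x <= hi:
        return 0.0
    norm = 1.0 - (lo / hi) ** alpha          # mass the plain Pareto puts on [lo, hi]
    return alpha * lo ** alpha * x ** (-alpha - 1.0) / norm

def trunc_pareto_cdf(x, alpha, lo, hi):
    """cdf of the truncated Pareto; equals 0 at lo and 1 at hi."""
    norm = 1.0 - (lo / hi) ** alpha
    return (1.0 - (lo / x) ** alpha) / norm

def trunc_pareto_sample(alpha, lo, hi, rng=random):
    """Inverse-CDF sampling: solve F(x) = u for a uniform u in [0, 1)."""
    u = rng.random()
    norm = 1.0 - (lo / hi) ** alpha
    return lo * (1.0 - u * norm) ** (-1.0 / alpha)
```

Unlike the plain Pareto, all moments of the truncated version are finite, which is what makes it attractive for fitting stellar masses and asteroid diameters.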

  20. On the propagation of truncated localized waves in dispersive silica

    KAUST Repository

    Salem, Mohamed; Bagci, Hakan

    2010-01-01

    Propagation characteristics of truncated Localized Waves propagating in dispersive silica and free space are numerically analyzed. It is shown that those characteristics are affected by the changes in the relation between the transverse spatial

  1. Solving the Telegraph and Oscillatory Differential Equations by a Block Hybrid Trigonometrically Fitted Algorithm

    Directory of Open Access Journals (Sweden)

    F. F. Ngwane

    2015-01-01

    Full Text Available We propose a block hybrid trigonometrically fitted (BHT) method, whose coefficients are functions of the frequency and the step-size, for directly solving general second-order initial value problems (IVPs), including systems arising from the semidiscretization of hyperbolic Partial Differential Equations (PDEs), such as the Telegraph equation. The BHT is formulated from eight discrete hybrid formulas which are provided by a continuous two-step hybrid trigonometrically fitted method with two off-grid points. The BHT is implemented in a block-by-block fashion; in this way, the method does not suffer from the disadvantages of requiring starting values and predictors which are inherent in predictor-corrector methods. The stability property of the BHT is discussed and the performance of the method is demonstrated on some numerical examples to show its accuracy and efficiency advantages.

  2. Digestion proteomics: tracking lactoferrin truncation and peptide release during simulated gastric digestion.

    Science.gov (United States)

    Grosvenor, Anita J; Haigh, Brendan J; Dyer, Jolon M

    2014-11-01

    The extent to which nutritional and functional benefit is derived from proteins in food is related to their breakdown and digestion in the body after consumption. Further, detailed information about food protein truncation during digestion is critical to understanding and optimising the availability of bioactives, in controlling and limiting allergen release, and in minimising or monitoring the effects of processing and food preparation. However, tracking the complex array of products formed during the digestion of proteins is not easily accomplished using classical proteomics. Here we present and develop a novel proteomic approach using isobaric labelling to map and track protein truncation and peptide release during simulated gastric digestion, using bovine lactoferrin as a model food protein. The relative abundance of related peptides was tracked throughout a digestion time course, and the effect of pasteurisation on peptide release was assessed. The new approach to food digestion proteomics developed here therefore appears to be highly suitable not only for tracking the truncation and relative abundance of released peptides during gastric digestion, but also for determining the effects of protein modification on digestibility and potential bioavailability.

  3. Design of Packet-Based Block Codes with Shift Operators

    Directory of Open Access Journals (Sweden)

    Jacek Ilow

    2010-01-01

    Full Text Available This paper introduces packet-oriented block codes for the recovery of lost packets and the correction of an erroneous single packet. Specifically, a family of systematic codes is proposed, based on a Vandermonde matrix applied to a group of k information packets to construct r redundant packets, where the elements of the Vandermonde matrix are bit-level right arithmetic shift operators. The code design is applicable to packets of any size, provided that the packets within a block of k information packets are of uniform length. In order to decrease the overhead associated with packet padding using shift operators, non-Vandermonde matrices are also proposed for designing packet-oriented block codes. An efficient matrix inversion procedure for the off-line design of the decoding algorithm is presented to recover lost packets. The error correction capability of the design is investigated as well. The decoding algorithm, based on syndrome decoding, to correct a single erroneous packet in a group of n=k+r received packets is presented. The paper is equipped with examples of codes using different parameters. The code designs and their performance are tested using Monte Carlo simulations; the results obtained exhibit good agreement with the corresponding theoretical results.
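A much-simplified sketch of the erasure-recovery idea in this record: with a single redundant packet (the degenerate r = 1 case, where the shift operators reduce to the identity), the parity is a plain bitwise XOR and any one lost packet can be recovered. The general design in the paper uses bit-level arithmetic shifts and a Vandermonde structure; this toy version only conveys the flavor, and the function names are assumptions.

```python
def make_parity(packets):
    """Build one redundant packet as the bitwise XOR of k equal-length packets."""
    assert len({len(p) for p in packets}) == 1, "packets must have uniform length"
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover_lost(received, parity):
    """Recover a single lost packet: XOR the parity with all surviving packets.
    Works because x ^ x = 0, so the surviving packets cancel out of the parity."""
    lost = bytearray(parity)
    for pkt in received:
        for i, byte in enumerate(pkt):
            lost[i] ^= byte
    return bytes(lost)
```

Supporting r > 1 losses is exactly where the Vandermonde structure and the matrix-inversion procedure described in the abstract come in.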

  4. Versatility of the CFR algorithm for limited angle reconstruction

    International Nuclear Information System (INIS)

    Fujieda, I.; Heiskanen, K.; Perez-Mendez, V.

    1990-01-01

    The constrained Fourier reconstruction (CFR) algorithm and the iterative reconstruction-reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT, and for radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant.

  5. The Most Developmentally Truncated Fishes Show Extensive Hox Gene Loss and Miniaturized Genomes

    Science.gov (United States)

    Malmstrøm, Martin; Britz, Ralf; Matschiner, Michael; Tørresen, Ole K; Hadiaty, Renny Kurnia; Yaakob, Norsham; Tan, Heok Hui; Jakobsen, Kjetill Sigurd; Salzburger, Walter; Rüber, Lukas

    2018-01-01

    The world’s smallest fishes belong to the genus Paedocypris. These miniature fishes are endemic to an extreme habitat: the peat swamp forests in Southeast Asia, characterized by highly acidic blackwater. This threatened habitat is home to a large array of fishes, including a number of miniaturized but also developmentally truncated species. Especially the genus Paedocypris is characterized by profound, organism-wide developmental truncation, resulting in sexually mature individuals of <8 mm in length with a larval phenotype. Here, we report on evolutionary simplification in the genomes of two species of the dwarf minnow genus Paedocypris using whole-genome sequencing. The two species feature unprecedented Hox gene loss and genome reduction in association with their massive developmental truncation. We also show how other genes involved in the development of musculature, nervous system, and skeleton have been lost in Paedocypris, mirroring its highly progenetic phenotype. Further, our analyses suggest two mechanisms responsible for the genome streamlining in Paedocypris in relation to other Cypriniformes: severe intron shortening and reduced repeat content. As the first report on the genomic sequence of a vertebrate species with organism-wide developmental truncation, the results of our work enhance our understanding of genome evolution and how genotypes are translated to phenotypes. In addition, as a naturally simplified system closely related to zebrafish, Paedocypris provides novel insights into vertebrate development. PMID:29684203

  6. The Most Developmentally Truncated Fishes Show Extensive Hox Gene Loss and Miniaturized Genomes.

    Science.gov (United States)

    Malmstrøm, Martin; Britz, Ralf; Matschiner, Michael; Tørresen, Ole K; Hadiaty, Renny Kurnia; Yaakob, Norsham; Tan, Heok Hui; Jakobsen, Kjetill Sigurd; Salzburger, Walter; Rüber, Lukas

    2018-04-01

    The world's smallest fishes belong to the genus Paedocypris. These miniature fishes are endemic to an extreme habitat: the peat swamp forests in Southeast Asia, characterized by highly acidic blackwater. This threatened habitat is home to a large array of fishes, including a number of miniaturized but also developmentally truncated species. Especially the genus Paedocypris is characterized by profound, organism-wide developmental truncation, resulting in sexually mature individuals of <8 mm in length with a larval phenotype. Here, we report on evolutionary simplification in the genomes of two species of the dwarf minnow genus Paedocypris using whole-genome sequencing. The two species feature unprecedented Hox gene loss and genome reduction in association with their massive developmental truncation. We also show how other genes involved in the development of musculature, nervous system, and skeleton have been lost in Paedocypris, mirroring its highly progenetic phenotype. Further, our analyses suggest two mechanisms responsible for the genome streamlining in Paedocypris in relation to other Cypriniformes: severe intron shortening and reduced repeat content. As the first report on the genomic sequence of a vertebrate species with organism-wide developmental truncation, the results of our work enhance our understanding of genome evolution and how genotypes are translated to phenotypes. In addition, as a naturally simplified system closely related to zebrafish, Paedocypris provides novel insights into vertebrate development.

  7. Identification of a functionally distinct truncated BDNF mRNA splice variant and protein in Trachemys scripta elegans.

    Directory of Open Access Journals (Sweden)

    Ganesh Ambigapathy

    Full Text Available Brain-derived neurotrophic factor (BDNF) has a diverse functional role and complex pattern of gene expression. Alternative splicing of mRNA transcripts leads to further diversity of mRNAs and protein isoforms. Here, we describe the regulation of BDNF mRNA transcripts in an in vitro model of eyeblink classical conditioning and a unique transcript that forms a functionally distinct truncated BDNF protein isoform. Nine different mRNA transcripts from the BDNF gene of the pond turtle Trachemys scripta elegans (tBDNF) are selectively regulated during classical conditioning: exon I mRNA transcripts show no change, exon II transcripts are downregulated, while exon III transcripts are upregulated. One unique transcript that codes from exon II, tBDNF2a, contains a 40 base pair deletion in the protein coding exon that generates a truncated tBDNF protein. The truncated transcript and protein are expressed in the naïve untrained state and are fully repressed during conditioning when full-length mature tBDNF is expressed, thereby having an alternate pattern of expression in conditioning. Truncated BDNF is not restricted to turtles as a truncated mRNA splice variant has been described for the human BDNF gene. Further studies are required to determine the ubiquity of truncated BDNF alternative splice variants across species and the mechanisms of regulation and function of this newly recognized BDNF protein.

  8. Identification of a functionally distinct truncated BDNF mRNA splice variant and protein in Trachemys scripta elegans.

    Science.gov (United States)

    Ambigapathy, Ganesh; Zheng, Zhaoqing; Li, Wei; Keifer, Joyce

    2013-01-01

    Brain-derived neurotrophic factor (BDNF) has a diverse functional role and complex pattern of gene expression. Alternative splicing of mRNA transcripts leads to further diversity of mRNAs and protein isoforms. Here, we describe the regulation of BDNF mRNA transcripts in an in vitro model of eyeblink classical conditioning and a unique transcript that forms a functionally distinct truncated BDNF protein isoform. Nine different mRNA transcripts from the BDNF gene of the pond turtle Trachemys scripta elegans (tBDNF) are selectively regulated during classical conditioning: exon I mRNA transcripts show no change, exon II transcripts are downregulated, while exon III transcripts are upregulated. One unique transcript that codes from exon II, tBDNF2a, contains a 40 base pair deletion in the protein coding exon that generates a truncated tBDNF protein. The truncated transcript and protein are expressed in the naïve untrained state and are fully repressed during conditioning when full-length mature tBDNF is expressed, thereby having an alternate pattern of expression in conditioning. Truncated BDNF is not restricted to turtles as a truncated mRNA splice variant has been described for the human BDNF gene. Further studies are required to determine the ubiquity of truncated BDNF alternative splice variants across species and the mechanisms of regulation and function of this newly recognized BDNF protein.

  9. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning.

    Science.gov (United States)

    Mizutani, Eiji; Demmel, James W

    2003-01-01

    This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we shall explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).

  10. Theoretical analysis of balanced truncation for linear switched systems

    DEFF Research Database (Denmark)

    Petreczky, Mihaly; Wisniewski, Rafal; Leth, John-Josef

    2012-01-01

    In this paper we present theoretical analysis of model reduction of linear switched systems based on balanced truncation, presented in [1,2]. More precisely, (1) we provide a bound on the estimation error using the L2 gain, and (2) we provide a system theoretic interpretation of grammians and their singular values [...]. The main tool for showing this independence is realization theory of linear switched systems. [1] H. R. Shaker and R. Wisniewski, "Generalized gramian framework for model/controller order reduction of switched systems", International Journal of Systems Science, Vol. 42, Issue 8, 2011, 1277-1291. [2] H. R. Shaker and R. Wisniewski, "Switched Systems Reduction Framework Based on Convex Combination of Generalized Gramians", Journal of Control Science and Engineering, 2009.

  11. Propagation of coherently combined truncated laser beam arrays with beam distortions in non-Kolmogorov turbulence.

    Science.gov (United States)

    Tao, Rumao; Si, Lei; Ma, Yanxing; Zhou, Pu; Liu, Zejin

    2012-08-10

    The propagation properties of coherently combined truncated laser beam arrays with beam distortions through non-Kolmogorov turbulence are studied in detail both analytically and numerically. The analytical expressions for the average intensity and the beam width of coherently combined truncated laser beam arrays with beam distortions propagating through turbulence are derived based on the combination of statistical optics methods and the extended Huygens-Fresnel principle. The effect of beam distortions, such as amplitude modulation and phase fluctuation, is studied by numerical examples. The numerical results reveal that phase fluctuations have significant influence on the spreading of coherently combined truncated laser beam arrays in non-Kolmogorov turbulence, and the effects of the phase fluctuations can be negligible as long as the phase fluctuations are controlled under a certain level, i.e., a>0.05 for the situation considered in the paper. Furthermore, large phase fluctuations can convert the beam distribution rapidly to a Gaussian form, vary the spreading, weaken the optimum truncation effects, and suppress the dependence of spreading on the parameters of the non-Kolmogorov turbulence.

  12. Analog Circuit Design Optimization Based on Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Mansour Barari

    2014-01-01

    Full Text Available This paper investigates an evolutionary-based design system for automated sizing of analog integrated circuits (ICs). Two evolutionary algorithms, a genetic algorithm and particle swarm optimization (PSO), are proposed to design analog ICs with practical user-defined specifications. On the basis of the combination of HSPICE and MATLAB, the system links circuit performances, evaluated through specific electrical simulation, to the optimization system in the MATLAB environment, for the selected topology. The system has been tested on typical and hard-to-design cases, such as complex analog blocks with stringent design requirements. The results show that the design specifications are closely met. Comparisons with available methods like genetic algorithms show that the proposed algorithm offers important advantages in terms of optimization quality and robustness. Moreover, the algorithm is shown to be efficient.
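A minimal, generic PSO loop of the kind used in such sizing systems can be sketched as follows. Here a simple analytic objective stands in for the HSPICE-evaluated circuit performance, and all parameter values (inertia, acceleration coefficients, bounds) are illustrative assumptions, not the paper's settings.

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=1):
    """Minimal particle swarm optimizer (global-best topology, clamped positions)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])                  # in a sizing flow: run the simulator
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In an actual sizing system the `objective` call is the expensive part: it writes device dimensions into a netlist, runs the simulator, and scores the measured specs.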

  13. Solving Schwinger-Dyson equations by truncation in zero-dimensional scalar quantum field theory

    International Nuclear Information System (INIS)

    Okopinska, A.

    1991-01-01

    Three sets of Schwinger-Dyson equations, for all Green's functions, for connected Green's functions, and for proper vertices, are considered in scalar quantum field theory. A truncation scheme applied to the three sets gives three different approximation series for Green's functions. For the theory in zero-dimensional space-time the results for respective two-point Green's functions are compared with the exact value calculated numerically. The best convergence of the truncation scheme is obtained for the case of proper vertices

  14. A Globally Convergent Parallel SSLE Algorithm for Inequality Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Zhijun Luo

    2014-01-01

    Full Text Available A new parallel variable distribution algorithm based on an interior point SSLE algorithm is proposed for solving inequality constrained optimization problems in which the constraints are block-separable, using the technique of a sequential system of linear equations. Each iteration of this algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.

  15. A Voxel-Based Filtering Algorithm for Mobile LiDAR Data

    Science.gov (United States)

    Qin, H.; Guan, G.; Yu, Y.; Zhong, L.

    2018-04-01

    This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points are first partitioned, in the xy-plane, into a set of two-dimensional (2-D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward-growing process is performed to roughly separate terrain from non-terrain points with global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
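The first-step partitioning into voxels can be sketched generically. The toy version below (not the authors' code; names are assumptions) groups points by integer grid indices and finds the lowest occupied voxel in each vertical column, a crude seed set for an upward-growing terrain pass.

```python
from collections import defaultdict
from math import floor

def voxelize(points, voxel_size):
    """Group 3-D points into voxels keyed by integer grid indices (ix, iy, iz)."""
    voxels = defaultdict(list)
    for x, y, z in points:
        key = (floor(x / voxel_size), floor(y / voxel_size), floor(z / voxel_size))
        voxels[key].append((x, y, z))
    return voxels

def lowest_voxels(voxels):
    """For each (ix, iy) column, the index of the lowest occupied voxel.
    Upward growing would start from these voxels and accept neighbors whose
    height increase stays under the local terrain threshold."""
    lowest = {}
    for (ix, iy, iz) in voxels:
        if (ix, iy) not in lowest or iz < lowest[(ix, iy)]:
            lowest[(ix, iy)] = iz
    return lowest
```

The 2-D block partition described in the abstract is the same idea applied to (x, y) only, wrapped around this per-block voxelization for memory locality.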

  16. PDES, Fips Standard Data Encryption Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Nessett, D N [Lawrence Livermore National Laboratory (United States)

    1991-03-26

    Description of program or function: PDES performs the National Bureau of Standards FIPS Pub. 46 data encryption/decryption algorithm used for the cryptographic protection of computer data. The DES algorithm is designed to encipher and decipher blocks of data consisting of 64 bits under control of a 64-bit key. The key is generated in such a way that each of the 56 bits used directly by the algorithm are random and the remaining 8 error-detecting bits are set to make the parity of each 8-bit byte of the key odd, i.e., there is an odd number of '1' bits in each 8-bit byte. Each member of a group of authorized users of encrypted computer data must have the key that was used to encipher the data in order to use it. Data can be recovered from cipher only by using exactly the same key used to encipher it, but with the schedule of addressing the key bits altered so that the deciphering process is the reverse of the enciphering process. A block of data to be enciphered is subjected to an initial permutation, then to a complex key-dependent computation, and finally to a permutation which is the inverse of the initial permutation. Two PDES routines are included; both perform the same calculation. One, identified as FDES.MAR, is designed to achieve speed in execution, while the other, identified as PDES.MAR, presents a clearer view of how the algorithm is executed.
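The odd-parity key convention described above is easy to illustrate. The sketch below is mine, not part of PDES: it forces the low (parity) bit of each key byte so that every 8-bit byte contains an odd number of '1' bits, as FIPS Pub. 46 requires.

```python
def odd_parity_byte(b):
    """Set the low bit of a byte so the byte has an odd number of 1 bits."""
    seven = b & 0xFE                       # keep the 7 key bits, clear the parity bit
    ones = bin(seven).count("1")
    return seven | (0 if ones % 2 == 1 else 1)

def with_odd_parity(key):
    """Apply the DES odd-parity convention to each byte of an 8-byte key."""
    assert len(key) == 8, "DES keys are 64 bits (8 bytes)"
    return bytes(odd_parity_byte(b) for b in key)

def parity_ok(key):
    """Check that every byte of the key has odd parity."""
    return all(bin(b).count("1") % 2 == 1 for b in key)
```

Only 56 of the 64 key bits enter the cipher itself; the 8 parity bits exist purely so a corrupted or mistyped key can be detected before use.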

  17. PDES, Fips Standard Data Encryption Algorithm

    International Nuclear Information System (INIS)

    Nessett, D.N.

    1991-01-01

    Description of program or function: PDES performs the National Bureau of Standards FIPS Pub. 46 data encryption/decryption algorithm used for the cryptographic protection of computer data. The DES algorithm is designed to encipher and decipher blocks of data consisting of 64 bits under control of a 64-bit key. The key is generated in such a way that each of the 56 bits used directly by the algorithm are random and the remaining 8 error-detecting bits are set to make the parity of each 8-bit byte of the key odd, i.e., there is an odd number of '1' bits in each 8-bit byte. Each member of a group of authorized users of encrypted computer data must have the key that was used to encipher the data in order to use it. Data can be recovered from cipher only by using exactly the same key used to encipher it, but with the schedule of addressing the key bits altered so that the deciphering process is the reverse of the enciphering process. A block of data to be enciphered is subjected to an initial permutation, then to a complex key-dependent computation, and finally to a permutation which is the inverse of the initial permutation. Two PDES routines are included; both perform the same calculation. One, identified as FDES.MAR, is designed to achieve speed in execution, while the other, identified as PDES.MAR, presents a clearer view of how the algorithm is executed.

  18. Fractional Fourier domain optical image hiding using phase retrieval algorithm based on iterative nonlinear double random phase encoding.

    Science.gov (United States)

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2014-09-22

    We present a novel image hiding method based on phase retrieval algorithm under the framework of nonlinear double random phase encoding in fractional Fourier domain. Two phase-only masks (POMs) are efficiently determined by using the phase retrieval algorithm, in which two cascaded phase-truncated fractional Fourier transforms (FrFTs) are involved. No undesired information disclosure, post-processing of the POMs or digital inverse computation appears in our proposed method. In order to achieve the reduction in key transmission, a modified image hiding method based on the modified phase retrieval algorithm and logistic map is further proposed in this paper, in which the fractional orders and the parameters with respect to the logistic map are regarded as encryption keys. Numerical results have demonstrated the feasibility and effectiveness of the proposed algorithms.

  19. Cross-layer combining of adaptive modulation and truncated ARQ under cognitive radio resource requirements

    KAUST Repository

    Yang, Yuli; Ma, Hao; Aïssa, Sonia

    2012-01-01

    In addressing the issue of taking full advantage of the shared spectrum under imposed limitations in a cognitive radio (CR) network, we exploit a cross-layer design for the communications of secondary users (SUs), which combines adaptive modulation and coding (AMC) at the physical layer with truncated automatic repeat request (ARQ) protocol at the data link layer. To achieve high spectral efficiency (SE) while maintaining a target packet loss probability (PLP), switching among different transmission modes is performed to match the time-varying propagation conditions pertaining to the secondary link. Herein, by minimizing the SU's packet error rate (PER) with each transmission mode subject to the spectrum-sharing constraints, we obtain the optimal power allocation at the secondary transmitter (ST) and then derive the probability density function (pdf) of the received SNR at the secondary receiver (SR). Based on these statistics, the SU's packet loss rate and average SE are obtained in closed form, considering transmissions over block-fading channels with different distributions. Our results quantify the relation between the performance of a secondary link exploiting the cross-layer-designed adaptive transmission and the interference inflicted on the primary user (PU) in CR networks. © 1967-2012 IEEE.

  20. Optimal auxiliary Hamiltonians for truncated boson-space calculations by means of a maximal-decoupling variational principle

    International Nuclear Information System (INIS)

    Li, C.

    1991-01-01

    A new method based on a maximal-decoupling variational principle is proposed to treat the Pauli-principle constraints for calculations of nuclear collective motion in a truncated boson space. The viability of the method is demonstrated through an application to the multipole form of boson Hamiltonians for the single-j and nondegenerate multi-j pairing interactions. While these boson Hamiltonians are Hermitian and contain only one- and two-boson terms, they are also the worst case for truncated boson-space calculations because they are not amenable to any boson truncations at all. By using auxiliary Hamiltonians optimally determined by the maximal-decoupling variational principle, however, truncations in the boson space become feasible and even yield reasonably accurate results. The method proposed here may thus be useful for doing realistic calculations of nuclear collective motion as well as for obtaining a viable interacting-boson-model type of boson Hamiltonian from the shell model

  1. The truncated conjugate gradient (TCG), a non-iterative/fixed-cost strategy for computing polarization in molecular dynamics: Fast evaluation of analytical forces

    Science.gov (United States)

    Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip

    2017-10-01

    In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists in truncating the conjugate gradient algorithm at a fixed predetermined order leading to a fixed computational cost and can thus be considered "non-iterative." This gives the possibility to derive analytical forces avoiding the usual energy conservation (i.e., drifts) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than that with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings using usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making it a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.
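The core idea, truncating conjugate gradient at a fixed, predetermined order, can be sketched in a few lines. This is a generic dense-matrix toy (a small symmetric positive-definite system standing in for the polarization equations), not the TCG implementation described in the paper.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def tcg(A, b, order):
    """Conjugate gradient truncated after a fixed number of iterations.
    The operation count is deterministic, so analytical derivatives of the
    result can be formed, unlike a convergence-threshold iterative solver."""
    x = [0.0] * len(b)
    r = b[:]                               # residual b - A x0 with x0 = 0
    p = r[:]
    for _ in range(order):
        if dot(r, r) == 0.0:               # already exact; cost stays bounded
            break
        Ap = matvec(A, p)
        alpha = dot(r, r) / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r_new = [ri - alpha * api for ri, api in zip(r, Ap)]
        beta = dot(r_new, r_new) / dot(r, r)
        p = [rn + beta * pi for rn, pi in zip(r_new, p)]
        r = r_new
    return x
```

Because the truncation order is fixed (TCG-1, TCG-2, ...), the map from right-hand side to solution is an explicit polynomial in A, which is what makes the analytical forces of the paper tractable and drift-free.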

  2. Learning Mixtures of Truncated Basis Functions from Data

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Pérez-Bernabé, Inmaculada

    2014-01-01

    In this paper we investigate methods for learning hybrid Bayesian networks from data. First we utilize a kernel density estimate of the data in order to translate the data into a mixture of truncated basis functions (MoTBF) representation using a convex optimization technique. When utilizing a kernel density estimate [...], we also propose an alternative learning method that relies on the cumulative distribution function of the data. Empirical results demonstrate the usefulness of the approaches: Even though the methods produce estimators that are slightly poorer than the state of the art (in terms of log-likelihood), they are significantly faster, and therefore indicate that the MoTBF framework can be used for inference and learning in reasonably sized domains. Furthermore, we show how a particular sub-class of MoTBF potentials (learnable by the proposed methods) can be exploited to significantly reduce complexity during inference.

  3. Determination of αS from scaling violations of truncated moments of structure functions

    International Nuclear Information System (INIS)

    Forte, Stefano; Latorre, J.I.; Magnea, Lorenzo; Piccione, Andrea

    2002-01-01

    We determine the strong coupling αS(MZ) from scaling violations of truncated moments of the nonsinglet deep inelastic structure function F2. Truncated moments are determined from BCDMS and NMC data using a neural network parametrization which retains the full experimental information on errors and correlations. Our method minimizes all sources of theoretical uncertainty and bias which characterize extractions of αS from scaling violations. We obtain αS(MZ) = 0.124 +0.004/−0.007 (exp.) +0.003/−0.004 (th.)

  4. Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm

    Directory of Open Access Journals (Sweden)

    Man Zhang

    2017-10-01

    Full Text Available Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, the robustness of image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), declines when strong motion errors are involved in the coarse-focused image. In this case, in order to capture the complete motion blurring function within each image block, both the block size and the overlap must be extended, inevitably degrading efficiency and robustness. Herein, a frequency-domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA compensates the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber-domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a coarse-resolution sub-aperture image via the back-projection integral. Then, the sub-aperture images are fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By dispensing with the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed algorithm.

  5. Scavenger receptor AI/II truncation, lung function and COPD

    DEFF Research Database (Denmark)

    Thomsen, M; Nordestgaard, B G; Tybjaerg-Hansen, A

    2011-01-01

    The scavenger receptor A-I/II (SRA-I/II) on alveolar macrophages is involved in recognition and clearance of modified lipids and inhaled particulates. A rare variant of the SRA-I/II gene, Arg293X, truncates the distal collagen-like domain, which is essential for ligand recognition. We tested whet...

  6. Auto-tuning Non-blocking Collective Communication Operations

    KAUST Repository

    Barigou, Youcef; Venkatesan, Vishwanath; Gabriel, Edgar

    2015-01-01

    Collective operations are widely used in large scale scientific applications, and critical to the scalability of these applications for large process counts. It has also been demonstrated that collective operations have to be carefully tuned for a given platform and application scenario to maximize their performance. Non-blocking collective operations extend the concept of collective operations by offering the additional benefit of being able to overlap communication and computation. This paper presents the automatic run-time tuning of non-blocking collective communication operations, which allows the communication library to choose the best performing implementation for a non-blocking collective operation on a case by case basis. The paper demonstrates that libraries using a single algorithm or implementation for a non-blocking collective operation will inevitably lead to suboptimal performance in many scenarios, and thus validate the necessity for run-time tuning of these operations. The benefits of the approach are further demonstrated for an application kernel using a multi-dimensional Fast Fourier Transform. The results obtained for the application scenario indicate a performance improvement of up to 40% compared to the current state of the art.

  8. Novel Quantum Encryption Algorithm Based on Multiqubit Quantum Shift Register and Hill Cipher

    International Nuclear Information System (INIS)

    Khalaf, Rifaat Zaidan; Abdullah, Alharith Abdulkareem

    2014-01-01

    Based on a quantum shift register, a novel quantum block cryptographic algorithm that can be used to encrypt classical messages is proposed. The message is encoded and decoded by using a code generated by the quantum shift register. The security of this algorithm is analysed in detail. It is shown that, in the quantum block cryptographic algorithm, two keys can be used. One of them is the classical key that is used in the Hill cipher algorithm, where Alice and Bob use the authenticated Diffie-Hellman key exchange algorithm, employing the concept of digital signature for the authentication of the two communicating parties and thus eliminating the man-in-the-middle attack. The other key is generated by the quantum shift register and used for the coding of the encryption message, where Alice and Bob share the key by using the BB84 protocol. The novel algorithm can prevent a quantum attack strategy as well as a classical attack strategy. The problem of key management is discussed and circuits for the encryption and the decryption are suggested.
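    For reference, the classical Hill cipher component works as follows: plaintext blocks are multiplied by a key matrix modulo 26, and decryption uses the modular inverse of that matrix. A minimal sketch (uppercase A-Z only, message length a multiple of the block size; the quantum shift register and key exchange parts are not modeled):

```python
import numpy as np

A_ORD = ord('A')

def hill_encrypt(text, K):
    """Classical Hill cipher: multiply each block by K modulo 26."""
    n = K.shape[0]
    nums = [ord(c) - A_ORD for c in text]
    out = []
    for i in range(0, len(nums), n):
        block = np.array(nums[i:i + n])
        out.extend((K @ block) % 26)
    return ''.join(chr(A_ORD + int(v)) for v in out)

def hill_decrypt(cipher, K):
    """Decrypt by applying the inverse of K modulo 26.

    Requires gcd(det K, 26) == 1 for the modular inverse to exist.
    """
    det = int(round(np.linalg.det(K)))
    det_inv = pow(det % 26, -1, 26)                      # modular inverse of det
    adj = np.round(det * np.linalg.inv(K)).astype(int)   # adjugate matrix
    K_inv = (det_inv * adj) % 26
    return hill_encrypt(cipher, K_inv)
```

With the textbook key K = [[3, 3], [2, 5]], encrypting "HELP" gives "HIAT", and decryption recovers the plaintext.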

  9. Cooperative Orthogonal Space-Time-Frequency Block Codes over a MIMO-OFDM Frequency Selective Channel

    Directory of Open Access Journals (Sweden)

    M. Rezaei

    2016-03-01

    Full Text Available In this paper, a cooperative algorithm to improve the orthogonal space-time-frequency block codes (OSTFBC) in frequency-selective channels for 2*1, 2*2, 4*1, 4*2 MIMO-OFDM systems is presented. The algorithm involves three nodes, a source node, a relay node and a destination node, and is implemented in two stages. During the first stage, the destination and the relay antennas receive the symbols sent by the source antennas. The destination node and the relay node obtain the decision variables from the received signals by employing the space-time-frequency decoding process. During the second stage, the relay node transmits its decision variables to the destination node. Because the proposed algorithm increases diversity, the decision variables at the destination node are augmented, improving system performance. The bit error rate of the proposed algorithm at high SNR is estimated considering BPSK modulation. The simulation results show that cooperative orthogonal space-time-frequency block coding improves system performance and reduces the BER in a frequency-selective channel.

  10. Motion Vector Estimation Using Line-Square Search Block Matching Algorithm for Video Sequences

    Directory of Open Access Journals (Sweden)

    Guo Bao-long

    2004-09-01

    Full Text Available Motion estimation and compensation techniques are widely used for video coding applications, but real-time motion estimation is not easily achieved due to its enormous computations. In this paper, a new fast motion estimation algorithm based on line search is presented, in which computational complexity is greatly reduced by using the line search strategy and a parallel search pattern. Moreover, accurate search is achieved because a small square search pattern is used. The algorithm has a best-case scenario of only 9 search points, which is 4 search points fewer than the diamond search algorithm. Simulation results show that, compared with previous techniques, the LSPS algorithm significantly reduces the computational requirements for finding motion vectors, while producing close performance in terms of motion compensation errors.
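    As a baseline for comparison, exhaustive full-search block matching evaluates every candidate displacement within a search range using a sum-of-absolute-differences (SAD) cost; fast algorithms such as diamond or line-square search visit only a small subset of these points. A minimal full-search sketch (this is the brute-force reference, not the LSPS algorithm itself):

```python
import numpy as np

def full_search_sad(ref, cur, top, left, bsize=8, srange=7):
    """Exhaustive block matching with SAD cost.

    Returns the motion vector (dy, dx) minimizing the sum of absolute
    differences between the current block at (top, left) and candidate
    blocks in the reference frame within +/- srange pixels.
    """
    block = cur[top:top + bsize, left:left + bsize].astype(int)
    best, best_mv = None, (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = top + dy, left + dx
            # skip candidates that fall outside the reference frame
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            cand = ref[y:y + bsize, x:x + bsize].astype(int)
            sad = np.abs(block - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```

Full search checks (2*srange+1)^2 = 225 points here; the 9-point best case quoted above shows why pattern-based searches are so much cheaper.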

  11. A Probabilistic Analysis of the Nxt Forging Algorithm

    Directory of Open Access Journals (Sweden)

    Serguei Popov

    2016-12-01

    Full Text Available We discuss the forging algorithm of Nxt from a probabilistic point of view, and obtain explicit formulas and estimates for several important quantities, such as the probability that an account generates a block, the length of the longest sequence of consecutive blocks generated by one account, and the probability that one concurrent blockchain wins over another one. Also, we discuss some attack vectors related to splitting an account into many smaller ones.
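    These quantities can be explored with a toy simulation in which each block is forged by an account with probability proportional to its balance. This is a simplified stake-weighted model for illustration, not the actual Nxt protocol (which involves per-account hits, base targets and block timing):

```python
import random

def simulate_forging(balances, n_blocks, seed=0):
    """Toy proof-of-stake model: block i is forged by account j with
    probability proportional to balances[j].

    Returns the forger sequence and the length of the longest run of
    consecutive blocks generated by a single account.
    """
    rng = random.Random(seed)
    forgers = rng.choices(range(len(balances)), weights=balances, k=n_blocks)
    longest = run = 1
    for a, b in zip(forgers, forgers[1:]):
        run = run + 1 if a == b else 1
        longest = max(longest, run)
    return forgers, longest
```

In this model an account holding half the total stake forges about half the blocks, which is the baseline against which account-splitting strategies can be compared.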

  12. Distribution agnostic structured sparsity recovery algorithms

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2013-05-01

    We present an algorithm and its variants for sparse signal recovery from a small number of its measurements in a distribution agnostic manner. The proposed algorithm finds Bayesian estimate of a sparse signal to be recovered and at the same time is indifferent to the actual distribution of its non-zero elements. Termed Support Agnostic Bayesian Matching Pursuit (SABMP), the algorithm also has the capability of refining the estimates of signal and required parameters in the absence of the exact parameter values. The inherent feature of the algorithm of being agnostic to the distribution of the data grants it the flexibility to adapt itself to several related problems. Specifically, we present two important extensions to this algorithm. One extension handles the problem of recovering sparse signals having block structures while the other handles multiple measurement vectors to jointly estimate the related unknown signals. We conduct extensive experiments to show that SABMP and its variants have superior performance to most of the state-of-the-art algorithms and that too at low-computational expense. © 2013 IEEE.
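    For context, the simplest greedy baseline against which Bayesian sparse-recovery methods such as SABMP are usually compared is Orthogonal Matching Pursuit, which picks one dictionary column at a time and refits by least squares. A minimal sketch (this is plain OMP, not the SABMP algorithm):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit for y = Phi x with k-sparse x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares refit on the enlarged support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```

With normalized columns and enough measurements relative to the sparsity level, OMP recovers the support exactly in the noiseless case.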

  13. Analysis and Application of High Resolution Numerical Perturbation Algorithm for Convective-Diffusion Equation

    International Nuclear Information System (INIS)

    Gao Zhi; Shen Yi-Qing

    2012-01-01

    The high resolution numerical perturbation (NP) algorithm is analyzed and tested using various convective-diffusion equations. The NP algorithm is constructed by splitting the second order central difference schemes of both the convective and diffusion terms of the convective-diffusion equation into upstream and downstream parts; the perturbation reconstruction functions of the convective coefficient are then determined using power series of the grid interval and by eliminating the truncation errors of the modified differential equation. The key property, upwind dominance, which is the basis for ensuring that the NP schemes are stable and essentially oscillation free, is first presented and verified. Various numerical cases show that the NP schemes are efficient, robust, and more accurate than the original second order central scheme.

  14. A cone-beam tomography system with a reduced size planar detector: A backprojection-filtration reconstruction algorithm as well as numerical and practical experiments

    International Nuclear Information System (INIS)

    Li Liang; Chen Zhiqiang; Zhang Li; Xing Yuxiang; Kang Kejun

    2007-01-01

    In a traditional cone-beam computed tomography (CT) system, the costs of production and computation are very high. In this paper, we develop a transversely truncated cone-beam X-ray CT system with a reduced-size detector positioned off-center, in which the X-ray beams cover only half of the object. The existing filtered backprojection (FBP) or backprojection-filtration (BPF) algorithms are not directly applicable in this new system. Hence, we develop a BPF-type direct backprojection algorithm. Different from the traditional rebinning methods, our algorithm directly backprojects the pretreated projection data without rebinning. This makes the algorithm compact and computationally more efficient. Because the interpolation errors of the rebinning process are avoided, higher spatial resolution is obtained. Finally, numerical simulations and practical experiments are performed to validate the proposed algorithm and compare it with the rebinning algorithm.

  15. Magnet sorting algorithms for insertion devices for the Advanced Light Source

    International Nuclear Information System (INIS)

    Humphries, D.; Hoyer, E.; Kincaid, B.; Marks, S.; Schlueter, R.

    1994-01-01

    Insertion devices for the Advanced Light Source (ALS) incorporate up to 3,000 magnet blocks each for pole energization. In order to minimize field errors, these magnets must be measured, sorted and assigned appropriate locations and orientation in the magnetic structures. Sorting must address multiple objectives, including pole excitation and minimization of integrated multipole fields from minor field components in the magnets. This is equivalent to a combinatorial minimization problem with a large configuration space. Multi-stage sorting algorithms use ordering and pairing schemes in conjunction with other combinatorial methods to solve the minimization problem. This paper discusses objective functions, solution algorithms and results of application to magnet block measurement data

  16. A Pilot-Pattern Based Algorithm for MIMO-OFDM Channel Estimation

    Directory of Open Access Journals (Sweden)

    Guomin Li

    2016-12-01

    Full Text Available An improved pilot-pattern algorithm for facilitating channel estimation in multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems is proposed in this paper. The presented algorithm reconfigures the parameters of the least-squares (LS) algorithm, in the space-time block-coded (STBC) setting, for channel estimation in a pilot-based MIMO-OFDM system. Simulation results show that the algorithm performs better than the classical single-symbol scheme. Compared with the double-symbol scheme, the proposed algorithm achieves nearly the same performance with only half the complexity.
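    The least-squares building block referred to above is the standard pilot-based estimate h = (X^H X)^{-1} X^H y, where X collects the known pilot symbols and y the corresponding received samples. A minimal noiseless sketch (generic LS channel estimation, not the paper's specific pilot pattern):

```python
import numpy as np

def ls_channel_estimate(X, y):
    """Pilot-based least-squares channel estimate.

    X : (num_pilots, num_taps) matrix built from known pilot symbols.
    y : received samples at the pilot positions.
    Solves the normal equations (X^H X) h = X^H y.
    """
    return np.linalg.solve(X.conj().T @ X, X.conj().T @ y)
```

In the noiseless case the estimate is exact; with noise, its accuracy depends on the pilot pattern, which is what the proposed algorithm optimizes.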

  17. The evidence for synthesis of truncated triangular silver nanoplates in the presence of CTAB

    International Nuclear Information System (INIS)

    He Xin; Zhao Xiujian; Chen Yunxia; Feng Jinyang

    2008-01-01

    Truncated triangular silver nanoplates were prepared by a solution-phase approach, which involved the seed-mediated growth of silver nanoparticles in the presence of cetyltrimethylammonium bromide (CTAB) at 40 °C. The result of X-ray diffraction indicates that the as-prepared nanoparticles are made of pure face-centered cubic silver. Transmission electron microscopy and atomic force microscopy studies show that the truncated triangular silver nanoplates, with edge lengths of 50 ± 5 nm and thicknesses of 27 ± 3 nm, are oriented differently on substrates of a copper grid and a fresh mica flake. The corners of these nanoplates are round. The selected area electron diffraction analysis reveals that the silver nanoplates are single crystals with an atomically flat surface. We determine the holistic morphology of truncated triangular silver nanoplates through the above measurements with the aid of computer-aided 3D perspective images

  18. Survey and Benchmark of Block Ciphers for Wireless Sensor Networks

    NARCIS (Netherlands)

    Law, Y.W.; Doumen, J.M.; Hartel, Pieter H.

    Cryptographic algorithms play an important role in the security architecture of wireless sensor networks (WSNs). Choosing the most storage- and energy-efficient block cipher is essential, due to the facts that these networks are meant to operate without human intervention for a long period of time

  19. Cross-layer combining of power control and adaptive modulation with truncated ARQ for cognitive radios

    Institute of Scientific and Technical Information of China (English)

    CHENG Shi-lun; YANG Zhen

    2008-01-01

    To maximize throughput and to satisfy users' requirements in cognitive radios, a cross-layer optimization problem combining adaptive modulation and power control at the physical layer and truncated automatic repeat request at the medium access control layer is proposed. Simulation results show the combination of power control, adaptive modulation, and truncated automatic repeat request can regulate transmitter powers and increase the total throughput effectively.

  20. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix

    International Nuclear Information System (INIS)

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-01-01

    We present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer
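    For comparison, the LOBPCG method mentioned above is available in SciPy. The sketch below uses it to compute the algebraically smallest eigenpairs of a simple sparse Hermitian test matrix (the paper's projected preconditioned CG variant itself is not in SciPy; the matrix and parameters here are illustrative):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

# Simple SPD test matrix with known spectrum: diag(1, 2, ..., n).
n = 200
A = diags([np.arange(1.0, n + 1)], [0]).tocsr()

# Random starting block of 4 vectors; largest=False requests the
# algebraically smallest eigenvalues, as in the paper's setting.
rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))
vals, vecs = lobpcg(A, X, largest=False, tol=1e-8, maxiter=200)
```

For this diagonal matrix the four smallest eigenvalues are 1, 2, 3, 4, so convergence is easy to verify; the paper's contribution is reducing the Rayleigh-Ritz work that methods like this perform when the block is large.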

  1. Exact solution of corner-modified banded block-Toeplitz eigensystems

    International Nuclear Information System (INIS)

    Cobanera, Emilio; Alase, Abhijeet; Viola, Lorenza; Ortiz, Gerardo

    2017-01-01

    Motivated by the challenge of seeking a rigorous foundation for the bulk-boundary correspondence for free fermions, we introduce an algorithm for determining exactly the spectrum and a generalized-eigenvector basis of a class of banded block quasi-Toeplitz matrices that we call corner-modified. Corner modifications of otherwise arbitrary banded block-Toeplitz matrices capture the effect of boundary conditions and the associated breakdown of translational invariance. Our algorithm leverages the interplay between a non-standard, projector-based method of kernel determination (physically, a bulk-boundary separation) and families of linear representations of the algebra of matrix Laurent polynomials. Thanks to the fact that these representations act on infinite-dimensional carrier spaces in which translation symmetry is restored, it becomes possible to determine the eigensystem of an auxiliary projected block-Laurent matrix. This results in an analytic eigenvector Ansatz, independent of the system size, which we prove is guaranteed to contain the full solution of the original finite-dimensional problem. The actual solution is then obtained by imposing compatibility with a boundary matrix, whose shape is also independent of system size. As an application, we show analytically that eigenvectors of short-ranged fermionic tight-binding models may display power-law corrections to exponential behavior, and demonstrate the phenomenon for the paradigmatic Majorana chain of Kitaev. (paper)

  2. Cryptanalysis on an image block encryption algorithm based on spatiotemporal chaos

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; He Guo-Xiang

    2012-01-01

    An image block encryption scheme based on spatiotemporal chaos has been proposed recently. In this paper, we analyse the security weakness of the proposal. The main problem of the original scheme is that the generated keystream remains unchanged for encrypting every image. Based on the flaws, we demonstrate a chosen plaintext attack for revealing the equivalent keys with only 6 pairs of plaintext/ciphertext used. Finally, experimental results show the validity of our attack. (general)
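    The class of flaw exploited here can be illustrated with a stripped-down model: if every image is encrypted with the same keystream via XOR, a single known plaintext/ciphertext pair reveals the keystream, which then decrypts any other image. This toy sketch ignores the permutation/diffusion structure of the actual scheme (which is why the real attack needs 6 chosen pairs rather than 1):

```python
import numpy as np

def recover_keystream(plain, cipher):
    """If C = P XOR K with a keystream K that never changes, one known
    plaintext/ciphertext pair reveals K directly: K = P XOR C."""
    return np.bitwise_xor(plain, cipher)

def decrypt_with_keystream(cipher, keystream):
    """XOR is its own inverse: P = C XOR K."""
    return np.bitwise_xor(cipher, keystream)
```

This is precisely why keystream reuse across messages is considered an equivalent-key leak: the attacker never needs the secret parameters of the chaotic system, only the keystream they generate.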

  3. Energy cascade with small-scale thermalization, counterflow metastability, and anomalous velocity of vortex rings in Fourier-truncated Gross-Pitaevskii equation

    International Nuclear Information System (INIS)

    Krstulovic, Giorgio; Brachet, Marc

    2011-01-01

    The statistical equilibria of the (conservative) dynamics of the Gross-Pitaevskii equation (GPE) with a finite range of spatial Fourier modes are characterized using a new algorithm, based on a stochastically forced Ginzburg-Landau equation (SGLE), that directly generates grand-canonical distributions. The SGLE-generated distributions are validated against finite-temperature GPE-thermalized states and exact (low-temperature) results obtained by steepest descent on the (grand-canonical) partition function. A standard finite-temperature second-order λ transition is exhibited. A mechanism of GPE thermalization through a direct cascade of energy is found using initial conditions with mass and energy distributed at large scales. A long transient with partial thermalization at small scales is observed before the system reaches equilibrium. Vortices are shown to disappear as a prelude to final thermalization and their annihilation is related to the contraction of vortex rings due to mutual friction. Increasing the amount of dispersion at the truncation wave number is shown to slow thermalization and vortex annihilation. A bottleneck that produces spontaneous effective self-truncation with partial thermalization is characterized in the limit of large dispersive effects. Metastable counterflow states, with nonzero values of momentum, are generated using the SGLE algorithm. Spontaneous nucleation of the vortex ring is observed and the corresponding Arrhenius law is characterized. Dynamical counterflow effects on vortex evolution are investigated using two exact solutions of the GPE: traveling vortex rings and a motionless crystal-like lattice of vortex lines. Longitudinal effects are produced and measured on the crystal lattice. A dilatation of vortex rings is obtained for counterflows larger than their translational velocity. The vortex ring translational velocity has a dependence on temperature that is an order of magnitude above that of the crystal lattice, an effect

  4. Tau truncation is a productive posttranslational modification of neurofibrillary degeneration in Alzheimer's disease.

    Science.gov (United States)

    Kovacech, B; Novak, M

    2010-12-01

    Deposits of the misfolded neuronal protein tau are major hallmarks of neurodegeneration in Alzheimer's disease (AD) and other tauopathies. The etiology of the transformation process of the intrinsically disordered soluble protein tau into the insoluble misordered aggregate has attracted much attention. Tau undergoes multiple modifications in AD, most notably hyperphosphorylation and truncation. Hyperphosphorylation is widely regarded as the hottest candidate for the inducer of the neurofibrillary pathology. However, the true nature of the impetus that initiates the whole process in the human brains remains unknown. In AD, several site-specific tau cleavages were identified and became connected to the progression of the disease. In addition, western blot analyses of tau species in AD brains reveal multitudes of various truncated forms. In this review we summarize evidence showing that tau truncation alone is sufficient to induce the complete cascade of neurofibrillary pathology, including hyperphosphorylation and accumulation of misfolded insoluble forms of tau. Therefore, proteolytical abnormalities in the stressed neurons and production of aberrant tau cleavage products deserve closer attention and should be considered as early therapeutic targets for Alzheimer's disease.

  5. Fatigue evaluation algorithms: Review

    Energy Technology Data Exchange (ETDEWEB)

    Passipoularidis, V.A.; Broendsted, P.

    2009-11-15

    A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply by ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck, to model the degradation caused by failure events in ply level. Residual strength is incorporated as fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor blade construction. Two versions of the algorithm, the one using single-step and the other using incremental application of each load cycle (in case of ply failure) are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)
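    Residual-strength damage metrics of the kind used in FADAS assume that static strength decays from its initial value toward the applied peak stress as cycles accumulate, with failure when the residual strength drops to the applied stress. A generic one-parameter degradation model is sketched below (illustrative only, not the paper's exact formulation; the exponent `k` shaping the decay is an assumption):

```python
def residual_strength(R0, s_max, n, N, k=1.0):
    """Generic residual-strength degradation model.

    R0    : static strength of the virgin material
    s_max : peak stress of the applied cycles
    n     : cycles applied so far
    N     : cycles to failure at this stress level
    k     : shape exponent of the degradation curve (k=1: linear)

    Strength decays from R0 at n=0 to s_max at n=N, where failure occurs.
    """
    return R0 - (R0 - s_max) * (n / N) ** k
```

Because the metric tracks remaining strength rather than an abstract damage sum, it naturally captures load-sequence effects that linear damage accumulation misses.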

  6. Amplitude of Light Scattering by a Truncated Pyramid and Cone in the Rayleigh-Gans-Debye Approximation

    Directory of Open Access Journals (Sweden)

    Konstantin A. Shapovalov

    2013-01-01

    Full Text Available The article considers a general approach to calculating the form factor of structured particles and particle systems in the Rayleigh-Gans-Debye (RGD) approximation. Using this approach, formulas for the amplitude of light scattering by a truncated pyramid and a truncated cone are obtained in the RGD approximation, and the corresponding light-scattering indicatrices are calculated.

  7. Protoplanetary disc truncation mechanisms in stellar clusters: comparing external photoevaporation and tidal encounters

    Science.gov (United States)

    Winter, A. J.; Clarke, C. J.; Rosotti, G.; Ih, J.; Facchini, S.; Haworth, T. J.

    2018-04-01

    Most stars form and spend their early life in regions of enhanced stellar density. Therefore the evolution of protoplanetary discs (PPDs) hosted by such stars is subject to the influence of other members of the cluster. Physically, PPDs might be truncated either by photoevaporation due to ultraviolet flux from massive stars, or by tidal truncation due to close stellar encounters. Here we aim to compare the two effects in real cluster environments. In this vein we first review the properties of well-studied stellar clusters with a focus on stellar number density, which largely dictates the degree of tidal truncation, and far-ultraviolet (FUV) flux, which is indicative of the rate of external photoevaporation. We then review the theoretical PPD truncation radius due to an arbitrary encounter, additionally taking into account the role of eccentric encounters that play a role in hot clusters with a 1D velocity dispersion σv ≳ 2 km/s. Our treatment is then applied statistically to varying local environments to establish a canonical threshold for the local stellar density (nc ≳ 10^4 pc^-3) for which encounters can play a significant role in shaping the distribution of PPD radii over a timescale of ~3 Myr. By combining theoretical mass-loss rates due to FUV flux with viscous spreading in a PPD we establish a similar threshold for which a massive disc is completely destroyed by external photoevaporation. Comparing these thresholds in local clusters we find that if either mechanism has a significant impact on the PPD population then photoevaporation is always the dominating influence.

  8. Improved Wallis Dodging Algorithm for Large-Scale Super-Resolution Reconstruction Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Chong Fan

    2017-03-01

    Full Text Available A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, the sub-block SR images can hardly achieve a seamless mosaic because of the uneven distribution of brightness and contrast among the sub-blocks. An effectively improved weighted Wallis dodging algorithm is proposed, tailored to the characteristic that SR reconstructed images are gray images of the same size with overlapping regions. This algorithm can achieve consistency of image brightness and contrast. Meanwhile, a weighted adjustment sequence is presented to avoid the spatial propagation and accumulation of errors and the loss of image information caused by excessive computation. A seam-line elimination method can distribute the partial dislocation at the seam line over the entire overlapping region with a smooth transition effect. Subsequently, the improved method is employed to remove the uneven illumination from 900 SR reconstructed images of ZY-3. Then, the overlapping image mosaic method is adopted to accomplish a seamless image mosaic based on the optimal seam line.
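    At its core, the (unweighted) Wallis transform linearly remaps a block's gray values so that its mean and standard deviation match prescribed targets; the weighted, sequenced version in the paper builds on this primitive. A minimal sketch:

```python
import numpy as np

def wallis_adjust(block, target_mean, target_std):
    """Basic (unweighted) Wallis transform.

    Linearly remaps the block so that its mean and standard deviation
    equal the target values, enforcing consistent brightness and
    contrast across sub-blocks.
    """
    m, s = block.mean(), block.std()
    return (block - m) * (target_std / s) + target_mean
```

Applying this with common targets to every sub-block equalizes brightness and contrast; the paper's weighted sequence additionally controls how adjustments propagate between neighboring blocks.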

  9. Total mass difference statistics algorithm: a new approach to identification of high-mass building blocks in electrospray ionization Fourier transform ion cyclotron mass spectrometry data of natural organic matter.

    Science.gov (United States)

    Kunenkov, Erast V; Kononikhin, Alexey S; Perminova, Irina V; Hertkorn, Norbert; Gaspar, Andras; Schmitt-Kopplin, Philippe; Popov, Igor A; Garmash, Andrew V; Nikolaev, Evgeniy N

    2009-12-15

    The ultrahigh-resolution Fourier transform ion cyclotron resonance (FTICR) mass spectrum of natural organic matter (NOM) contains several thousand peaks with dozens of molecules matching the same nominal mass. Such a complexity poses a significant challenge for automatic data interpretation, in which the most difficult task is molecular formula assignment, especially in the case of heavy and/or multielement ions. In this study, a new universal algorithm for automatic treatment of FTICR mass spectra of NOM and humic substances based on total mass difference statistics (TMDS) has been developed and implemented. The algorithm enables a blind search for unknown building blocks (instead of a priori known ones) by revealing repetitive patterns present in spectra. In this respect, it differs from all previously developed approaches. This algorithm was implemented in designing FIRAN, software for fully automated analysis of mass data with high peak density. The specific feature of FIRAN is its ability to assign formulas to heavy and/or multielement molecules using a "virtual elements" approach. To verify the approach, it was used for processing mass spectra of sodium polystyrene sulfonate (PSS, Mw = 2200 Da) and polymethacrylate (PMA, Mw = 3290 Da) which produce heavy multielement and multiply-charged ions. Application of TMDS identified unambiguously the monomers present in the polymers, consistent with their structure: C8H7SO3Na for PSS and C4H6O2 for PMA. It also allowed unambiguous formula assignment to all multiply-charged peaks including the heaviest peak in the PMA spectrum at mass 4025.6625 with charge state 6- (mass bias -0.33 ppm). Application of the TMDS algorithm to processing data on the Suwannee River FA has proven its unique capacities in analysis of spectra with high peak density: it has not only identified the known small building blocks in the structure of FA such as CH2, H2, C2H2O, O but the heavier unit at 154.027 amu. The latter was
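    The core of a TMDS-style blind search can be sketched as a histogram of all pairwise peak-mass differences: differences that recur frequently across the spectrum are candidate building blocks. A minimal sketch (the peak list and rounding precision are illustrative):

```python
import numpy as np
from collections import Counter

def mass_difference_statistics(masses, decimals=3):
    """Histogram all pairwise peak-mass differences.

    Differences that repeat often across the spectrum are candidate
    building blocks (e.g. CH2 = 14.0157 Da), found blindly rather than
    from an a priori list.
    """
    m = np.asarray(masses, dtype=float)
    diffs = np.abs(m[:, None] - m[None, :])
    diffs = diffs[np.triu_indices(len(m), k=1)]   # each pair once
    return Counter(np.round(diffs, decimals))
```

On a toy peak list containing a CH2 homologous series, the most frequent difference is the CH2 mass, which is exactly the repetitive pattern the TMDS algorithm exploits at full spectral scale.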

  10. Spatial compression algorithm for the analysis of very large multivariate images

    Science.gov (United States)

    Keenan, Michael R [Albuquerque, NM]

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
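    The wavelet idea can be sketched independently of this particular implementation: a minimal, numpy-only single-level 2-D Haar transform followed by hard thresholding of the smallest coefficients. The subband split, the `keep_ratio` parameter, and the threshold rule here are illustrative choices, not the method of the record above.

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def compress(img, keep_ratio=0.25):
    """Keep only the largest-magnitude wavelet coefficients; zero the rest."""
    subbands = haar2d(img)
    coeffs = np.concatenate([b.ravel() for b in subbands])
    k = max(1, int(keep_ratio * coeffs.size))
    thresh = np.sort(np.abs(coeffs))[-k]       # k-th largest magnitude
    return [np.where(np.abs(b) >= thresh, b, 0.0) for b in subbands]

rng = np.random.default_rng(0)
img = rng.random((8, 8))
bands = compress(img, keep_ratio=0.25)
kept = sum(int(np.count_nonzero(b)) for b in bands)
print(kept)  # roughly a quarter of the 64 coefficients survive
```

    Subsequent analysis then operates on the sparse coefficient set rather than on the full pixel matrix, which is the computational saving the record describes.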

  11. ProxImaL: efficient image optimization using proximal algorithms

    KAUST Repository

    Heide, Felix; Diamond, Steven; Nieß ner, Matthias; Ragan-Kelley, Jonathan; Heidrich, Wolfgang; Wetzstein, Gordon

    2016-01-01

    domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety

  12. ALGORITHMIC FACILITIES AND SOFTWARE FOR VIRTUAL DESIGN OF ANTI-BLOCK AND COUNTER-SLIPPING SYSTEMS

    Directory of Open Access Journals (Sweden)

    N. N. Hurski

    2009-01-01

    Full Text Available The paper considers algorithms for designing a roadway surface for virtual tests of mobile machine movement dynamics; an algorithm for forming actual values of the forces/moments in the «road–wheel–car» contact and their derivatives; and software for the virtual design of mobile machine dynamics.

  13. Five-dimensional truncation of the plane incompressible navier-stokes equations

    Energy Technology Data Exchange (ETDEWEB)

    Boldrighini, C [Camerino Univ. (Italy). Istituto di Matematica]; Franceschini, V [Modena Univ. (Italy). Istituto Matematico]

    1979-01-01

    A five-mode truncation of the Navier-Stokes equations for a two-dimensional incompressible fluid on a torus is considered. A computer analysis shows that for a certain range of the Reynolds number the system exhibits stochastic behaviour, approached through an involved sequence of bifurcations.

  14. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real...... variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n2 variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
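    To illustrate the storage count mentioned above, here is a minimal numpy sketch of conventional packed-by-columns lower-triangular storage, n(n + 1)/2 entries, together with its index map. The block hybrid format of the article is a cache-friendlier rearrangement of this idea and is not reproduced here.

```python
import numpy as np

def pack_lower(A):
    """Pack the lower triangle of A column by column into n(n+1)/2 entries."""
    n = A.shape[0]
    return np.concatenate([A[j:, j] for j in range(n)])

def packed_index(i, j, n):
    """Index of A[i, j] (i >= j) in the packed-by-columns array:
    offset of column j is sum of the lengths of columns 0..j-1."""
    return j * n - j * (j - 1) // 2 + (i - j)

n = 4
A = np.arange(1.0, n * n + 1).reshape(n, n)
p = pack_lower(A)
assert p.size == n * (n + 1) // 2              # half the storage of full format
for j in range(n):                             # every element is reachable
    for i in range(j, n):
        assert p[packed_index(i, j, n)] == A[i, j]
print(p.size)  # 10
```

    The non-contiguous column strides of this classic packed layout are exactly what makes it cache-unfriendly, which motivates the block hybrid rearrangement the subroutines use.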

  15. Single and multiple object tracking using log-euclidean Riemannian subspace and block-division appearance model.

    Science.gov (United States)

    Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei

    2012-12-01

    Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
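    The core mapping can be sketched with plain numpy: a symmetric positive definite (SPD) covariance matrix is sent to the vector space of symmetric matrices by the matrix logarithm, averaged there, and mapped back by the matrix exponential. This shows only the log-Euclidean averaging step, not the full incremental subspace learner of the paper.

```python
import numpy as np

def spd_log(M):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.log(w)) @ V.T

def spd_exp(S):
    """Matrix exponential of a symmetric matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.exp(w)) @ V.T

def log_euclidean_mean(mats):
    """Log-Euclidean mean: exp of the arithmetic mean of the matrix logs.
    Averaging in log-space respects the geometry of SPD matrices and
    avoids the swelling effect of the plain Euclidean average."""
    return spd_exp(np.mean([spd_log(M) for M in mats], axis=0))

rng = np.random.default_rng(1)
mats = []
for _ in range(5):
    B = rng.random((3, 3))
    mats.append(B @ B.T + 3.0 * np.eye(3))     # SPD by construction

mean = log_euclidean_mean(mats)
print(np.linalg.eigvalsh(mean).min() > 0)      # the mean stays SPD
```

    In the paper, the vectorized matrix logs are what feed the incremental subspace learning, so linear operations in that space stay consistent with the Riemannian metric.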

  16. Exact method for the simulation of Coulombic systems by spherically truncated, pairwise r-1 summation

    International Nuclear Information System (INIS)

    Wolf, D.; Keblinski, P.; Phillpot, S.R.; Eggebrecht, J.

    1999-01-01

    Based on a recent result showing that the net Coulomb potential in condensed ionic systems is rather short ranged, an exact and physically transparent method permitting the evaluation of the Coulomb potential by direct summation over the r -1 Coulomb pair potential is presented. The key observation is that the problems encountered in determining the Coulomb energy by pairwise, spherically truncated r -1 summation are a direct consequence of the fact that the system summed over is practically never neutral. A simple method is developed that achieves charge neutralization wherever the r -1 pair potential is truncated. This enables the extraction of the Coulomb energy, forces, and stresses from a spherically truncated, usually charged environment in a manner that is independent of the grouping of the pair terms. The close connection of our approach with the Ewald method is demonstrated and exploited, providing an efficient method for the simulation of even highly disordered ionic systems by direct, pairwise r -1 summation with spherical truncation at rather short range, i.e., a method which fully exploits the short-ranged nature of the interactions in ionic systems. The method is validated by simulations of crystals, liquids, and interfacial systems, such as free surfaces and grain boundaries. copyright 1999 American Institute of Physics
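    A minimal sketch of the undamped, charge-neutralized, shifted-Coulomb sum in the spirit of this method (Gaussian units): each truncated pair term is shifted by 1/rc and each charge carries a self term, so the truncation sphere is effectively neutral. The cutoff, lattice size, and omission of the damped erfc variant are illustrative simplifications.

```python
import numpy as np
from itertools import product

def wolf_energy(pos, q, box, rc):
    """Charge-neutralized, spherically truncated Coulomb energy:
    pair terms use (1/r - 1/rc) and a -q_i^2/(2*rc) self term
    neutralizes the truncation sphere of each charge."""
    n = len(q)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            d -= box * np.round(d / box)        # minimum-image convention
            r = np.linalg.norm(d)
            if r < rc:
                e += q[i] * q[j] * (1.0 / r - 1.0 / rc)
    e -= np.sum(q ** 2) / (2.0 * rc)            # neutralizing self term
    return e

# 4x4x4 rocksalt-like lattice with alternating unit charges, lattice constant 1
L = 4
pos = np.array(list(product(range(L), repeat=3)), dtype=float)
q = np.array([(-1.0) ** int(p.sum()) for p in pos])
E = wolf_energy(pos, q, box=float(L), rc=1.9)
print(E / len(q) < 0)  # cohesive (negative) energy per ion, as expected
```

    Without the self term, the truncated sum over a charged sphere oscillates wildly with rc; with it, even a short cutoff gives a stable, physically sensible energy, which is the central observation of the record.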

  17. Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series

    Science.gov (United States)

    Zhang, Zhihua

    2014-01-01

    Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula of approximation errors of hyperbolic cross truncations for bivariate stochastic Fourier cosine series. Moreover we propose a kind of Fourier cosine expansions with polynomials factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842
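    For reference, a common form of the bivariate hyperbolic cross truncation keeps only the coefficient pairs whose index product is bounded (the paper's exact normalization may differ):

```latex
S_N f(x,y) \;=\; \sum_{\substack{j,k \ge 0 \\ \max(1,j)\,\max(1,k) \le N}} a_{jk}\,\cos(jx)\cos(ky),
\qquad
a_{jk} \;=\; \frac{c_j c_k}{\pi^2} \int_0^{\pi}\!\!\int_0^{\pi} f(x,y)\cos(jx)\cos(ky)\,dx\,dy,
```

    where $c_0 = 1$ and $c_j = 2$ for $j \ge 1$. The hyperbolic cross retains $O(N \log N)$ coefficients instead of the $O(N^2)$ of a full tensor-product truncation, which is why its approximation error is the quantity of interest.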

  18. Optical asymmetric cryptography based on elliptical polarized light linear truncation and a numerical reconstruction technique.

    Science.gov (United States)

    Lin, Chao; Shen, Xueju; Wang, Zhisong; Zhao, Cheng

    2014-06-20

    We demonstrate a novel optical asymmetric cryptosystem based on the principle of elliptical polarized light linear truncation and a numerical reconstruction technique. The device of an array of linear polarizers is introduced to achieve linear truncation on the spatially resolved elliptical polarization distribution during image encryption. This encoding process can be characterized as confusion-based optical cryptography that involves no Fourier lens and diffusion operation. Based on the Jones matrix formalism, the intensity transmittance for this truncation is deduced to perform elliptical polarized light reconstruction based on two intensity measurements. Use of a quick response code makes the proposed cryptosystem practical, with versatile key sensitivity and fault tolerance. Both simulation and preliminary experimental results that support theoretical analysis are presented. An analysis of the resistance of the proposed method on a known public key attack is also provided.

  19. [Construction of haplotype and haplotype block based on tag single nucleotide polymorphisms and their applications in association studies].

    Science.gov (United States)

    Gu, Ming-liang; Chu, Jia-you

    2007-12-01

    The human genome has haplotype and haplotype-block structures that provide valuable information on human evolutionary history and may lead to more efficient strategies for identifying genetic variants that increase susceptibility to complex diseases. The genome can be divided into discrete blocks of limited haplotype diversity. In each block, a small fraction of "tag SNPs" can be used to distinguish a large fraction of the haplotypes. These tag SNPs can be potentially useful for the construction of haplotypes and haplotype blocks, and for association studies in complex diseases. There are two general classes of methods for constructing haplotypes and haplotype blocks, based on genotypes from large pedigrees and on statistical algorithms, respectively. The authors evaluate several construction methods to assess the power of different association tests with a variety of disease models and block-partitioning criteria. The advantages, limitations and applications of each method in association studies are discussed. With the completion of the HapMap and the development of statistical algorithms for haplotype reconstruction, approaches to haplotype construction that combine mathematics, physics, computer science and related fields will have profound impacts on population genetics, the localization and cloning of susceptibility genes for complex diseases, and related domains of the life sciences.

  20. The Truncated Lognormal Distribution as a Luminosity Function for SWIFT-BAT Gamma-Ray Bursts

    Directory of Open Access Journals (Sweden)

    Lorenzo Zaninetti

    2016-11-01

    Full Text Available The determination of the luminosity function (LF) of gamma-ray bursts (GRBs) depends on the adopted cosmology, each one characterized by its corresponding luminosity distance. Here, we analyze three cosmologies: the standard cosmology, the plasma cosmology and the pseudo-Euclidean universe. The LF of the GRBs is modeled first by the lognormal distribution and the four broken power law and, second, by a truncated lognormal distribution. The truncated lognormal distribution acceptably fits the range in luminosity of GRBs as a function of redshift.
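    The truncated lognormal itself is easy to write down: renormalize the lognormal pdf to the observable luminosity window. The window and shape parameters below are illustrative, not the paper's fitted values.

```python
import numpy as np
from math import erf, sqrt, log, pi

def lognorm_cdf(x, mu, sigma):
    """CDF of the (untruncated) lognormal distribution."""
    return 0.5 * (1.0 + erf((log(x) - mu) / (sigma * sqrt(2.0))))

def trunc_lognorm_pdf(x, mu, sigma, a, b):
    """Lognormal pdf renormalized to the luminosity window [a, b]."""
    norm = lognorm_cdf(b, mu, sigma) - lognorm_cdf(a, mu, sigma)
    pdf = np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) \
          / (x * sigma * sqrt(2 * pi))
    return np.where((x >= a) & (x <= b), pdf / norm, 0.0)

# Hypothetical luminosity window in arbitrary units
a, b, mu, sigma = 0.5, 50.0, 1.0, 0.8
x = np.linspace(a, b, 200001)
y = trunc_lognorm_pdf(x, mu, sigma, a, b)
mass = float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))   # trapezoidal integral
print(round(mass, 4))  # 1.0: unit probability mass over the window
```

    The truncation bounds play the role of the minimum and maximum observable GRB luminosities at a given redshift, which is why the truncated form can track the data where the plain lognormal cannot.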

  1. Cognitive Radio Transceivers: RF, Spectrum Sensing, and Learning Algorithms Review

    Directory of Open Access Journals (Sweden)

    Lise Safatly

    2014-01-01

    reconfigurable radio frequency (RF) parts, enhanced spectrum sensing algorithms, and sophisticated machine learning techniques. In this paper, we present a review of the recent advances in CR transceiver hardware design and algorithms. For the RF part, three types of antennas are presented: UWB antennas, frequency-reconfigurable/tunable antennas, and UWB antennas with reconfigurable band notches. The main challenges faced in the design of the other RF blocks are also discussed. Sophisticated spectrum sensing algorithms that overcome the main sensing challenges, such as model uncertainty, hardware impairments, and wideband sensing, are highlighted. The cognitive engine features are discussed. Moreover, we study unsupervised classification algorithms and a reinforcement learning (RL) algorithm that has been proposed to perform decision-making in CR networks.
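    The simplest sensing baseline against which such algorithms are measured is the energy detector, sketched here with an illustrative CLT-based threshold (the constant 2.33 targets roughly a 1% false-alarm rate; the primary-user tone is hypothetical):

```python
import numpy as np

def energy_detect(samples, noise_var):
    """Energy detector for spectrum sensing: the average received power is
    compared with a conservative CLT-based threshold above the noise floor."""
    n = samples.size
    stat = np.sum(np.abs(samples) ** 2) / n
    thresh = noise_var * (1.0 + 2.33 * np.sqrt(2.0 / n))
    return stat > thresh

rng = np.random.default_rng(7)
n = 4096
# Unit-variance complex Gaussian noise, with and without a hypothetical PU tone
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2.0)
tone = 0.5 * np.exp(2j * np.pi * 0.1 * np.arange(n))
print(energy_detect(noise, 1.0), energy_detect(noise + tone, 1.0))
```

    The model-uncertainty problem mentioned in the abstract shows up directly here: the threshold depends on `noise_var`, so an error in the assumed noise floor translates into missed detections or false alarms, motivating the more sophisticated detectors the review surveys.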

  2. Variational optimization algorithms for uniform matrix product states

    Science.gov (United States)

    Zauner-Stauber, V.; Vanderstraeten, L.; Fishman, M. T.; Verstraete, F.; Haegeman, J.

    2018-01-01

    We combine the density matrix renormalization group (DMRG) with matrix product state tangent space concepts to construct a variational algorithm for finding ground states of one-dimensional quantum lattices in the thermodynamic limit. A careful comparison of this variational uniform matrix product state algorithm (VUMPS) with infinite density matrix renormalization group (IDMRG) and with infinite time evolving block decimation (ITEBD) reveals substantial gains in convergence speed and precision. We also demonstrate that VUMPS works very efficiently for Hamiltonians with long-range interactions and also for the simulation of two-dimensional models on infinite cylinders. The new algorithm can be conveniently implemented as an extension of an already existing DMRG implementation.

  3. Production, purification and characterization of polyclonal antibody against the truncated gK of the duck enteritis virus

    Directory of Open Access Journals (Sweden)

    Zhang Shunchuan

    2010-09-01

    Full Text Available Abstract Duck virus enteritis (DVE) is an acute, contagious herpesvirus infection of ducks, geese, and swans, which has produced significant economic losses in domestic and wild waterfowl. To reduce economic losses in the commercial duck industry, studying the little-characterized glycoprotein K (gK) of DEV may offer a new route to preventing and treating this disease. This is the first report of the production and purification of a rabbit anti-tgK polyclonal antibody. Western blot and ELISA assays showed that the truncated glycoprotein K (tgK) has good antigenicity and that the antibody possesses high specificity and affinity. The rabbit anti-tgK polyclonal antibody also has potential for producing subunit vaccines and for neutralizing DEV and combating DEV infection, given its neutralization titer. Indirect immunofluorescence microscopy using the purified rabbit anti-tgK polyclonal antibody as the diagnostic antibody was sensitive enough to detect small quantities of antigen in tissues or cells. This approach also provides effective experimental technology for epidemiological investigation and retrospective diagnosis from preserved paraffin blocks.

  4. Generation of truncated recombinant form of tumor necrosis factor ...

    African Journals Online (AJOL)

    Purpose: To produce truncated recombinant form of tumor necrosis factor receptor 1 (TNFR1), cysteine-rich domain 2 (CRD2) and CRD3 regions of the receptor were generated using pET28a and E. coli/BL21. Methods: DNA coding sequence of CRD2 and CRD3 was cloned into pET28a vector and the corresponding ...

  5. Efficient Record Linkage Algorithms Using Complete Linkage Clustering.

    Science.gov (United States)

    Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar

    2016-01-01

    Datasets from different agencies often contain records for the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. Many available record-linkage algorithms are prone to either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient and reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete-linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a subroutine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. The time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times.
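    The combination of blocking and complete linkage can be sketched as follows: records are partitioned into blocks by a cheap key, and within each block a naive agglomeration merges two clusters only when every cross pair is within the distance threshold. The string distance, blocking key, and threshold are illustrative stand-ins for the paper's choices, and this cubic-time loop ignores the paper's efficiency work.

```python
from difflib import SequenceMatcher
from itertools import combinations

def dist(a, b):
    """String distance in [0, 1] from difflib's similarity ratio."""
    return 1.0 - SequenceMatcher(None, a, b).ratio()

def complete_linkage(items, threshold):
    """Agglomerative clustering; two clusters merge only if *every*
    cross pair is within the threshold (complete linkage)."""
    clusters = [[i] for i in range(len(items))]
    merged = True
    while merged:
        merged = False
        for a, b in combinations(range(len(clusters)), 2):
            if all(dist(items[i], items[j]) <= threshold
                   for i in clusters[a] for j in clusters[b]):
                clusters[a] += clusters[b]
                del clusters[b]
                merged = True
                break
    return clusters

# Toy records; blocking key = first letter, so only same-block pairs are compared
records = ["jon smith", "john smith", "john smyth", "mary jones", "marie jones"]
blocks = {}
for idx, r in enumerate(records):
    blocks.setdefault(r[0], []).append(idx)

linked = []
for key, idxs in blocks.items():
    sub = complete_linkage([records[i] for i in idxs], threshold=0.35)
    linked += [[idxs[i] for i in c] for c in sub]
print(sorted(sorted(c) for c in linked))  # the three Smith variants link up
```

    Complete linkage is the conservative choice here: unlike single linkage, it cannot chain two dissimilar records together through an intermediate, which is what keeps the match precision high.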

  6. A constrained tracking algorithm to optimize plug patterns in multiple isocenter Gamma Knife radiosurgery planning

    International Nuclear Information System (INIS)

    Li Kaile; Ma Lijun

    2005-01-01

    We developed a source-blocking optimization algorithm for Gamma Knife radiosurgery, which is based on tracking individual source contributions to arbitrarily shaped target and critical-structure volumes. A scalar objective function and a direct search algorithm were used to produce near real-time calculation results. The algorithm allows the user to set and vary the total number of plugs for each shot to limit the total beam-on time. We implemented and tested the algorithm for several multiple-isocenter Gamma Knife cases. It was found that the use of a limited number of plugs significantly lowered the integral dose to critical structures such as the optic chiasm in pituitary adenoma cases. The main effect of the source blocking is the faster dose falloff in the junction area between the target and the critical structure. In summary, we demonstrated a useful source-plugging algorithm for improving complex multi-isocenter Gamma Knife treatment planning cases.

  7. Application of the AMPLE cluster-and-truncate approach to NMR structures for molecular replacement

    Energy Technology Data Exchange (ETDEWEB)

    Bibby, Jaclyn [University of Liverpool, Liverpool L69 7ZB (United Kingdom); Keegan, Ronan M. [Research Complex at Harwell, STFC Rutherford Appleton Laboratory, Didcot OX11 0FA (United Kingdom); Mayans, Olga [University of Liverpool, Liverpool L69 7ZB (United Kingdom); Winn, Martyn D. [Science and Technology Facilities Council Daresbury Laboratory, Warrington WA4 4AD (United Kingdom); Rigden, Daniel J., E-mail: drigden@liv.ac.uk [University of Liverpool, Liverpool L69 7ZB (United Kingdom)

    2013-11-01

    Processing of NMR structures for molecular replacement by AMPLE works well. AMPLE is a program developed for clustering and truncating ab initio protein structure predictions into search models for molecular replacement. Here, it is shown that its core cluster-and-truncate methods also work well for processing NMR ensembles into search models. Rosetta remodelling helps to extend success to NMR structures bearing low sequence identity or high structural divergence from the target protein. Potential future routes to improved performance are considered and practical, general guidelines on using AMPLE are provided.

  8. Cross-layer combining of adaptive modulation and truncated ARQ under cognitive radio resource requirements

    KAUST Repository

    Yang, Yuli

    2012-11-01

    In addressing the issue of taking full advantage of the shared spectrum under imposed limitations in a cognitive radio (CR) network, we exploit a cross-layer design for the communications of secondary users (SUs), which combines adaptive modulation and coding (AMC) at the physical layer with a truncated automatic repeat request (ARQ) protocol at the data link layer. To achieve high spectral efficiency (SE) while maintaining a target packet loss probability (PLP), switching among different transmission modes is performed to match the time-varying propagation conditions pertaining to the secondary link. Herein, by minimizing the SU's packet error rate (PER) with each transmission mode subject to the spectrum-sharing constraints, we obtain the optimal power allocation at the secondary transmitter (ST) and then derive the probability density function (pdf) of the received SNR at the secondary receiver (SR). Based on these statistics, the SU's packet loss rate and average SE are obtained in closed form, considering transmissions over block-fading channels with different distributions. Our results quantify the relation between the performance of a secondary link exploiting the cross-layer-designed adaptive transmission and the interference inflicted on the primary user (PU) in CR networks. © 1967-2012 IEEE.

  9. The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.

    Science.gov (United States)

    Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C

    2017-06-01

    The Erlang-Truncated Exponential (ETE) distribution is modified, and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution are illustrated with an uncensored data set, and its fit is compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistic show that the EETE distribution provides a more reasonable fit than the other competing distributions.
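    The AIC-based model comparison used above can be illustrated generically, with two simple stand-in models rather than the EETE family (whose density is not reproduced in the abstract): fit each candidate by maximum likelihood and prefer the lower AIC.

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion: 2k - 2*loglik; lower is better."""
    return 2 * k - 2 * loglik

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=2000)       # data generated by model 1

# Model 1: exponential; MLE of the rate is 1/mean
lam = 1.0 / x.mean()
ll_exp = float(np.sum(np.log(lam) - lam * x))

# Model 2: half-normal; MLE of sigma^2 is mean(x^2)
s2 = float(np.mean(x ** 2))
ll_hn = float(np.sum(0.5 * np.log(2.0 / (np.pi * s2)) - x ** 2 / (2 * s2)))

print(aic(ll_exp, 1) < aic(ll_hn, 1))  # True: the generating model wins
```

    BIC works the same way with `k * log(n)` in place of `2k`, penalizing extra parameters more heavily for large samples; the paper's tables apply exactly this logic to the EETE and its competitors.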

  10. Truncation effects in the functional renormalization group study of spontaneous symmetry breaking

    International Nuclear Information System (INIS)

    Defenu, N.; Mati, P.; Márián, I.G.; Nándori, I.; Trombettoni, A.

    2015-01-01

    We study the occurrence of spontaneous symmetry breaking (SSB) for O(N) models using functional renormalization group techniques. We show that even the local potential approximation (LPA) when treated exactly is sufficient to give qualitatively correct results for systems with continuous symmetry, in agreement with the Mermin-Wagner theorem and its extension to systems with fractional dimensions. For general N (including the Ising model N=1) we study the solutions of the LPA equations for various truncations around the zero field using a finite number of terms (and different regulators), showing that SSB always occurs even where it should not. The SSB is signalled by Wilson-Fisher fixed points which for any truncation are shown to stay on the line defined by vanishing mass beta functions.

  11. Real-time algorithm for acoustic imaging with a microphone array.

    Science.gov (United States)

    Huang, Xun

    2009-05-01

    The acoustic phased array has become an important testing tool in aeroacoustic research, where the conventional beamforming algorithm has been adopted as a classical processing technique. The computation, however, has to be performed off-line because of its expense. An innovative algorithm with real-time capability is proposed in this work. The algorithm resembles a classical observer in the time domain, extended for array processing to the frequency domain. The observer-based algorithm is beneficial mainly for its capability of operating recursively over sampling blocks. Expensive experimental time can therefore be reduced extensively, since any defect in a test can be corrected instantaneously.
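    For contrast with the recursive observer, the conventional frequency-domain beamformer that serves as the classical baseline can be sketched in a few lines; the array geometry, frequency, and source direction here are invented for illustration.

```python
import numpy as np

c0, f = 343.0, 1000.0                       # speed of sound (m/s), frequency (Hz)
mics = np.linspace(-0.5, 0.5, 8)            # 8-microphone linear array (m)
k = 2.0 * np.pi * f / c0
steer = lambda th: np.exp(1j * k * mics * np.sin(th))   # plane-wave steering vector

# Simulated narrowband snapshots: plane wave from 20 degrees plus weak noise
rng = np.random.default_rng(0)
true_theta = np.deg2rad(20.0)
src = rng.normal(size=200) + 1j * rng.normal(size=200)
snaps = np.outer(steer(true_theta), src)
snaps += 0.1 * (rng.normal(size=snaps.shape) + 1j * rng.normal(size=snaps.shape))

R = snaps @ snaps.conj().T / snaps.shape[1]             # cross-spectral matrix
angles = np.deg2rad(np.arange(-90.0, 91.0))
power = np.array([np.real(steer(t).conj() @ R @ steer(t)) for t in angles])
est = float(np.rad2deg(angles[np.argmax(power)]))
print(est)  # peak of the steered-response power map, near the true 20 degrees
```

    The cost driver is the cross-spectral matrix R, which is normally accumulated over all snapshots before scanning; the observer-based algorithm of the record instead refines its estimate block by block, which is what enables real-time operation.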

  12. A Cough-Based Algorithm for Automatic Diagnosis of Pertussis

    Science.gov (United States)

    Pramono, Renard Xaviero Adhi; Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2016-01-01

    Pertussis is a contagious respiratory disease which mainly affects young children and can be fatal if left untreated. The World Health Organization estimates 16 million pertussis cases annually worldwide, resulting in over 200,000 deaths. It is prevalent mainly in developing countries, where it is difficult to diagnose due to the lack of healthcare facilities and medical professionals. Hence, a low-cost, quick and easily accessible solution is needed to provide pertussis diagnosis in such areas to contain an outbreak. In this paper we present an algorithm for automated diagnosis of pertussis using audio signals, by analyzing cough and whoop sounds. The algorithm consists of three main blocks to perform automatic cough detection, cough classification and whooping sound detection. Each of these extracts relevant features from the audio signal and subsequently classifies them using a logistic regression model. The output from these blocks is collated to provide a pertussis likelihood diagnosis. The performance of the proposed algorithm is evaluated using audio recordings from 38 patients. The algorithm diagnoses pertussis successfully from all audio recordings without any false diagnoses. It can also automatically detect individual cough sounds with 92% accuracy and a PPV of 97%. The low complexity of the proposed algorithm, coupled with its high accuracy, demonstrates that it can be readily deployed using smartphones and can be extremely useful for quick identification or early screening of pertussis and for the control of infection outbreaks. PMID:27583523
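    The classification step shared by the three blocks, logistic regression on extracted audio features, can be sketched on synthetic two-dimensional features. The feature meanings and class separations below are invented for illustration; the paper's actual feature set is not reproduced here.

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression (bias folded into X)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(5)
n = 400
# Two synthetic features standing in for e.g. spectral centroid and zero-crossing rate
X0 = rng.normal([0.0, 0.0], 0.7, size=(n, 2))   # non-cough frames
X1 = rng.normal([2.0, 1.5], 0.7, size=(n, 2))   # cough frames
X = np.hstack([np.vstack([X0, X1]), np.ones((2 * n, 1))])   # add bias column
y = np.r_[np.zeros(n), np.ones(n)]

w = train_logistic(X, y)
acc = float(np.mean((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == y))
print(acc > 0.9)
```

    The appeal of logistic regression in this setting is exactly the low complexity the abstract emphasizes: inference is a single dot product and a sigmoid per frame, well within a smartphone's budget.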

  13. Firewalls as artefacts of inconsistent truncations of quantum geometries

    Science.gov (United States)

    Germani, Cristiano; Sarkar, Debajyoti

    2016-01-01

    In this paper we argue that a firewall is simply a manifestation of an inconsistent truncation of the non-perturbative effects that unitarize the semiclassical black hole. Namely, we show that a naive truncation of quantum corrections to the Hawking spectrum at order $\mathcal{O}(e^{-S})$ inexorably leads to a "localised" divergent energy density near the black hole horizon. Nevertheless, in the same approximation, a distant observer only sees a discretised spectrum and concludes that unitarity is achieved by $\mathcal{O}(e^{-S})$ effects. This is due to the fact that the correct quantum corrections to the Hawking spectrum instead go like $\mathcal{O}(g^{tt} e^{-S})$. Therefore, while at a distance far away from the horizon, where $g^{tt}\approx 1$, quantum corrections {\it are} perturbative, they {\it do} diverge close to the horizon, where $g^{tt}\rightarrow\infty$. Nevertheless, these "corrections" nicely re-sum so that correlation functions are smooth at the would-be black hole horizon. Thus, we conclude that the appearance of firewalls is just a signal of the breaking of the semiclassical approximation at the Page time, even for large black holes.

  14. Eigenvalue routines in NASTRAN: A comparison with the Block Lanczos method

    Science.gov (United States)

    Tischler, V. A.; Venkayya, Vipperla B.

    1993-01-01

    The NASA STRuctural ANalysis (NASTRAN) program is one of the most extensively used engineering application software packages in the world. It contains a wealth of matrix operations and numerical solution techniques, which were used to construct efficient eigenvalue routines. The purpose of this paper is to examine the current eigenvalue routines in NASTRAN and to make efficiency comparisons with a more recent implementation of the Block Lanczos algorithm by Boeing Computer Services (BCS). This eigenvalue routine is now available in the BCS mathematics library as well as in several commercial versions of NASTRAN. In addition, CRAY maintains a modified version of this routine on their network. Several example problems, with varying numbers of degrees of freedom, were selected primarily for efficiency benchmarking. Accuracy is not an issue, because they all gave comparable results. The Block Lanczos algorithm was found to be extremely efficient, in particular for very large problems.

  15. An Adaptive and Time-Efficient ECG R-Peak Detection Algorithm.

    Science.gov (United States)

    Qin, Qin; Li, Jianqing; Yue, Yinggao; Liu, Chengyu

    2017-01-01

    R-peak detection is crucial in electrocardiogram (ECG) signal analysis. This study proposed an adaptive and time-efficient R-peak detection algorithm for ECG processing. First, wavelet multiresolution analysis was applied to enhance the ECG signal representation. Then, the ECG was mirrored to convert large negative R-peaks to positive ones. After that, local maxima were calculated by the first-order forward difference approach and were truncated by amplitude and time-interval thresholds to locate the R-peaks. The algorithm's performance, including detection accuracy and time consumption, was tested on the MIT-BIH arrhythmia database and the QT database. Experimental results showed that the proposed algorithm achieved a mean sensitivity of 99.39%, positive predictivity of 99.49%, and accuracy of 98.89% on the MIT-BIH arrhythmia database, and 99.83%, 99.90%, and 99.73%, respectively, on the QT database. In processing one ECG record, the mean time consumption was 0.872 s for the MIT-BIH arrhythmia database and 0.763 s for the QT database, yielding 30.6% and 32.9% time reductions compared to the traditional Pan-Tompkins method.
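    The detection chain described above (mirroring, first-order forward difference for local maxima, then amplitude and time-interval thresholds) can be sketched on a toy signal. The threshold fractions and the synthetic ECG are illustrative, not the paper's tuned values, and the wavelet enhancement step is omitted.

```python
import numpy as np

def detect_r_peaks(ecg, fs, amp_frac=0.6, refractory=0.25):
    """Locate R-peaks: mirror if the dominant deflection is negative, find
    local maxima via the first-order forward difference, then apply
    amplitude and minimum RR-interval (time) thresholds."""
    x = ecg.copy()
    if abs(x.min()) > abs(x.max()):
        x = -x                                   # mirror negative R-peaks
    d = np.diff(x)
    cand = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1   # slope sign change
    cand = cand[x[cand] >= amp_frac * x.max()]            # amplitude threshold
    peaks, last = [], -np.inf
    for c in cand:                                        # time threshold
        if (c - last) / fs >= refractory:
            peaks.append(c)
            last = c
    return np.array(peaks)

fs = 250
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(2)
ecg = 0.05 * rng.normal(size=t.size)             # noisy baseline
for beat in np.arange(0.5, 10, 1.0):             # sharp Gaussian "R-peaks"
    ecg += np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))
peaks = detect_r_peaks(ecg, fs)
print(len(peaks))  # 10 beats in 10 seconds
```

    The refractory interval plays the role of the physiological minimum RR interval: it discards secondary maxima riding on the same QRS complex without any extra signal processing.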

  16. Implementation of RC5 and RC6 block ciphers on digital images

    International Nuclear Information System (INIS)

    Belhaj Mohamed, A.; Zaibi, G.; Kachouri, A.

    2011-01-01

    With the fast evolution of network technology, security has become an important research axis. Many types of communication require the transmission of digital images. This transmission must be safe, especially in applications that require a fairly high level of security such as military applications, spying, radar, and biometrics. Mechanisms for authentication, confidentiality, and integrity must be implemented within their community. For this reason, several cryptographic algorithms have been developed to ensure the safety and reliability of this transmission. In this paper, we investigate the encryption efficiency of the RC5 and RC6 block ciphers applied to digital images using statistical and differential analysis, and we also investigate the robustness of these two block ciphers against errors caused by ambient noise. The security analysis shows that the RC6 algorithm is more secure than RC5. However, using RC6 to encrypt images in a rough environment (low signal-to-noise ratio) leads to more errors (almost double those of RC5) and may increase energy consumption by retransmitting erroneous packets. A security/energy trade-off must be taken into account for a good choice of encryption algorithm.

  17. Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform

    Science.gov (United States)

    Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin

    2013-12-01

    Recently, Lee and Hou (IEEE Signal Process Lett 13: 461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not true inverse Jacket transforms from a mathematical point of view, because their inverses do not satisfy the usual condition, i.e., the product of a matrix with its inverse is not equal to the identity matrix. Therefore, we mathematically propose a fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower-order Jacket matrices and the basis matrix, fast algorithms for realizing these transforms are obtained. Due to the simple inverses and fast algorithms of the Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices obtained from BIJTs, they can be applied in areas such as the 3GPP physical layer for ultra mobile broadband permutation matrix design, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and 4G MIMO long-term evolution Alamouti precoding design.
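    The computational payoff of such Kronecker-structured transforms rests on the standard identity (A ⊗ B) vec(X) = vec(B X Aᵀ), which applies the transform without ever forming the large Kronecker matrix. A numpy sketch of the generic identity (not the Jacket-specific construction):

```python
import numpy as np

def kron_mul(A, B, x):
    """Compute (A kron B) @ x without forming the Kronecker product:
    reshape x into a matrix of column chunks and use B @ X @ A.T."""
    n, m = B.shape[1], A.shape[1]
    X = x.reshape(m, n).T          # chunks of x become the columns of X
    return (B @ X @ A.T).T.ravel() # re-stack the result columns into a vector

rng = np.random.default_rng(4)
A = rng.random((3, 3))
B = rng.random((4, 4))
x = rng.random(12)
fast = kron_mul(A, B, x)
slow = np.kron(A, B) @ x           # direct (expensive) reference computation
print(np.allclose(fast, slow))     # True
```

    For order-N factors the direct product costs O(N^4) per application while the reshaped form costs O(N^3), and the saving compounds when the transform is itself a chain of small Kronecker factors, as in the successive lower-order Jacket construction.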

  18. Reduction of truncation errors in planar near-field aperture antenna measurements using the method of alternating orthogonal projections

    DEFF Research Database (Denmark)

    Martini, Enrica; Breinbjerg, Olav; Maci, Stefano

    2006-01-01

    A simple and effective procedure for the reduction of truncation error in planar near-field to far-field transformations is presented. The starting point is the consideration that the actual scan plane truncation implies a reliability of the reconstructed plane wave spectrum of the field radiated...

  19. Hybrid particle swarm optimization with Cauchy distribution for solving reentrant flexible flow shop with blocking constraint

    Directory of Open Access Journals (Sweden)

    Chatnugrob Sangsawang

    2016-06-01

    Full Text Available This paper addresses the problem of the two-stage flexible flow shop with reentrant and blocking constraints in hard disk drive manufacturing. This problem can be formulated as a deterministic FFS|stage=2,rcrc, block|Cmax problem. In this study, an adaptive Hybrid Particle Swarm Optimization with Cauchy distribution (HPSO) was developed to solve the problem. The objective of this research is to find the sequences that minimize the makespan. To show its performance, computational experiments were performed on a number of test problems and the results are reported. Experimental results show that the proposed algorithm gives better solutions than the classical Particle Swarm Optimization (PSO) for all test problems. Additionally, the relative improvement (RI) of the makespan solutions obtained by the proposed algorithm with respect to those of the current practice is computed in order to measure the quality of the makespan solutions generated by the proposed algorithm. The RI results show that the HPSO algorithm can improve the makespan solution by an average of 14.78%.
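
    The relative improvement (RI) reported above is, under the usual definition (assumed here), the percentage reduction of the makespan relative to current practice. A minimal sketch with invented makespan values:

```python
def relative_improvement(makespan_current, makespan_algorithm):
    # RI = 100 * (C_current - C_algorithm) / C_current
    # i.e. the percentage by which the algorithm shortens the schedule.
    return 100.0 * (makespan_current - makespan_algorithm) / makespan_current

# Hypothetical numbers: current practice 200 time units, HPSO 170.44.
print(round(relative_improvement(200.0, 170.44), 2))  # -> 14.78
```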

  20. Different truncation methods of AUC between Japan and the EU for bioequivalence assessment: influence on the regulatory judgment.

    Science.gov (United States)

    Oishi, Masayo; Chiba, Koji; Fukushima, Takashi; Tomono, Yoshiro; Suwa, Toshio

    2012-01-01

    In regulatory guidelines for bioequivalence (BE) assessment, the definitions of AUC for the primary assessment differ among ICH countries, i.e., AUC from zero to the last sampling point (AUCall) in Japan, AUC from zero to infinity (AUCinf) or AUC from zero to the last measurable point (AUClast) in the US, and AUClast in the EU. To assure sufficient accuracy of the truncated AUC for BE assessment, the ratio of the truncated AUC (AUCall or AUClast) to AUCinf should be more than 80% in both the Japanese and EU guidelines. We investigated how the difference in the definition of truncated AUC affects BE assessment of sustained release (SR) formulations. Our simulation result demonstrated that AUCall/AUCinf could be ≥80% even when AUClast/AUCinf was not. The difference in the definition of truncated AUC thus affected the judgment of the validity of the truncated AUC for BE assessment, and AUCall could fail to detect an in vivo dissolution profile of a generic SR drug that differs substantially from that of the original drug.
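
    The three AUC definitions can be contrasted with a trapezoidal-rule sketch on an invented concentration-time profile whose last sample is below the limit of quantification (the profile, sampling times and terminal rate constant are all hypothetical):

```python
import numpy as np

def trapz(t, c):
    # Composite trapezoidal rule over the sampled profile.
    return float(np.sum((t[1:] - t[:-1]) * (c[1:] + c[:-1]) / 2.0))

# Invented profile; the last sampling point is below the limit of
# quantification and therefore recorded as zero.
t = np.array([0., 1., 2., 4., 8., 12., 16.])
c = np.array([0., 4., 6., 5., 3., 1.5, 0.])

auc_all = trapz(t, c)                        # Japan: up to the last sampling point
last = int(np.nonzero(c)[0][-1])             # index of last measurable concentration
auc_last = trapz(t[:last + 1], c[:last + 1]) # EU/US: up to the last measurable point
lambda_z = 0.2                               # assumed terminal rate constant (1/h)
auc_inf = auc_last + c[last] / lambda_z      # extrapolated to infinity

# The trailing zero sample adds area to AUCall but not to AUClast,
# which is why the two 80%-of-AUCinf criteria can disagree.
assert auc_all > auc_last
```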

  1. A parallel adaptive mesh refinement algorithm for predicting turbulent non-premixed combusting flows

    International Nuclear Information System (INIS)

    Gao, X.; Groth, C.P.T.

    2005-01-01

    A parallel adaptive mesh refinement (AMR) algorithm is proposed for predicting turbulent non-premixed combusting flows characteristic of gas turbine engine combustors. The Favre-averaged Navier-Stokes equations governing mixture and species transport for a reactive mixture of thermally perfect gases in two dimensions, the two transport equations of the k-ω turbulence model, and the time-averaged species transport equations, are all solved using a fully coupled finite-volume formulation. A flexible block-based hierarchical data structure is used to maintain the connectivity of the solution blocks in the multi-block mesh and facilitate automatic solution-directed mesh adaptation according to physics-based refinement criteria. This AMR approach allows for anisotropic mesh refinement and the block-based data structure readily permits efficient and scalable implementations of the algorithm on multi-processor architectures. Numerical results for turbulent non-premixed diffusion flames, including cold- and hot-flow predictions for a bluff body burner, are described and compared to available experimental data. The numerical results demonstrate the validity and potential of the parallel AMR approach for predicting complex non-premixed turbulent combusting flows. (author)

  2. Reconstruction of a uniform star object from interior x-ray data: uniqueness, stability and algorithm

    International Nuclear Information System (INIS)

    Van Gompel, Gert; Batenburg, K Joost; Defrise, Michel

    2009-01-01

    In this paper we consider the problem of reconstructing a two-dimensional star-shaped object of uniform density from truncated projections of the object. In particular, we prove that such an object is uniquely determined by its parallel projections sampled over a full π angular range with a detector that only covers an interior field-of-view, even if the density of the object is not known a priori. We analyze the stability of this reconstruction problem and propose a reconstruction algorithm. Simulation experiments demonstrate that the algorithm is capable of reconstructing a star-shaped object from interior data, even if the interior region is much smaller than the size of the object. In addition, we present results for a heuristic reconstruction algorithm called DART, which was recently proposed. The heuristic method is shown to yield accurate reconstructions if the density is known in advance, and to have very good stability in the presence of noisy projection data. Finally, the performance of the DBP and DART algorithms is illustrated for the reconstruction of real micro-CT data of a diamond.

  3. A simple derivation and analysis of a helical cone beam tomographic algorithm for long object imaging via a novel definition of region of interest

    International Nuclear Information System (INIS)

    Hu Jicun; Tam, Kwok; Johnson, Roger H

    2004-01-01

    We derive and analyse a simple algorithm first proposed by Kudo et al (2001 Proc. 2001 Meeting on Fully 3D Image Reconstruction in Radiology and Nuclear Medicine (Pacific Grove, CA) pp 7-10) for long object imaging from truncated helical cone beam data via a novel definition of region of interest (ROI). Our approach is based on the theory of short object imaging by Kudo et al (1998 Phys. Med. Biol. 43 2885-909). One of the key findings in their work is that filtering of the truncated projection can be divided into two parts: one, finite in the axial direction, results from ramp filtering the data within the Tam window. The other, infinite in the z direction, results from unbounded filtering of ray sums over PI lines only. We show that for an ROI defined by PI lines emanating from the initial and final source positions on a helical segment, the boundary data which would otherwise contaminate the reconstruction of the ROI can be completely excluded. This novel definition of the ROI leads to a simple algorithm for long object imaging. The overscan of the algorithm is analytically calculated and it is the same as that of the zero boundary method. The reconstructed ROI can be divided into two regions: one is minimally contaminated by the portion outside the ROI, while the other is reconstructed free of contamination. We validate the algorithm with a 3D Shepp-Logan phantom and a disc phantom.

  4. Short-Block Protograph-Based LDPC Codes

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher

    2010-01-01

    Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.

  5. The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data

    Directory of Open Access Journals (Sweden)

    I.E. Okorie

    2017-06-01

    Full Text Available The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution are illustrated with an uncensored data set, and its fit is compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood (−ℓ̂), Akaike information criterion (AIC), Bayesian information criterion (BIC) and the generalized Cramér–von Mises (W⋆) statistic show that the EETE distribution provides a more reasonable fit than the other competing distributions. Keywords: Mathematics, Applied mathematics
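
    The AIC and BIC used above to rank candidate distributions are simple functions of the maximized log-likelihood; a generic sketch with invented values (not the paper's rainfall fits):

```python
import math

def aic(loglik, k):
    # Akaike information criterion: 2k - 2*loglik (smaller is better).
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    # Bayesian information criterion penalizes each parameter by log(n).
    return k * math.log(n) - 2 * loglik

# Invented fits: a 3-parameter model vs a 2-parameter competitor, n = 50 points.
print(aic(-120.5, 3))                            # -> 247.0
print(aic(-123.0, 2))                            # -> 250.0
print(bic(-120.5, 3, 50) < bic(-123.0, 2, 50))   # -> True
```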

  6. Classification With Truncated Distance Kernel.

    Science.gov (United States)

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
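
    A minimal sketch of a truncated L1-distance kernel of the form K(x, y) = max(rho − ||x − y||_1, 0), which is the usual form of such kernels and is assumed here; pairs of points farther apart than rho contribute exactly zero, giving the local behaviour described above:

```python
import numpy as np

def tl1_kernel(X, Y, rho):
    # Truncated L1-distance kernel: K(x, y) = max(rho - ||x - y||_1, 0).
    # d[i, j] is the L1 distance between X[i] and Y[j].
    d = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=2)
    return np.maximum(rho - d, 0.0)

X = np.array([[0.0, 0.0], [1.0, 1.0]])
K = tl1_kernel(X, X, rho=1.5)
assert np.allclose(np.diag(K), 1.5)   # self-distance is 0, so K(x, x) = rho
assert K[0, 1] == 0.0                 # ||x0 - x1||_1 = 2 > rho -> truncated to 0
```

    Because only nearby pairs interact, the resulting decision function is piecewise linear, one linear piece per subregion.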

  7. A Novel Quad Harmony Search Algorithm for Grid-Based Path Finding

    Directory of Open Access Journals (Sweden)

    Saso Koceski

    2014-09-01

    Full Text Available A novel approach to the problem of grid-based path finding has been introduced. The method is a block-based search algorithm founded on two algorithms, namely the quad-tree algorithm, which offers a great opportunity for decreasing the time needed to compute the solution, and the harmony search (HS) algorithm, a meta-heuristic algorithm used to obtain the optimal solution. This quad HS algorithm uses the quad-tree decomposition of free space in the grid to mark the free areas and treat each of them as a single node, which greatly improves the execution time. The results of the quad HS algorithm have been compared to other meta-heuristic algorithms, i.e., ant colony, genetic algorithm, particle swarm optimization and simulated annealing, and it was shown to obtain the best results in terms of computation time while still finding the optimal path.
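
    The quad-tree decomposition step can be sketched as a recursive split of the occupancy grid into maximal all-free square blocks, each of which then becomes a single search node (the grid and helper below are invented for illustration, not the authors' implementation):

```python
def quadtree_free_blocks(grid, x, y, size):
    # Recursively split the occupancy grid (0 = free, 1 = blocked) and
    # return maximal all-free square blocks as (x, y, size) tuples.
    # Treating each free block as one graph node is what shrinks the
    # search space handed to the harmony search.
    cells = [grid[i][j] for i in range(y, y + size) for j in range(x, x + size)]
    if all(c == 0 for c in cells):
        return [(x, y, size)]
    if size == 1:
        return []                      # occupied leaf cell
    h = size // 2
    return (quadtree_free_blocks(grid, x, y, h)
            + quadtree_free_blocks(grid, x + h, y, h)
            + quadtree_free_blocks(grid, x, y + h, h)
            + quadtree_free_blocks(grid, x + h, y + h, h))

grid = [[0, 0, 1, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
blocks = quadtree_free_blocks(grid, 0, 0, 4)
assert (0, 0, 2) in blocks   # free top-left quadrant stays one node
assert (0, 2, 2) in blocks   # free bottom-left quadrant stays one node
```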

  8. The Modified Frequency Algorithm of Digital Watermarking of Still Images Resistant to JPEG Compression

    Directory of Open Access Journals (Sweden)

    V. A. Batura

    2015-01-01

    Full Text Available Digital watermarking is an effective means of copyright protection for multimedia products (in particular, still images). Digital watermarking is the process of embedding into a protected object a digital watermark that is invisible to the human eye. However, a fairly large number of harmful manipulations can destroy a watermark embedded in a still image. The most widespread attack is JPEG compression, owing to the efficiency of this compression format and its prevalence on the Internet. This article presents a new algorithm that modifies the algorithm of Elham. The algorithm embeds a watermark into the frequency coefficients of the discrete Hadamard transform of selected image blocks. Image blocks are selected for watermark embedding on the basis of a preset threshold on the entropy of their pixels. Low-frequency coefficients are selected for embedding by comparing the discrete cosine transform coefficients with a predetermined threshold that depends on the product of the embedded watermark coefficient and a change coefficient. The resistance of the new algorithm to JPEG compression, noise, filtering, colour change, resizing and histogram equalization is analysed in detail. The study compares the watermark extracted from the damaged image with the embedded logo. The ability of the algorithm to embed a watermark with a minimum level of image distortion is additionally analysed. It is established that, compared with the original algorithm of Elham, the new algorithm is fully resistant to JPEG compression and shows improved resistance to noise, brightness change and histogram equalization. The developed algorithm can be used for copyright protection of still images. Further studies will be used to study the

  9. Combined spatial/angular domain decomposition SN algorithms for shared memory parallel machines

    International Nuclear Information System (INIS)

    Hunter, M.A.; Haghighat, A.

    1993-01-01

    Several parallel processing algorithms on the basis of spatial and angular domain decomposition methods are developed and incorporated into a two-dimensional discrete ordinates transport theory code. These algorithms divide the spatial and angular domains into independent subdomains so that the flux calculations within each subdomain can be processed simultaneously. Two spatial parallel algorithms (Block-Jacobi, red-black), one angular parallel algorithm (η-level), and their combinations are implemented on an eight processor CRAY Y-MP. Parallel performances of the algorithms are measured using a series of fixed source RZ geometry problems. Some of the results are also compared with those executed on an IBM 3090/600J machine. (orig.)
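
    The red-black ordering can be illustrated on a model Poisson problem (not the discrete-ordinates transport equations of the paper); points of one colour depend only on points of the other colour, so each half-sweep is embarrassingly parallel:

```python
import numpy as np

def red_black_sweep(u, f, h):
    # One red-black Gauss-Seidel sweep for -laplace(u) = f on a unit
    # square with homogeneous Dirichlet boundaries. Within each colour
    # all updates are independent, so each half-sweep can be distributed
    # across processors, which is the point of the decomposition.
    for colour in (0, 1):
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                if (i + j) % 2 == colour:
                    u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                      + u[i, j - 1] + u[i, j + 1]
                                      + h * h * f[i, j])
    return u

n = 17
h = 1.0 / (n - 1)
u = np.zeros((n, n))
f = np.ones((n, n))
for _ in range(200):
    u = red_black_sweep(u, f, h)
# The converged peak of u for this problem is about 0.074 at the centre.
```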

  10. Optimal design of the heat pipe using TLBO (teaching–learning-based optimization) algorithm

    International Nuclear Information System (INIS)

    Rao, R.V.; More, K.C.

    2015-01-01

    Heat pipe is a highly efficient and reliable heat transfer component. It is a closed container designed to transfer a large amount of heat in a system. Since the heat pipe operates on a closed two-phase cycle, the heat transfer capacity is greater than for solid conductors. Also, the thermal response time is less than with solid conductors. The three major elemental parts of the rotating heat pipe are: a cylindrical evaporator, a truncated cone condenser, and a fixed amount of working fluid. In this paper, a recently proposed stochastic advanced optimization algorithm called TLBO (Teaching–Learning-Based Optimization) is used for single-objective as well as multi-objective design optimization of the heat pipe. It is easy to implement, does not make use of derivatives and can be applied to unconstrained or constrained problems. Two examples of heat pipe design are presented in this paper. The results of applying the TLBO algorithm to the design optimization of the heat pipe are compared with those of the NPGA (Niched Pareto Genetic Algorithm), GEM (Grenade Explosion Method) and GEO (Generalized Extremal Optimization). It is found that the TLBO algorithm produces better results than those obtained using the NPGA, GEM and GEO algorithms. - Highlights: • The TLBO (Teaching–Learning-Based Optimization) algorithm is used for the design and optimization of a heat pipe. • Two examples of heat pipe design and optimization are presented. • The TLBO algorithm is proved better than the other optimization algorithms in terms of results and convergence.
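
    A minimal sketch of the TLBO teacher and learner phases on a generic test objective (the sphere function), not the heat-pipe design model of the paper; the population mean plays the role of the class average and the best learner acts as the teacher:

```python
import numpy as np

rng = np.random.default_rng(0)

def tlbo_minimize(fun, bounds, pop=20, iters=100):
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, 2))
    F = np.apply_along_axis(fun, 1, X)
    for _ in range(iters):
        # Teacher phase: pull the class toward the best learner.
        teacher = X[F.argmin()]
        TF = rng.integers(1, 3)                  # teaching factor, 1 or 2
        Xn = np.clip(X + rng.random(X.shape) * (teacher - TF * X.mean(0)), lo, hi)
        Fn = np.apply_along_axis(fun, 1, Xn)
        better = Fn < F
        X[better], F[better] = Xn[better], Fn[better]
        # Learner phase: each learner moves toward a better random partner.
        partner = rng.permutation(pop)
        step = np.where((F < F[partner])[:, None], X - X[partner], X[partner] - X)
        Xn = np.clip(X + rng.random(X.shape) * step, lo, hi)
        Fn = np.apply_along_axis(fun, 1, Xn)
        better = Fn < F
        X[better], F[better] = Xn[better], Fn[better]
    return F.min()

best = tlbo_minimize(lambda x: float(np.sum(x ** 2)), (-5.0, 5.0))
assert best < 0.5   # converges close to the optimum at the origin
```

    Note that, as the abstract says, no derivatives of the objective are needed; only function evaluations drive both phases.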

  11. Filter Factors of Truncated TLS Regularization with Multiple Observations

    Czech Academy of Sciences Publication Activity Database

    Hnětynková, I.; Plešinger, Martin; Žáková, J.

    2017-01-01

    Roč. 62, č. 2 (2017), s. 105-120 ISSN 0862-7940 R&D Projects: GA ČR GA13-06684S Institutional support: RVO:67985807 Keywords : truncated total least squares * multiple right-hand sides * eigenvalues of rank-d update * ill-posed problem * regularization * filter factors Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 0.618, year: 2016 http://hdl.handle.net/10338.dmlcz/146698

  12. Identification of target genes for wild type and truncated HMGA2 in mesenchymal stem-like cells

    DEFF Research Database (Denmark)

    Henriksen, Jørn Mølgaard; Stabell, Marianne; Meza-Zepeda, Leonardo A

    2010-01-01

    The HMGA2 gene, coding for an architectural transcription factor involved in mesenchymal embryogenesis, is frequently deranged by translocation and/or amplification in mesenchymal tumours, generally leading to over-expression of shortened transcripts and a truncated protein.

  13. Free vibration of symmetric angle ply truncated conical shells under different boundary conditions using spline method

    Energy Technology Data Exchange (ETDEWEB)

    Viswanathan, K. K.; Aziz, Z. A.; Javed, Saira; Yaacob, Y. [Universiti Teknologi Malaysia, Johor Bahru (Malaysia); Pullepu, Babuji [S R M University, Chennai (India)

    2015-05-15

    Free vibration of symmetric angle-ply laminated truncated conical shell is analyzed to determine the effects of frequency parameter and angular frequencies under different boundary condition, ply angles, different material properties and other parameters. The governing equations of motion for truncated conical shell are obtained in terms of displacement functions. The displacement functions are approximated by cubic and quintic splines resulting into a generalized eigenvalue problem. The parametric studies have been made and discussed.

  14. Free vibration of symmetric angle ply truncated conical shells under different boundary conditions using spline method

    International Nuclear Information System (INIS)

    Viswanathan, K. K.; Aziz, Z. A.; Javed, Saira; Yaacob, Y.; Pullepu, Babuji

    2015-01-01

    Free vibration of symmetric angle-ply laminated truncated conical shell is analyzed to determine the effects of frequency parameter and angular frequencies under different boundary condition, ply angles, different material properties and other parameters. The governing equations of motion for truncated conical shell are obtained in terms of displacement functions. The displacement functions are approximated by cubic and quintic splines resulting into a generalized eigenvalue problem. The parametric studies have been made and discussed.

  15. Increased infectivity in human cells and resistance to antibody-mediated neutralization by truncation of the SIV gp41 cytoplasmic tail

    Directory of Open Access Journals (Sweden)

    Takeo Kuwata

    2013-05-01

    Full Text Available The role of antibodies in protecting the host from human immunodeficiency virus type 1 (HIV-1) infection is of considerable interest, particularly because the RV144 trial results suggest that antibodies contribute to protection. Although infection of nonhuman primates with simian immunodeficiency virus (SIV) is commonly used as an animal model of HIV-1 infection, the viral epitopes that elicit potent and broad neutralizing antibodies to SIV have not been identified. We isolated a monoclonal antibody (MAb), B404, that potently and broadly neutralizes various SIV strains. B404 targets a conformational epitope comprising the V3 and V4 loops of Env that is intensely exposed when Env binds CD4. B404-resistant variants were obtained by passaging viruses in the presence of increasing concentrations of B404 in PM1/CCR5 cells. Genetic analysis revealed that the Q733stop mutation, which truncates the cytoplasmic tail of gp41, was the first major substitution in Env during passage. The maximal inhibition by B404 and other MAbs was significantly decreased against a recombinant virus with a gp41 truncation compared with the parental SIVmac316. This indicates that the gp41 truncation was associated with resistance to antibody-mediated neutralization. The infectivities of the recombinant virus with the gp41 truncation were 7900-fold, 1000-fold, and 140-fold higher than those of SIVmac316 in PM1, PM1/CCR5, and TZM-bl cells, respectively. Immunoblotting analysis revealed that the gp41 truncation enhanced the incorporation of Env into virions. The effect of the gp41 truncation on infectivity was not obvious in the HSC-F macaque cell line, although the resistance of viruses harboring the gp41 truncation to neutralization was maintained. These results suggest that viruses with a truncated gp41 cytoplasmic tail were selected by increased infectivity in human cells and by acquired resistance to neutralizing antibody.

  16. The lamppost model: effects of photon trapping, the bottom lamp and disc truncation

    Science.gov (United States)

    Niedźwiecki, Andrzej; Zdziarski, Andrzej A.

    2018-04-01

    We study the lamppost model, in which the primary X-ray sources in accreting black-hole systems are located symmetrically on the rotation axis on both sides of the black hole, which is surrounded by an accretion disc. We show the importance of the emission of the source on the side opposite to the observer. Due to gravitational light bending, its emission can increase the direct (i.e., not re-emitted by the disc) flux by as much as an order of magnitude. This happens for close to face-on observers when the disc is even moderately truncated. For truncated discs, we also consider effects of the emission of the top source gravitationally bent around the black hole. We also present results for the attenuation of the observed radiation with respect to that emitted by the lamppost as functions of the lamppost height, black-hole spin and the degree of disc truncation. This attenuation, which is due to time dilation, gravitational redshift and the loss of photons crossing the black-hole horizon, can be as severe as several orders of magnitude for low lamppost heights. We also consider the contribution to the observed flux due to re-emission by optically-thick matter within the innermost stable circular orbit.

  17. Spectroscopic characterization of a truncated hemoglobin from the nitrogen-fixing bacterium Herbaspirillum seropedicae.

    Science.gov (United States)

    Razzera, Guilherme; Vernal, Javier; Baruh, Debora; Serpa, Viviane I; Tavares, Carolina; Lara, Flávio; Souza, Emanuel M; Pedrosa, Fábio O; Almeida, Fábio C L; Terenzi, Hernán; Valente, Ana Paula

    2008-09-01

    The Herbaspirillum seropedicae genome sequence encodes a truncated hemoglobin typical of group II (Hs-trHb1) members of this family. We show that His-tagged recombinant Hs-trHb1 is monomeric in solution, and its optical spectrum resembles those of previously reported globins. NMR analysis allowed us to assign heme substituents. All data suggest that Hs-trHb1 undergoes a transition from an aquomet form in the ferric state to a hexacoordinate low-spin form in the ferrous state. The close positions of Ser-E7, Lys-E10, Tyr-B10, and His-CD1 in the distal pocket place them as candidates for heme coordination and ligand regulation. Peroxide degradation kinetics suggests an easy access to the heme pocket, as the protein offered no protection against peroxide degradation when compared with free heme. The high solvent exposure of the heme may be due to the presence of a flexible loop in the access pocket, as suggested by a structural model obtained by using homologous globins as templates. The truncated hemoglobin described here has unique features among truncated hemoglobins and may function in the facilitation of O(2) transfer and scavenging, playing an important role in the nitrogen-fixation mechanism.

  18. Truncation of a mannanase from Trichoderma harzianum improves its enzymatic properties and expression efficiency in Trichoderma reesei.

    Science.gov (United States)

    Wang, Juan; Zeng, Desheng; Liu, Gang; Wang, Shaowen; Yu, Shaowen

    2014-01-01

    To obtain high expression efficiency of a mannanase gene, ThMan5A, cloned from Trichoderma harzianum MGQ2, both the full-length gene and a truncated gene (ThMan5AΔCBM) that contains only the catalytic domain, were expressed in Trichoderma reesei QM9414 using the strong constitutive promoter of the gene encoding pyruvate decarboxylase (pdc), and purified to homogeneity, respectively. We found that truncation of the gene improved its expression efficiency as well as the enzymatic properties of the encoded protein. The recombinant strain expressing ThMan5AΔCBM produced 2,460 ± 45.1 U/ml of mannanase activity in the culture supernatant; 2.3-fold higher than when expressing the full-length ThMan5A gene. In addition, the truncated mannanase had superior thermostability compared with the full-length enzyme and retained 100 % of its activity after incubation at 60 °C for 48 h. Our results clearly show that the truncated ThMan5A enzyme exhibited improved characteristics both in expression efficiency and in its thermal stability. These characteristics suggest that ThMan5AΔCBM has potential applications in the food, feed, paper, and pulp industries.

  19. Research on Segmentation Monitoring Control of IA-RWA Algorithm with Probe Flow

    Science.gov (United States)

    Ren, Danping; Guo, Kun; Yao, Qiuyan; Zhao, Jijun

    2018-04-01

    The impairment-aware routing and wavelength assignment algorithm with probe flow (P-IA-RWA) can make an accurate estimation of the transmission quality of a link when a connection request arrives. But it also causes some problems. The probe flow data introduced in the P-IA-RWA algorithm can result in competition for wavelength resources. In order to reduce this competition and the blocking probability of the network, a new P-IA-RWA algorithm with a segmentation monitoring-control mechanism (SMC-P-IA-RWA) is proposed. The algorithm reduces the holding time of network resources for the probe flow. It suitably segments the candidate path for data transmission, and the transmission quality of the probe flow sent by the source node is monitored at the endpoint of each segment. The transmission quality of the data can also be monitored, so as to apply the appropriate treatment and avoid unnecessary probe flow. The simulation results show that the proposed SMC-P-IA-RWA algorithm can effectively reduce the blocking probability. It provides a better solution to the competition for resources between the probe flow and the main data to be transferred. And it is more suitable for scheduling control in large-scale networks.

  20. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    Directory of Open Access Journals (Sweden)

    Shaoming Pan

    Full Text Available Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm displays a higher total parallel access probability than those of other algorithms by approximately 10-15% and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.

  1. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    Science.gov (United States)

    Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen

    2015-01-01

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm displays a higher total parallel access probability than those of other algorithms by approximately 10-15% and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
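
    One plausible way to sketch the access correlation matrix described above is as a co-occurrence count of blocks requested in the same log session; the session grouping and tile names below are invented, and the paper's construction from log information may differ in detail:

```python
from collections import defaultdict
from itertools import combinations

def access_correlation(sessions):
    # Count, for every unordered pair of blocks, how often the two
    # blocks are requested together in one session. A sparse dict
    # stands in for the (symmetric) access correlation matrix.
    corr = defaultdict(int)
    for blocks in sessions:
        for a, b in combinations(sorted(set(blocks)), 2):
            corr[(a, b)] += 1
    return corr

log = [["tile3", "tile4"], ["tile3", "tile4", "tile9"], ["tile4", "tile9"]]
corr = access_correlation(log)
assert corr[("tile3", "tile4")] == 2
assert corr[("tile4", "tile9")] == 2
# Strongly correlated blocks would then be placed on different storage
# nodes so that a typical request can fetch them in parallel.
```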

  2. A comparative analysis of clustering algorithms: O2 migration in truncated hemoglobin I from transition networks

    Energy Technology Data Exchange (ETDEWEB)

    Cazade, Pierre-André; Berezovska, Ganna; Meuwly, Markus, E-mail: m.meuwly@unibas.ch [Department of Chemistry, University of Basel, Klingelbergstrasse 80, CH-4056 Basel (Switzerland); Zheng, Wenwei; Clementi, Cecilia [Department of Chemistry, Rice University, 6100 Main St., Houston, Texas 77005 (United States); Prada-Gracia, Diego; Rao, Francesco [School of Soft Matter Research, Freiburg Institute for Advanced Studies, Albertstrasse 19, 79104 Freiburg im Breisgau (Germany)

    2015-01-14

    The ligand migration network for O2 diffusion in truncated Hemoglobin N is analyzed based on three different clustering schemes. For coordinate-based clustering, the conventional k–means and the kinetics-based Markov Clustering (MCL) methods are employed, whereas the locally scaled diffusion map (LSDMap) method is a collective-variable-based approach. It is found that all three methods agree well in their geometrical definition of the most important docking site, and all experimentally known docking sites are recovered by all three methods. Also, for most of the states, their population coincides quite favourably, whereas the kinetics of and between the states differs. One of the major differences between k–means and MCL clustering on the one hand and LSDMap on the other is that the latter finds one large primary cluster containing the Xe1a, IS1, and ENT states. This is related to the fact that the motion within the state occurs on similar time scales, whereas structurally the state is found to be quite diverse. In agreement with previous explicit atomistic simulations, the Xe3 pocket is found to be a highly dynamical site which points to its potential role as a hub in the network. This is also highlighted in the fact that LSDMap cannot identify this state. First passage time distributions from MCL clusterings using a one- (ligand-position) and two-dimensional (ligand-position and protein-structure) descriptor suggest that ligand- and protein-motions are coupled. The benefits and drawbacks of the three methods are discussed in a comparative fashion and highlight that depending on the questions at hand the best-performing method for a particular data set may differ.

  3. Phencyclidine block of calcium current in isolated guinea-pig hippocampal neurones.

    Science.gov (United States)

    Ffrench-Mullen, J M; Rogawski, M A

    1992-10-01

    apparent extent of inactivation of the Ca2+ channel current during prolonged voltage steps. This increase in apparent inactivation was more pronounced at depolarized potentials. Inactivation at -10 mV proceeded in two exponential phases; PCP had little effect on the fast decay phase and caused a moderate speeding of the slow decay phase. Although block of the activated state evolved on the same time scale as inactivation, the apparent rate of inactivation was not increased in a concentration-dependent fashion by PCP indicating that the block does not occur by a conventional open channel mechanism.(ABSTRACT TRUNCATED AT 400 WORDS)

  4. A novel high-frequency encoding algorithm for image compression

    Science.gov (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
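Steps (1) and (4) of the pipeline above, block DCT and delta coding of the DC components, can be sketched as follows. This is only an illustrative sketch; the block size, function names, and data layout are assumptions, and the AC-coefficient minimization, look-up table, and arithmetic coding stages of the paper are omitted.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct(image, n=8):
    """Step (1): split a grayscale image into n x n blocks and apply a 2D DCT-II."""
    h, w = image.shape
    blocks = []
    for r in range(0, h, n):
        for c in range(0, w, n):
            b = image[r:r + n, c:c + n].astype(float)
            # 2D DCT = 1D DCT along rows, then along columns
            blocks.append(dct(dct(b.T, norm='ortho').T, norm='ortho'))
    return blocks

def delta_encode_dc(blocks):
    """Step (4): differential (delta) coding of the DC components."""
    dc = np.array([b[0, 0] for b in blocks])
    return np.concatenate(([dc[0]], np.diff(dc)))
```

At decompression the DC stream is recovered with a cumulative sum over the deltas before the inverse DCT is applied.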

  5. Blunt traumatic axillary artery truncation, in the absence of associated fracture.

    Science.gov (United States)

    Bokser, Emily; Caputo, William; Hahn, Barry; Greenstein, Josh

    2018-02-01

    Axillary artery injuries can be associated with both proximal humeral fractures (Naouli et al., 2016; Ng et al., 2016) [1,2] as well as shoulder dislocations (Leclerc et al., 2017; Karnes et al., 2016) [3,4]. We report a rare case of an isolated axillary artery truncation following blunt trauma without any associated fracture or dislocation. A 58-year-old male presented to the emergency department for evaluation after falling on his outstretched right arm. The patient was found to have an absent right radial pulse with decreased sensation to the right arm. Point of care ultrasound showed findings suspicious for traumatic axillary artery injury, and X-rays did not demonstrate any fracture. Computed tomography with angiography confirmed axillary artery truncation with active extravasation. The patient underwent successful vascular repair with an axillary artery bypass. Although extremity injuries are common in emergency departments, emergency physicians need to recognize the risk for vascular injuries, even without associated fracture or dislocation. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Aarle, Wim van, E-mail: wim.vanaarle@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, Willem Jan, E-mail: willemjan.palenstijn@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); De Beenhouwer, Jan, E-mail: jan.debeenhouwer@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Altantzis, Thomas, E-mail: thomas.altantzis@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Bals, Sara, E-mail: sara.bals@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Batenburg, K. Joost, E-mail: joost.batenburg@cwi.nl [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); Mathematical Institute, Leiden University, P.O. Box 9512, NL-2300 RA Leiden (Netherlands); Sijbers, Jan, E-mail: jan.sijbers@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-10-15

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series.

  7. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    International Nuclear Information System (INIS)

    Aarle, Wim van; Palenstijn, Willem Jan; De Beenhouwer, Jan; Altantzis, Thomas; Bals, Sara; Batenburg, K. Joost; Sijbers, Jan

    2015-01-01

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series

  8. Firewalls as artefacts of inconsistent truncations of quantum geometries

    Energy Technology Data Exchange (ETDEWEB)

    Germani, Cristiano [Max-Planck-Institut fuer Physik, Muenchen (Germany); Arnold Sommerfeld Center, Ludwig-Maximilians-University, Muenchen (Germany); Institut de Ciencies del Cosmos, Universitat de Barcelona (Spain); Sarkar, Debajyoti [Max-Planck-Institut fuer Physik, Muenchen (Germany); Arnold Sommerfeld Center, Ludwig-Maximilians-University, Muenchen (Germany)

    2016-01-15

In this paper we argue that a firewall is simply a manifestation of an inconsistent truncation of non-perturbative effects that unitarize the semiclassical black hole. Namely, we show that a naive truncation of quantum corrections to the Hawking spectrum at order O(e{sup -S}) inexorably leads to a ''localised'' divergent energy density near the black hole horizon. Nevertheless, in the same approximation, a distant observer only sees a discretised spectrum and concludes that unitarity is achieved by (e{sup -S}) effects. This is due to the fact that instead, the correct quantum corrections to the Hawking spectrum go like (g{sup tt}e{sup -S}). Therefore, while at a distance far away from the horizon, where g{sup tt} ∼ 1, quantum corrections are perturbative, they do diverge close to the horizon, where g{sup tt} → ∞. Nevertheless, these ''corrections'' nicely re-sum so that correlation functions are smooth at the would-be black hole horizon. Thus, we conclude that the appearance of firewalls is just a signal of the breaking of the semiclassical approximation at the Page time, even for large black holes. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  9. Firewalls as artefacts of inconsistent truncations of quantum geometries

    International Nuclear Information System (INIS)

    Germani, Cristiano; Sarkar, Debajyoti

    2016-01-01

In this paper we argue that a firewall is simply a manifestation of an inconsistent truncation of non-perturbative effects that unitarize the semiclassical black hole. Namely, we show that a naive truncation of quantum corrections to the Hawking spectrum at order O(e -S ) inexorably leads to a ''localised'' divergent energy density near the black hole horizon. Nevertheless, in the same approximation, a distant observer only sees a discretised spectrum and concludes that unitarity is achieved by (e -S ) effects. This is due to the fact that instead, the correct quantum corrections to the Hawking spectrum go like (g tt e -S ). Therefore, while at a distance far away from the horizon, where g tt ∼ 1, quantum corrections are perturbative, they do diverge close to the horizon, where g tt → ∞. Nevertheless, these ''corrections'' nicely re-sum so that correlation functions are smooth at the would-be black hole horizon. Thus, we conclude that the appearance of firewalls is just a signal of the breaking of the semiclassical approximation at the Page time, even for large black holes. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  10. Saturation Detection-Based Blocking Scheme for Transformer Differential Protection

    Directory of Open Access Journals (Sweden)

    Byung Eun Lee

    2014-07-01

This paper describes a current differential relay for transformer protection that operates in conjunction with a core saturation detection-based blocking algorithm. The differential current for the magnetic inrush or over-excitation has a point of inflection at the start and end of each saturation period of the transformer core. At these instants, discontinuities arise in the first-difference function of the differential current. The second- and third-difference functions convert the points of inflection into pulses, the magnitudes of which are large enough to detect core saturation. The blocking signal is activated if the third-difference of the differential current is larger than the threshold and is maintained for one cycle. In addition, a method to discriminate between transformer saturation and current transformer (CT) saturation is included. The performance of the proposed blocking scheme was compared with that of a conventional harmonic blocking method. The test results indicate that the proposed scheme successfully discriminates internal faults even with CT saturation from the magnetic inrush, over-excitation, and external faults with CT saturation, and can significantly reduce the operating time delay of the relay.
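The inflection-point detection described above can be sketched numerically: a slope discontinuity in the sampled differential current produces a pulse pair in its third difference. The threshold here is an illustrative placeholder; the paper derives its own threshold and one-cycle hold logic, which are not reproduced.

```python
import numpy as np

def saturation_candidates(i_diff, threshold):
    """Indices where |third difference| of the sampled differential current
    exceeds the threshold -- candidate start/end points of a core-saturation
    period."""
    d3 = np.diff(np.asarray(i_diff, dtype=float), n=3)
    return np.flatnonzero(np.abs(d3) > threshold)
```

A smooth (linear) current produces no pulses, while a kink in the waveform yields a localized pair of third-difference pulses at the inflection point.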

  11. Probabilistic Decision Based Block Partitioning for Future Video Coding

    KAUST Repository

    Wang, Zhao

    2017-11-29

In the latest Joint Video Exploration Team development, the quadtree plus binary tree (QTBT) block partitioning structure has been proposed for future video coding. Compared to the traditional quadtree structure of the High Efficiency Video Coding (HEVC) standard, QTBT provides more flexible patterns for splitting the blocks, which results in dramatically increased combinations of block partitions and high computational complexity. In view of this, a confidence interval based early termination (CIET) scheme is proposed for QTBT to identify the unnecessary partition modes in the sense of rate-distortion (RD) optimization. In particular, a RD model is established to predict the RD cost of each partition pattern without the full encoding process. Subsequently, the mode decision problem is cast into a probabilistic framework to select the final partition based on the confidence interval decision strategy. Experimental results show that the proposed CIET algorithm can speed up the QTBT block partitioning structure, reducing encoding time by 54.7% with only a 1.12% increase in bit rate. Moreover, the proposed scheme performs consistently well for high resolution sequences, for which video coding efficiency is crucial in real applications.

  12. Maltose binding protein-fusion enhances the bioactivity of truncated forms of pig myostatin propeptide produced in E. coli.

    Directory of Open Access Journals (Sweden)

    Sang Beum Lee

Myostatin (MSTN) is a potent negative regulator of skeletal muscle growth. MSTN propeptide (MSTNpro) inhibits MSTN binding to its receptor through complex formation with MSTN, implying that MSTNpro can be a useful agent to improve skeletal muscle growth in meat-producing animals. Four different truncated forms of pig MSTNpro containing N-terminal maltose binding protein (MBP) as a fusion partner were expressed in E. coli, and purified by the combination of affinity chromatography and gel filtration. The MSTN-inhibitory capacities of these proteins were examined in an in vitro gene reporter assay. A MBP-fused, truncated MSTNpro containing residues 42-175 (MBP-Pro42-175) exhibited the same MSTN-inhibitory potency as the full sequence MSTNpro. Truncated MSTNpro proteins containing either residues 42-115 (MBP-Pro42-115) or 42-98 (MBP-Pro42-98) also exhibited MSTN-inhibitory capacity even though the potencies were significantly lower than that of full sequence MSTNpro. In pull-down assays, MBP-Pro42-175, MBP-Pro42-115, and MBP-Pro42-98 demonstrated their binding to MSTN. MBP was removed from the truncated MSTNpro proteins by incubation with factor Xa to examine the potential role of MBP on MSTN-inhibitory capacity of those proteins. Removal of MBP from MBP-Pro42-175 and MBP-Pro42-98 resulted in 20-fold decrease in MSTN-inhibitory capacity of Pro42-175 and abolition of MSTN-inhibitory capacity of Pro42-98, indicating that MBP as fusion partner enhanced the MSTN-inhibitory capacity of those truncated MSTNpro proteins. In summary, this study shows that MBP is a very useful fusion partner in enhancing MSTN-inhibitory potency of truncated forms of MSTNpro proteins, and MBP-fused pig MSTNpro consisting of amino acid residues 42-175 is sufficient to maintain the full MSTN-inhibitory capacity.

  13. The Breakdown: Hillslope Sources of Channel Blocks in Bedrock Landscapes

    Science.gov (United States)

    Selander, B.; Anderson, S. P.; Rossi, M.

    2017-12-01

Block delivery from hillslopes is a poorly understood process that influences bedrock channel incision rates and shapes steep terrain. Previous studies demonstrate that hillslope sediment delivery rate and grain size increase with channel downcutting rate or fracture density (Attal et al., 2015, ESurf). However, blocks that exceed the competence of the channel can inhibit incision. In Boulder Creek, a bedrock channel in the Colorado Front Range, large boulders (>1 m diameter) are most numerous in the steepest channel reaches; their distribution seems to reflect autogenic channel-hillslope feedback between incision rate and block delivery (Shobe et al., 2016, GRL). It is clear that the processes, rates of production, and delivery of large blocks from hillslopes into channels are critical to our understanding of steep terrain evolution. Fundamental questions are 1) whether block production or block delivery is rate limiting, 2) what mechanisms release blocks, and 3) how block production and transport affect slope morphology. As a first step, we map rock outcrops on the granodiorite hillslopes lining Boulder Creek within Boulder Canyon using a high resolution DEM. Our algorithm uses high ranges of curvature values in conjunction with slopes steeper than the angle of repose to quickly identify rock outcrops. We field verified mapped outcrop and sediment-mantled locations on hillslopes above and below the channel knickzone. We find a greater abundance of exposed rock outcrops on steeper hillslopes in Boulder Canyon. Additionally, we find that channel reaches with large in-channel blocks are located at the base of hillslopes with large areas of exposed bedrock, while reaches lacking large in-channel blocks tend to be at the base of predominantly soil-mantled and forested hillslopes. These observations support the model of block delivery and channel incision of Shobe et al. (2016, GRL). Moreover, these results highlight the conundrum of how rapid channel incision is

  14. One-dimensional Lagrangian implicit hydrodynamic algorithm for Inertial Confinement Fusion applications

    Energy Technology Data Exchange (ETDEWEB)

    Ramis, Rafael, E-mail: rafael.ramis@upm.es

    2017-02-01

A new one-dimensional hydrodynamic algorithm, specifically developed for Inertial Confinement Fusion (ICF) applications, is presented. The scheme uses a fully conservative Lagrangian formulation in planar, cylindrical, and spherically symmetric geometries, and supports arbitrary equations of state with separate ion and electron components. Fluid equations are discretized on a staggered grid and stabilized by means of an artificial viscosity formulation. The space discretized equations are advanced in time using an implicit algorithm. The method includes several numerical parameters that can be adjusted locally. In regions with a low Courant–Friedrichs–Lewy (CFL) number, where stability is not an issue, they can be adjusted to optimize the accuracy. In typical problems, the truncation error can be reduced by a factor of 2 to 10 in comparison with conventional explicit algorithms. On the other hand, in regions with high CFL numbers, the parameters can be set to guarantee unconditional stability. The method can be integrated into complex ICF codes. This is demonstrated through several examples covering a wide range of situations: from thermonuclear ignition physics, where alpha particles are managed as an additional species, to low intensity laser–matter interaction, where liquid–vapor phase transitions occur.

  15. Expression and characterization of an N-truncated form of the NifA protein of Azospirillum brasilense

    Directory of Open Access Journals (Sweden)

    C.Y. Nishikawa

    2012-02-01

Azospirillum brasilense is a nitrogen-fixing bacterium associated with important agricultural crops such as rice, wheat and maize. The expression of genes responsible for nitrogen fixation (nif genes) in this bacterium is dependent on the transcriptional activator NifA. This protein contains three structural domains: the N-terminal domain is responsible for the negative control by fixed nitrogen; the central domain interacts with the RNA polymerase σ54 co-factor and the C-terminal domain is involved in DNA binding. The central and C-terminal domains are linked by the interdomain linker (IDL). A conserved four-cysteine motif encompassing the end of the central domain and the IDL is probably involved in the oxygen-sensitivity of NifA. In the present study, we have expressed, purified and characterized an N-truncated form of A. brasilense NifA. The protein expression was carried out in Escherichia coli and the N-truncated NifA protein was purified by chromatography using an affinity metal-chelating resin followed by a heparin-bound resin. Protein homogeneity was determined by densitometric analysis. The N-truncated protein activated in vivo nifH::lacZ transcription regardless of fixed nitrogen concentration (absence or presence of 20 mM NH4Cl) but only under low oxygen levels. On the other hand, the aerobically purified N-truncated NifA protein bound to the nifB promoter, as demonstrated by an electrophoretic mobility shift assay, implying that DNA-binding activity is not strictly controlled by oxygen levels. Our data show that, while the N-truncated NifA is inactive in vivo under aerobic conditions, it still retains DNA-binding activity, suggesting that the oxidized form of NifA bound to DNA is not competent to activate transcription.

  16. Expression and characterization of an N-truncated form of the NifA protein of Azospirillum brasilense

    Energy Technology Data Exchange (ETDEWEB)

    Nishikawa, C.Y.; Araújo, L.M.; Kadowaki, M.A.S.; Monteiro, R.A.; Steffens, M.B.R.; Pedrosa, F.O.; Souza, E.M.; Chubatsu, L.S. [Departamento de Bioquímica e Biologia Molecular, Universidade Federal do Paraná, Curitiba, PR (Brazil)

    2012-01-27

    Azospirillum brasilense is a nitrogen-fixing bacterium associated with important agricultural crops such as rice, wheat and maize. The expression of genes responsible for nitrogen fixation (nif genes) in this bacterium is dependent on the transcriptional activator NifA. This protein contains three structural domains: the N-terminal domain is responsible for the negative control by fixed nitrogen; the central domain interacts with the RNA polymerase σ{sup 54} factor and the C-terminal domain is involved in DNA binding. The central and C-terminal domains are linked by the interdomain linker (IDL). A conserved four-cysteine motif encompassing the end of the central domain and the IDL is probably involved in the oxygen-sensitivity of NifA. In the present study, we have expressed, purified and characterized an N-truncated form of A. brasilense NifA. The protein expression was carried out in Escherichia coli and the N-truncated NifA protein was purified by chromatography using an affinity metal-chelating resin followed by a heparin-bound resin. Protein homogeneity was determined by densitometric analysis. The N-truncated protein activated in vivo nifH::lacZ transcription regardless of fixed nitrogen concentration (absence or presence of 20 mM NH{sub 4}Cl) but only under low oxygen levels. On the other hand, the aerobically purified N-truncated NifA protein bound to the nifB promoter, as demonstrated by an electrophoretic mobility shift assay, implying that DNA-binding activity is not strictly controlled by oxygen levels. Our data show that, while the N-truncated NifA is inactive in vivo under aerobic conditions, it still retains DNA-binding activity, suggesting that the oxidized form of NifA bound to DNA is not competent to activate transcription.

  17. Expression and characterization of an N-truncated form of the NifA protein of Azospirillum brasilense.

    Science.gov (United States)

    Nishikawa, C Y; Araújo, L M; Kadowaki, M A S; Monteiro, R A; Steffens, M B R; Pedrosa, F O; Souza, E M; Chubatsu, L S

    2012-02-01

    Azospirillum brasilense is a nitrogen-fixing bacterium associated with important agricultural crops such as rice, wheat and maize. The expression of genes responsible for nitrogen fixation (nif genes) in this bacterium is dependent on the transcriptional activator NifA. This protein contains three structural domains: the N-terminal domain is responsible for the negative control by fixed nitrogen; the central domain interacts with the RNA polymerase σ(54) co-factor and the C-terminal domain is involved in DNA binding. The central and C-terminal domains are linked by the interdomain linker (IDL). A conserved four-cysteine motif encompassing the end of the central domain and the IDL is probably involved in the oxygen-sensitivity of NifA. In the present study, we have expressed, purified and characterized an N-truncated form of A. brasilense NifA. The protein expression was carried out in Escherichia coli and the N-truncated NifA protein was purified by chromatography using an affinity metal-chelating resin followed by a heparin-bound resin. Protein homogeneity was determined by densitometric analysis. The N-truncated protein activated in vivo nifH::lacZ transcription regardless of fixed nitrogen concentration (absence or presence of 20 mM NH(4)Cl) but only under low oxygen levels. On the other hand, the aerobically purified N-truncated NifA protein bound to the nifB promoter, as demonstrated by an electrophoretic mobility shift assay, implying that DNA-binding activity is not strictly controlled by oxygen levels. Our data show that, while the N-truncated NifA is inactive in vivo under aerobic conditions, it still retains DNA-binding activity, suggesting that the oxidized form of NifA bound to DNA is not competent to activate transcription.

  18. Expression and characterization of an N-truncated form of the NifA protein of Azospirillum brasilense

    International Nuclear Information System (INIS)

    Nishikawa, C.Y.; Araújo, L.M.; Kadowaki, M.A.S.; Monteiro, R.A.; Steffens, M.B.R.; Pedrosa, F.O.; Souza, E.M.; Chubatsu, L.S.

    2012-01-01

    Azospirillum brasilense is a nitrogen-fixing bacterium associated with important agricultural crops such as rice, wheat and maize. The expression of genes responsible for nitrogen fixation (nif genes) in this bacterium is dependent on the transcriptional activator NifA. This protein contains three structural domains: the N-terminal domain is responsible for the negative control by fixed nitrogen; the central domain interacts with the RNA polymerase σ 54 factor and the C-terminal domain is involved in DNA binding. The central and C-terminal domains are linked by the interdomain linker (IDL). A conserved four-cysteine motif encompassing the end of the central domain and the IDL is probably involved in the oxygen-sensitivity of NifA. In the present study, we have expressed, purified and characterized an N-truncated form of A. brasilense NifA. The protein expression was carried out in Escherichia coli and the N-truncated NifA protein was purified by chromatography using an affinity metal-chelating resin followed by a heparin-bound resin. Protein homogeneity was determined by densitometric analysis. The N-truncated protein activated in vivo nifH::lacZ transcription regardless of fixed nitrogen concentration (absence or presence of 20 mM NH 4 Cl) but only under low oxygen levels. On the other hand, the aerobically purified N-truncated NifA protein bound to the nifB promoter, as demonstrated by an electrophoretic mobility shift assay, implying that DNA-binding activity is not strictly controlled by oxygen levels. Our data show that, while the N-truncated NifA is inactive in vivo under aerobic conditions, it still retains DNA-binding activity, suggesting that the oxidized form of NifA bound to DNA is not competent to activate transcription

  19. Detecting and correcting for publication bias in meta-analysis - A truncated normal distribution approach.

    Science.gov (United States)

    Zhu, Qiaohao; Carriere, K C

    2016-01-01

Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on detection and correction for publication bias in meta-analysis focuses mainly on funnel plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem, and propose new parametric solutions. We develop methodologies of estimating the underlying overall effect size and the severity of publication bias. We distinguish the two major situations, in which publication bias may be induced by: (1) small effect size or (2) large p-value. We consider both fixed and random effects models, and derive estimators for the overall mean and the truncation proportion. These estimators will be obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology, and to compare with the non-parametric Trim and Fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and correcting publication bias under various situations.
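Under situation (1) above, where small effects go unpublished, the observed effect sizes can be modeled as a left-truncated normal. A minimal maximum-likelihood sketch for the fixed-effect case with a known truncation point might look like this; the truncation-proportion estimator, the large-p-value case, and the random-effects extension of the paper are omitted, and all names here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def truncated_normal_mle(x, a):
    """MLE of (mu, sigma) when only effect sizes above a are observed.
    Truncated-normal log-likelihood: sum log phi(x) - n * log P(X > a)."""
    x = np.asarray(x, dtype=float)

    def nll(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)  # log-parameterization keeps sigma > 0
        return -np.sum(norm.logpdf(x, mu, sigma) - norm.logsf(a, mu, sigma))

    res = minimize(nll, x0=[x.mean(), np.log(x.std())])
    return res.x[0], np.exp(res.x[1])
```

The implied severity of the bias can then be read off as the truncated mass Phi((a - mu_hat) / sigma_hat).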

  20. Organisation and melting of solution grown truncated lozenge polyethylene single crystals

    NARCIS (Netherlands)

    Loos, J.; Tian, M.

    2003-01-01

    Morphological features and the melting behaviour of truncated lozenge crystals have been studied. For the crystals investigated, the heights of the (110) and the (200) sectors were measured to be 14.5 and 12.7 nm, respectively, using atomic force microscopy (AFM) in contact and non-contact mode.

  1. Truncated SALL1 Impedes Primary Cilia Function in Townes-Brocks Syndrome

    DEFF Research Database (Denmark)

    Bozal-Basterra, Laura; Martín-Ruíz, Itziar; Pirone, Lucia

    2018-01-01

    by mutations in the gene encoding the transcriptional repressor SALL1 and is associated with the presence of a truncated protein that localizes to the cytoplasm. Here, we provide evidence that SALL1 mutations might cause TBS by means beyond its transcriptional capacity. By using proximity proteomics, we show...

  2. Reduction of snapshots for MIMO radar detection by block/group orthogonal matching pursuit

    KAUST Repository

    Ali, Hussain El Hosiny; Ahmed, Sajid; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2014-01-01

    localization problem with compressive sensing. Specifically, we try to solve the problem of estimation of target location in MIMO radar by group and block sparsity algorithms. It will lead us to a reduced number of snapshots required and also we can achieve

  3. Genetic algorithm optimization of atomic clusters

    International Nuclear Information System (INIS)

    Morris, J.R.; Deaven, D.M.; Ho, K.M.; Wang, C.Z.; Pan, B.C.; Wacker, J.G.; Turner, D.E.; Iowa State Univ., Ames, IA

    1996-01-01

The authors have been using genetic algorithms to study the structures of atomic clusters and related problems. This is a problem where local minima are easy to locate, but barriers between the many minima are large, and the number of minima prohibits a systematic search. They use a novel mating algorithm that preserves some of the geometrical relationship between atoms, in order to ensure that the resultant structures are likely to inherit the best features of the parent clusters. Using this approach, they have been able to find lower energy structures than had been previously obtained. Most recently, they have been able to turn around the building block idea, using optimized structures from the GA to learn about systematic structural trends. They believe that an effective GA can help provide such heuristic information, and (conversely) that such information can be introduced back into the algorithm to assist in the search process.
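A geometry-preserving mating step of the kind described above can be sketched as a cut-and-splice crossover: each child inherits a spatially coherent half from each parent. This is a highly simplified sketch; it does not enforce a fixed atom count (which a full implementation handles, e.g. by shifting the cutting plane), and it omits the local relaxation that normally follows mating.

```python
import numpy as np

def cut_and_splice(parent_a, parent_b, rng):
    """Combine the atoms of parent_a above a random plane through its
    centroid with the atoms of parent_b below that plane, so each half
    keeps its internal geometry."""
    normal = rng.normal(size=3)
    normal /= np.linalg.norm(normal)
    a = parent_a - parent_a.mean(axis=0)  # center both clusters
    b = parent_b - parent_b.mean(axis=0)
    return np.vstack([a[a @ normal > 0], b[b @ normal <= 0]])
```

Because the plane passes through each centroid, both parents always contribute at least one atom, and each contributed half is a contiguous region of the parent cluster.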

  4. External Memory Algorithms for Diameter and All-Pair Shortest-Paths on Sparse Graphs

    DEFF Research Database (Denmark)

    Arge, Lars; Meyer, Ulrich; Toma, Laura

    2004-01-01

    We present several new external-memory algorithms for finding all-pairs shortest paths in a V-node, E-edge undirected graph. For all-pairs shortest paths and diameter in unweighted undirected graphs we present cache-oblivious algorithms with O(V · (E/B) log_{M/B}(E/B)) I/Os, where B is the block-size a...

  5. One Terminal Digital Algorithm for Adaptive Single Pole Auto-Reclosing Based on Zero Sequence Voltage

    Directory of Open Access Journals (Sweden)

    S. Jamali

    2008-10-01

    This paper presents an algorithm for adaptive determination of the dead time during transient arcing faults and blocking of automatic reclosing during permanent faults on overhead transmission lines. The discrimination between transient and permanent faults is made by the zero sequence voltage measured at the relay point. If the fault is recognised as an arcing one, then the third harmonic of the zero sequence voltage is used to evaluate the extinction time of the secondary arc and to initiate the reclosing signal. The significant advantage of this algorithm is that it uses an adaptive threshold level, and therefore its performance is independent of fault location, line parameters and the system operating conditions. The proposed algorithm has been successfully tested under a variety of fault locations and load angles on a 400 kV overhead line using the Electro-Magnetic Transient Program (EMTP). The test results validate the algorithm's ability to determine the secondary arc extinction time during transient faults as well as to block unsuccessful automatic reclosing during permanent faults.
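
The third-harmonic criterion can be illustrated with a single-bin DFT on a synthetic zero-sequence voltage. The sampling rate, harmonic level, waveforms, and the fixed detection threshold below are all invented for illustration; the paper's actual threshold is adaptive.

```python
import cmath
import math

def harmonic_magnitude(samples, fs, f0, k):
    """Amplitude of the k-th harmonic of fundamental f0, via a single DFT bin."""
    n = len(samples)
    w = cmath.exp(-2j * math.pi * k * f0 / fs)
    acc = sum(s * w ** i for i, s in enumerate(samples))
    return 2.0 * abs(acc) / n

fs, f0 = 5000.0, 50.0                  # sampling rate and system frequency (Hz)
n = int(fs / f0) * 10                  # exactly ten fundamental cycles (no leakage)
t = [i / fs for i in range(n)]

# synthetic zero-sequence voltages: only the arcing fault carries 3rd-harmonic content
arcing = [math.sin(2 * math.pi * f0 * ti) + 0.3 * math.sin(2 * math.pi * 3 * f0 * ti) for ti in t]
permanent = [math.sin(2 * math.pi * f0 * ti) for ti in t]

m_arc = harmonic_magnitude(arcing, fs, f0, 3)      # ~0.3, the injected harmonic amplitude
m_perm = harmonic_magnitude(permanent, fs, f0, 3)  # ~0.0
arc_detected = m_arc > 0.1                         # placeholder threshold, not the adaptive level
```

Because the window spans an integer number of cycles, the fundamental is exactly orthogonal to the third-harmonic bin, so the two fault types separate cleanly.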

  6. 31 CFR 595.301 - Blocked account; blocked property.

    Science.gov (United States)

    2010-07-01

    ... (Continued) OFFICE OF FOREIGN ASSETS CONTROL, DEPARTMENT OF THE TREASURY TERRORISM SANCTIONS REGULATIONS General Definitions § 595.301 Blocked account; blocked property. The terms blocked account and blocked...

  7. Binaural noise reduction via cue-preserving MMSE filter and adaptive-blocking-based noise PSD estimation

    Science.gov (United States)

    Azarpour, Masoumeh; Enzner, Gerald

    2017-12-01

    Binaural noise reduction, with applications for instance in hearing aids, has been a very significant challenge. This task relates to the optimal utilization of the available microphone signals for the estimation of the ambient noise characteristics and for the optimal filtering algorithm to separate the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. The listening test outcome

  8. Global rotational motion and displacement estimation of digital image stabilization based on the oblique vectors matching algorithm

    Science.gov (United States)

    Yu, Fei; Hui, Mei; Zhao, Yue-jin

    2009-08-01

    An image block matching algorithm based on motion vectors of correlative pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique which obtains the information of relative motion among frames of dynamic image sequences by means of digital image processing. In this method the matching parameters are calculated from the vectors projected in the oblique direction. The matching parameters based on these vectors simultaneously contain the information of the vectors in the transverse and vertical directions within the image blocks, so better matching information can be obtained after performing the correlative operation in the oblique direction. An iterative weighted least square method is used to eliminate the error of block matching; the weights are related to the pixels' rotational angle. The center of rotation and the global motion estimate of the shaking image can be obtained by weighted least squares from the estimates of each block, chosen evenly from the image. Then the shaking image can be stabilized using the center of rotation and the global motion estimate. The algorithm can also run in real time by applying simulated annealing to the block-matching search. An image processing system based on a DSP was used to test this algorithm. The core processor in the DSP system is TI's TMS320C6416, and a CCD camera with a definition of 720×576 pixels was chosen as the input video source. Experimental results show that the algorithm can be performed on the real-time processing system and achieves an accurate matching precision.
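
The idea of recovering a rotation center and global motion from per-block motion vectors via weighted least squares can be sketched as a direct linear fit of a rigid rotation q = R(theta)·p + t. This is a simplification, not the paper's iterative reweighting scheme: a single weighted solve, with hypothetical block positions and unit weights.

```python
import math

def solve(M, v):
    """Solve the linear system M x = v by Gaussian elimination with partial pivoting."""
    n = len(v)
    A = [list(row) + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def fit_rotation(points, moved, weights):
    """Weighted LSQ fit of q = R(theta) p + t; returns (theta, rotation centre)."""
    # unknowns x = (a, b, tx, ty) with R = [[a, -b], [b, a]]; two linear rows per block
    M = [[0.0] * 4 for _ in range(4)]
    v = [0.0] * 4
    for (px, py), (qx, qy), w in zip(points, moved, weights):
        for row, q in (([px, -py, 1.0, 0.0], qx), ([py, px, 0.0, 1.0], qy)):
            for i in range(4):
                v[i] += w * row[i] * q
                for j in range(4):
                    M[i][j] += w * row[i] * row[j]
    a, b, tx, ty = solve(M, v)
    theta = math.atan2(b, a)
    det = (1 - a) ** 2 + b ** 2          # (I - R) c = t gives the fixed point c
    cx = ((1 - a) * tx - b * ty) / det
    cy = (b * tx + (1 - a) * ty) / det
    return theta, (cx, cy)

# demo: block centres rotated by 0.1 rad about (3, 2); recover both from the motion vectors
theta0, c0 = 0.1, (3.0, 2.0)
pts = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (7.0, 5.0)]
ca, sa = math.cos(theta0), math.sin(theta0)
moved = [(c0[0] + ca * (px - c0[0]) - sa * (py - c0[1]),
          c0[1] + sa * (px - c0[0]) + ca * (py - c0[1])) for px, py in pts]
theta, centre = fit_rotation(pts, moved, [1.0] * len(pts))
```

With noise-free vectors the fit recovers the angle and center exactly; in practice the weights would down-weight unreliable blocks, as the abstract describes.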

  9. Nerve Blocks

    Science.gov (United States)

    ... News Physician Resources Professions Site Index A-Z Nerve Blocks A nerve block is an injection to ... the limitations of Nerve Block? What is a Nerve Block? A nerve block is an anesthetic and/ ...

  10. A novel algorithm for automatic localization of human eyes

    Institute of Scientific and Technical Information of China (English)

    Liang Tao (陶亮); Juanjuan Gu (顾涓涓); Zhenquan Zhuang (庄镇泉)

    2003-01-01

    Based on geometrical facial features and image segmentation, we present a novel algorithm for automatic localization of human eyes in grayscale or color still images with complex backgrounds. Firstly, a determination criterion of eye location is established from prior knowledge of geometrical facial features. Secondly, a range of threshold values that would separate eye blocks from others in a segmented face image (i.e., a binary image) is estimated. Thirdly, with the progressive increase of the threshold by an appropriate step in that range, once two eye blocks appear in the segmented image, they are detected by the determination criterion of eye location. Finally, the 2D correlation coefficient is used as a symmetry similarity measure to check the factuality of the two detected eyes. To avoid background interference, skin color segmentation can be applied to enhance the accuracy of eye detection. The experimental results demonstrate the high efficiency of the algorithm and a high correct-localization rate.

  11. Memetic algorithms for de novo motif-finding in biomedical sequences.

    Science.gov (United States)

    Bi, Chengpeng

    2012-09-01

    The objectives of this study are to design and implement a new memetic algorithm for de novo motif discovery, which is then applied to detect important signals hidden in various biomedical molecular sequences. In this paper, memetic algorithms are developed and tested on de novo motif-finding problems. Several strategies are employed in the algorithm design, not only to efficiently explore the multiple sequence local alignment space but also to effectively uncover the molecular signals. As a result, there are a number of key features in the implementation of the memetic motif-finding algorithm (MaMotif), including a chromosome replacement operator, a chromosome alteration-aware local search operator, a truncated local search strategy, and a stochastic operation of local search imposed on individual learning. To test the new algorithm, we compare MaMotif with a few other similar algorithms using simulated and experimental data including genomic DNA, primary microRNA sequences (let-7 family), and transmembrane protein sequences. The new memetic motif-finding algorithm is successfully implemented in C++, and exhaustively tested with various simulated and real biological sequences. In the simulation, it shows that MaMotif is the most time-efficient algorithm compared with the others; that is, it runs 2 times faster than the expectation maximization (EM) method and 16 times faster than the genetic algorithm-based EM hybrid. In both simulated and experimental testing, results show that the new algorithm compares favorably or is superior to other algorithms. Notably, MaMotif is able to successfully discover the transcription factors' binding sites in chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) data, correctly uncover the RNA splicing signals in gene expression, and precisely find the highly conserved helix motif in the transmembrane protein sequences, as well as rightly detect the palindromic segments in the primary micro

  12. Hardware Implementation of Diamond Search Algorithm for Motion Estimation and Object Tracking

    International Nuclear Information System (INIS)

    Hashimaa, S.M.; Mahmoud, I.I.; Elazm, A.A.

    2009-01-01

    Object tracking is a very important task in computer vision. Fast search algorithms have emerged as an important technique to achieve real-time tracking results. To enhance the performance of these algorithms, we advocate their hardware implementation. Diamond search block matching motion estimation has been proposed recently to reduce the complexity of motion estimation. In this paper we selected the diamond search (DS) algorithm for implementation using an FPGA, due to its fundamental role in all fast search patterns. The proposed architecture is simulated and synthesized using the Xilinx and ModelSim software tools. The results agree with the algorithm implementation in the Matlab environment.
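
As a software reference for the architecture described above, a minimal diamond search uses a large diamond search pattern (LDSP) until the centre is the best candidate, then one small diamond (SDSP) refinement, with sum of absolute differences (SAD) as the matching cost. The frame, block position, and search range below are arbitrary illustration values.

```python
def sad(ref, cur, bx, by, dx, dy, bs):
    """Sum of absolute differences between the current block and a displaced reference block."""
    return sum(abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
               for y in range(bs) for x in range(bs))

# large and small diamond search patterns (offsets relative to the current centre)
LDSP = [(0, 0), (0, -2), (1, -1), (2, 0), (1, 1), (0, 2), (-1, 1), (-2, 0), (-1, -1)]
SDSP = [(0, 0), (0, -1), (1, 0), (0, 1), (-1, 0)]

def diamond_search(ref, cur, bx, by, bs, max_disp=7):
    h, w = len(ref), len(ref[0])

    def cost(d):
        dx, dy = d
        if max(abs(dx), abs(dy)) > max_disp:
            return float("inf")
        if not (0 <= bx + dx and bx + dx + bs <= w and 0 <= by + dy and by + dy + bs <= h):
            return float("inf")
        return sad(ref, cur, bx, by, dx, dy, bs)

    mv = (0, 0)
    while True:
        best = min(((mv[0] + ox, mv[1] + oy) for ox, oy in LDSP), key=cost)
        if best == mv:  # LDSP centre is the minimum: one final small-diamond refinement
            return min(((mv[0] + ox, mv[1] + oy) for ox, oy in SDSP), key=cost)
        mv = best

# demo: a smooth reference frame and a copy of it shifted by (dx, dy) = (2, 1)
ref = [[(x - 12) ** 2 + (y - 12) ** 2 for x in range(24)] for y in range(24)]
cur = [[ref[y + 1][x + 2] if y + 1 < 24 and x + 2 < 24 else 0 for x in range(24)]
       for y in range(24)]
mv = diamond_search(ref, cur, bx=8, by=8, bs=4)
```

The loop always terminates: the SAD strictly decreases on every move, and ties resolve to the centre because the centre offset is listed first.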

  13. Analysis of blocking probability for OFDM-based variable bandwidth optical network

    Science.gov (United States)

    Gong, Lei; Zhang, Jie; Zhao, Yongli; Lin, Xuefeng; Wu, Yuyao; Gu, Wanyi

    2011-12-01

    Orthogonal Frequency Division Multiplexing (OFDM) has recently been proposed as a modulation technique for optical networks. Because of its good spectral efficiency, flexibility, and tolerance to impairments, optical OFDM is much more flexible than traditional WDM systems, enabling elastic bandwidth transmissions, and optical networking is the future trend of development. In OFDM-based optical networks the study of the blocking rate is of great significance for network assessment. Current research for WDM networks is basically based on a fixed bandwidth; in order to accommodate future traffic and the fast-changing development of optical networks, our study targets variable-bandwidth OFDM-based optical networks. Applying mathematical analysis and theoretical derivation based on existing theory and algorithms, we study the blocking probability of variable-bandwidth optical networks and then build a model for the blocking probability.
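
A textbook starting point for such blocking analysis is the fixed-bandwidth baseline: the Erlang B loss formula for a link with a given number of channels, evaluated with the standard numerically stable recursion. This is the classical reference model, not the paper's variable-bandwidth analysis.

```python
def erlang_b(traffic, channels):
    """Erlang B blocking probability: offered traffic (Erlangs) on a number of channels."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic * b / (n + traffic * b)  # stable recursion B(n) = A*B(n-1) / (n + A*B(n-1))
    return b

# e.g. 2 Erlangs offered to 2 channels blocks 40% of requests
blocking = erlang_b(2.0, 2)
```

The recursion avoids the overflow-prone factorials of the closed form B = (A^N/N!) / sum_{k=0..N} A^k/k!.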

  14. Family losses following truncation selection in populations of half-sib families

    Science.gov (United States)

    J. H. Roberds; G. Namkoong; H. Kang

    1980-01-01

    Family losses during truncation selection may be sizable in populations of half-sib families. Substantial losses may occur even in populations containing little or no variation among families. Heavier losses will occur, however, under conditions of high heritability where there is considerable family variation. Standard deviations and therefore variances of family loss...

  15. Algorithms for parallel flow solvers on message passing architectures

    Science.gov (United States)

    Vanderwijngaart, Rob F.

    1995-01-01

    The purpose of this project has been to identify and test suitable technologies for implementation of fluid flow solvers -- possibly coupled with structures and heat equation solvers -- on MIMD parallel computers. In the course of this investigation much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. Multi-partitioning derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer. A coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning every processor receives responsibility for exactly one block of grid points instead of several. This necessitates fine-grain pipelined program execution in order to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique that has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for the concentration on improving the performance of pipeline methods is their applicability in other types of flow solver kernels with stronger implied data dependence. Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines. From these it can be determined what is the optimal first-processor retardation that leads to the shortest total completion time for the pipeline process. Theoretical predictions of pipeline performance with and without optimization match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of uncareful grid partitioning in flow solvers that employ pipeline algorithms. If grid blocks at boundaries are not at least as large in the wall-normal direction as those

  16. Efficient block preconditioned eigensolvers for linear response time-dependent density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Vecharynski, Eugene [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Brabec, Jiri [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Shao, Meiyue [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Govind, Niranjan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Environmental Molecular Sciences Lab.; Yang, Chao [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division

    2017-12-01

    We present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from time-dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into a product eigenvalue problem that is self-adjoint with respect to a K-inner product. This product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. However, the other component of the eigenvector can be easily recovered in a postprocessing procedure. Therefore, the algorithms we present here are more efficient than existing algorithms that try to approximate both components of the eigenvectors simultaneously. The efficiency of the new algorithms is demonstrated by numerical examples.
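
The product-eigenvalue transformation can be seen in the smallest case. Assuming the usual linear-response block structure H = [[A, B], [-B, -A]] (an assumption here; the abstract does not spell out the structure), the 1×1-block instance with scalars a and b has eigenvalues ±sqrt((a+b)(a-b)), i.e. the product problem (A+B)(A-B) delivers the squared eigenvalue. A scalar sanity check with arbitrary values a > |b|:

```python
import math

# smallest (1x1 block) instance of a linear-response matrix H = [[a, b], [-b, -a]]
a, b = 5.0, 3.0          # arbitrary values with a > |b|

# direct route: H has trace 0 and det -(a^2 - b^2), so lam^2 = a^2 - b^2
lam_direct = math.sqrt(a * a - b * b)

# product route: (a + b)(a - b) x = lam^2 x, the "product eigenvalue problem"
lam_product = math.sqrt((a + b) * (a - b))
```

Both routes give the same positive excitation energy; the product form is the one the paper's structure-preserving solvers exploit at block scale.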

  17. System Performance of Concatenated STBC and Block Turbo Codes in Dispersive Fading Channels

    Directory of Open Access Journals (Sweden)

    Kam Tai Chan

    2005-05-01

    A new scheme concatenating the block turbo code (BTC) with the space-time block code (STBC) for an OFDM system in dispersive fading channels is investigated in this paper. The good error-correcting capability of BTC and the large diversity gain characteristics of STBC can be achieved simultaneously. The resulting receiver outperforms the iterative convolutional turbo receiver with the maximum-a-posteriori-probability expectation maximization (MAP-EM) algorithm. Because of its ability to perform the encoding and decoding processes in parallel, the proposed system is easy to implement in real time.

  18. Optimal and efficient decoding of concatenated quantum block codes

    International Nuclear Information System (INIS)

    Poulin, David

    2006-01-01

    We consider the problem of optimally decoding a quantum error correction code--that is, to find the optimal recovery procedure given the outcomes of partial ''check'' measurements on the system. In general, this problem is NP hard. However, we demonstrate that for concatenated block codes, the optimal decoding can be efficiently computed using a message-passing algorithm. We compare the performance of the message-passing algorithm to that of the widespread blockwise hard decoding technique. Our Monte Carlo results using the five-qubit and Steane's code on a depolarizing channel demonstrate significant advantages of the message-passing algorithms in two respects: (i) Optimal decoding increases by as much as 94% the error threshold below which the error correction procedure can be used to reliably send information over a noisy channel; and (ii) for noise levels below these thresholds, the probability of error after optimal decoding is suppressed at a significantly higher rate, leading to a substantial reduction of the error correction overhead

  19. On Data and Parameter Estimation Using the Variational Bayesian EM-algorithm for Block-fading Frequency-selective MIMO Channels

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.; Larsen, Jan

    2006-01-01

    A general Variational Bayesian framework for iterative data and parameter estimation for coherent detection is introduced as a generalization of the EM-algorithm. Explicit solutions are given for MIMO channel estimation with Gaussian prior and noise covariance estimation with inverse-Wishart prior....... Simulation of a GSM-like system provides empirical proof that the VBEM-algorithm is able to provide better performance than the EM-algorithm. However, if the posterior distribution is highly peaked, the VBEM-algorithm approaches the EM-algorithm and the gain disappears. The potential gain is therefore...

  20. RSA Algorithm. Features of the C # Object Programming Implementation

    Directory of Open Access Journals (Sweden)

    Elena V. Staver

    2012-08-01

    Public-key algorithms depend on an encryption key and a decryption key connected with it. For public-key data encryption, the text is divided into blocks, each of which is represented as a number. To decrypt the message, a secret key is used.
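
The block-wise scheme described above can be shown as a minimal single-block textbook RSA round trip, using the classic toy parameters p = 61, q = 53. This is illustration only: no padding, and real deployments use much larger primes.

```python
from math import gcd

# textbook-sized toy parameters (illustration only; real RSA uses 2048-bit moduli and padding)
p, q = 61, 53
n = p * q                  # public modulus, 3233
phi = (p - 1) * (q - 1)    # Euler totient, 3120
e = 17                     # public exponent
assert gcd(e, phi) == 1    # e must be invertible mod phi
d = pow(e, -1, phi)        # private exponent via modular inverse (Python 3.8+)

def encrypt_block(m: int) -> int:
    """Encrypt one message block, an integer 0 <= m < n."""
    return pow(m, e, n)

def decrypt_block(c: int) -> int:
    return pow(c, d, n)

message = 65
cipher = encrypt_block(message)   # 2790 for these parameters
plain = decrypt_block(cipher)     # recovers 65
```

A longer text would be split into integer blocks smaller than n and each block processed the same way.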

  1. LGI2 truncation causes a remitting focal epilepsy in dogs.

    Directory of Open Access Journals (Sweden)

    Eija H Seppälä

    2011-07-01

    One quadrillion synapses are laid down in the first two years of postnatal construction of the human brain, which are then pruned until age 10 to the 500 trillion synapses composing the final network. Genetic epilepsies are the most common neurological diseases with onset during pruning, affecting 0.5% of 2-10-year-old children, and these epilepsies are often characterized by spontaneous remission. We previously described a remitting epilepsy in the Lagotto romagnolo canine breed. Here, we identify the gene defect and the affected neurochemical pathway. We reconstructed a large Lagotto pedigree of around 34 affected animals. Using genome-wide association in 11 discordant sib-pairs from this pedigree, we mapped the disease locus to a 1.7 Mb region of homozygosity in chromosome 3, where we identified a protein-truncating mutation in the Lgi2 gene, a homologue of the human epilepsy gene LGI1. We show that LGI2, like LGI1, is neuronally secreted and acts on metalloproteinase-lacking members of the ADAM family of neuronal receptors, which function in synapse remodeling, and that LGI2 truncation, like LGI1 truncations, prevents secretion and ADAM interaction. The resulting epilepsy onsets at around seven weeks (equivalent to human age two years) and remits by four months (human age eight years), versus onset after age eight in the majority of human patients with LGI1 mutations. Finally, we show that Lgi2 is expressed highly in the immediate post-natal period until halfway through pruning, unlike Lgi1, which is expressed in the latter part of pruning and beyond. LGI2 acts at least in part through the same ADAM receptors as LGI1, but earlier, ensuring electrical stability (absence of epilepsy) during the pruning years, preceding this same function performed by LGI1 in later years. LGI2 should be considered a candidate gene for common remitting childhood epilepsies, and the LGI2-to-LGI1 transition a model for mechanisms of childhood epilepsy remission.

  2. Block Pickard Models for Two-Dimensional Constraints

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Justesen, Jørn

    2009-01-01

    In Pickard random fields (PRF), the probabilities of finite configurations and the entropy of the field can be calculated explicitly, but only very simple structures can be incorporated into such a field. Given two Markov chains describing a boundary, an algorithm is presented which determines...... for the domino tiling constraint represented by a quaternary alphabet. PRF models are also presented for higher order constraints, including the no isolated bits (n.i.b.) constraint, and a minimum distance 3 constraint by defining super symbols on blocks of binary symbols....

  3. A structure preserving Lanczos algorithm for computing the optical absorption spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Meiyue [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Div.; Jornada, Felipe H. da [Univ. of California, Berkeley, CA (United States). Dept. of Physics; Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Materials Science Div.; Lin, Lin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Div.; Univ. of California, Berkeley, CA (United States). Dept. of Mathematics; Yang, Chao [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Div.; Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Louie, Steven G. [Univ. of California, Berkeley, CA (United States). Dept. of Physics; Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Materials Science Div.

    2016-11-16

    We present a new structure preserving Lanczos algorithm for approximating the optical absorption spectrum in the context of solving the full Bethe-Salpeter equation without the Tamm-Dancoff approximation. The new algorithm is based on a structure preserving Lanczos procedure, which exploits the special block structure of Bethe-Salpeter Hamiltonian matrices. A recently developed technique of generalized averaged Gauss quadrature is incorporated to accelerate the convergence. We also establish the connection between our structure preserving Lanczos procedure and several existing Lanczos procedures developed in different contexts. Numerical examples are presented to demonstrate the effectiveness of our Lanczos algorithm.

  4. Algorithmic fault tree construction by component-based system modeling

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2008-01-01

    Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than conventional manual fault tree construction. In this paper, a new approach for algorithmic fault tree generation is presented. The method mainly consists of a component-based system modeling procedure and a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, if an inclusive component database is developed. (author)

  5. Randomized Block Cubic Newton Method

    KAUST Repository

    Doikov, Nikita; Richtarik, Peter

    2018-01-01

    We study the problem of minimizing the sum of three convex functions: a differentiable, a twice-differentiable, and a non-smooth term in a high-dimensional setting. To this effect we propose and analyze a randomized block cubic Newton (RBCN) method, which in each iteration builds a model of the objective function formed as the sum of the natural models of its three components: a linear model with a quadratic regularizer for the differentiable term, a quadratic model with a cubic regularizer for the twice-differentiable term, and a perfect (proximal) model for the non-smooth term. Our method in each iteration minimizes the model over a random subset of blocks of the search variable. RBCN is the first algorithm with these properties, generalizing several existing methods and matching the best known bounds in all special cases. We establish ${\cal O}(1/\epsilon)$, ${\cal O}(1/\sqrt{\epsilon})$ and ${\cal O}(\log(1/\epsilon))$ rates under different assumptions on the component functions. Lastly, we show numerically that our method outperforms the state of the art on a variety of machine learning problems, including cubically regularized least-squares, logistic regression with constraints, and Poisson regression.

  6. Randomized Block Cubic Newton Method

    KAUST Repository

    Doikov, Nikita

    2018-02-12

    We study the problem of minimizing the sum of three convex functions: a differentiable, a twice-differentiable, and a non-smooth term in a high-dimensional setting. To this effect we propose and analyze a randomized block cubic Newton (RBCN) method, which in each iteration builds a model of the objective function formed as the sum of the natural models of its three components: a linear model with a quadratic regularizer for the differentiable term, a quadratic model with a cubic regularizer for the twice-differentiable term, and a perfect (proximal) model for the non-smooth term. Our method in each iteration minimizes the model over a random subset of blocks of the search variable. RBCN is the first algorithm with these properties, generalizing several existing methods and matching the best known bounds in all special cases. We establish ${\cal O}(1/\epsilon)$, ${\cal O}(1/\sqrt{\epsilon})$ and ${\cal O}(\log(1/\epsilon))$ rates under different assumptions on the component functions. Lastly, we show numerically that our method outperforms the state of the art on a variety of machine learning problems, including cubically regularized least-squares, logistic regression with constraints, and Poisson regression.

  7. Flash-Aware Page Replacement Algorithm

    Directory of Open Access Journals (Sweden)

    Guangxia Xu

    2014-01-01

    Due to the limited main memory resources of consumer electronics equipped with NAND flash memory as the storage device, an efficient page replacement algorithm called FAPRA is proposed for NAND flash memory in light of its inherent characteristics. FAPRA introduces an efficient victim page selection scheme taking into account the benefit-to-cost ratio of evicting each victim page candidate, the combined recency and frequency value, and the erase count of the block to which each page belongs. Since a dirty victim page often contains clean data that exist in both the main memory and the NAND flash memory based storage device, FAPRA writes only the dirty data within the victim page back to the storage device in order to reduce redundant write operations. We conduct a series of trace-driven simulations, and experimental results show that our proposed FAPRA algorithm outperforms the state-of-the-art algorithms in terms of page hit ratio, number of write operations, runtime, and degree of wear leveling.
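
The abstract names the ingredients of the victim selection scheme but not its exact formula, so the following is a hypothetical weighted score that merely combines the described factors: a combined recency/frequency value, a benefit-to-cost ratio (dirty pages cost a flash write on eviction), and the erase count of the page's block. All weights are invented stand-ins, not FAPRA's published scoring.

```python
from dataclasses import dataclass

@dataclass
class Page:
    page_id: int
    dirty: bool        # dirty pages cost a flash write-back on eviction
    recency: int       # logical time of last access
    frequency: int     # access count
    erase_count: int   # erase count of the flash block holding the page

def eviction_score(p, clock, write_cost=4.0, read_cost=1.0, wear_weight=0.1):
    """Lower score = better victim (illustrative weighting, not FAPRA's formula)."""
    crf = p.frequency / (1.0 + clock - p.recency)    # combined recency-and-frequency value
    cost = write_cost if p.dirty else read_cost      # benefit-to-cost: dirty eviction triggers a write
    return crf * cost + wear_weight * p.erase_count  # steer evictions away from worn blocks

def pick_victim(pages, clock):
    return min(pages, key=lambda p: eviction_score(p, clock))

clock = 100
pages = [
    Page(1, dirty=True,  recency=95, frequency=10, erase_count=5),   # hot and dirty: keep
    Page(2, dirty=False, recency=10, frequency=1,  erase_count=2),   # cold, clean, fresh block
    Page(3, dirty=False, recency=90, frequency=8,  erase_count=50),  # on a heavily worn block
]
victim = pick_victim(pages, clock)
```

Under this scoring the cold, clean page on a lightly erased block is chosen, which matches the qualitative behavior the abstract describes.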

  8. A Novel Truncated Form of Serum Amyloid A in Kawasaki Disease.

    Directory of Open Access Journals (Sweden)

    John C Whitin

    Full Text Available Kawasaki disease (KD is an acute vasculitis in children that can cause coronary artery abnormalities. Its diagnosis is challenging, and many cytokines, chemokines, acute phase reactants, and growth factors have failed evaluation as specific biomarkers to distinguish KD from other febrile illnesses. We performed protein profiling, comparing plasma from children with KD with febrile control (FC subjects to determine if there were specific proteins or peptides that could distinguish the two clinical states.Plasma from three independent cohorts from the blood of 68 KD and 61 FC subjects was fractionated by anion exchange chromatography, followed by surface-enhanced laser desorption ionization (SELDI mass spectrometry of the fractions. The mass spectra of KD and FC plasma samples were analyzed for peaks that were statistically significantly different.A mass spectrometry peak with a mass of 7,860 Da had high intensity in acute KD subjects compared to subacute KD (p = 0.0003 and FC (p = 7.9 x 10-10 subjects. We identified this peak as a novel truncated form of serum amyloid A with N-terminal at Lys-34 of the circulating form and validated its identity using a hybrid mass spectrum immunoassay technique. The truncated form of serum amyloid A was present in plasma of KD subjects when blood was collected in tubes containing protease inhibitors. This peak disappeared when the patients were examined after their symptoms resolved. Intensities of this peptide did not correlate with KD-associated laboratory values or with other mass spectrum peaks from the plasma of these KD subjects.Using SELDI mass spectrometry, we have discovered a novel truncated form of serum amyloid A that is elevated in the plasma of KD when compared with FC subjects. Future studies will evaluate its relevance as a diagnostic biomarker and its potential role in the pathophysiology of KD.

  9. A computational approach for fluid queues driven by truncated birth-death processes.

    NARCIS (Netherlands)

    Lenin, R.B.; Parthasarathy, P.R.

    2000-01-01

    In this paper, we analyze fluid queues driven by truncated birth-death processes with general birth and death rates. We compute the equilibrium distribution of the content of the fluid buffer by providing efficient numerical procedures to compute the eigenvalues and the eigenvectors of the

  10. A truncated Kv1.1 protein in the brain of the megencephaly mouse: expression and interaction

    Directory of Open Access Journals (Sweden)

    Århem Peter

    2005-11-01

    Background: The megencephaly mouse, mceph/mceph, is epileptic and displays a dramatically increased brain volume and neuronal count. The responsible mutation was recently revealed to be an eleven base pair deletion, leading to a frame shift, in the gene encoding the potassium channel Kv1.1. The predicted MCEPH protein is truncated at amino acid 230 out of 495. Truncated proteins are usually not expressed since nonsense mRNAs are most often degraded. However, high Kv1.1 mRNA levels in the mceph/mceph brain indicated that it escaped this control mechanism. Therefore, we hypothesized that the truncated Kv1.1 would be expressed and dysregulate other Kv1 subunits in the mceph/mceph mice. Results: We found that the MCEPH protein is expressed in the brain of mceph/mceph mice. MCEPH was found to lack mature (Golgi) glycosylation, but to be core glycosylated and trapped in the endoplasmic reticulum (ER). Interactions between MCEPH and other Kv1 subunits were studied in cell culture, Xenopus oocytes and the brain. MCEPH can form tetramers with Kv1.1 in cell culture and has a dominant negative effect on Kv1.2 and Kv1.3 currents in oocytes. However, it does not retain Kv1.2 in the ER of neurons. Conclusion: The megencephaly mice express a truncated Kv1.1 in the brain, and constitute a unique tool to study Kv1.1 trafficking relevant for understanding epilepsy, ataxia and pathologic brain overgrowth.

  11. Non-linear buckling of an FGM truncated conical shell surrounded by an elastic medium

    International Nuclear Information System (INIS)

    Sofiyev, A.H.; Kuruoglu, N.

    2013-01-01

    In this paper, the non-linear buckling of a truncated conical shell made of functionally graded materials (FGMs) and surrounded by an elastic medium has been studied using the large deformation theory with von Karman–Donnell-type kinematic non-linearity. A two-parameter foundation model (Pasternak-type) is used to describe the shell–foundation interaction. The FGM properties are assumed to vary continuously through the thickness direction. The fundamental relations and the modified Donnell-type non-linear stability and compatibility equations of the FGM truncated conical shell resting on the Pasternak-type elastic foundation are derived. By using the superposition and Galerkin methods, the non-linear stability equations for the FGM truncated conical shell are solved. Finally, the influences of the Winkler foundation stiffness, the shear subgrade modulus of the foundation, the compositional profiles, and the shell characteristics on the dimensionless critical non-linear axial load are investigated. The present results are compared with the available data for a special case. -- Highlights: • Nonlinear buckling of an FGM conical shell surrounded by an elastic medium is studied. • A Pasternak foundation model is used to describe the shell–foundation interaction. • Nonlinear basic equations are derived. • The problem is solved using the superposition and Galerkin methods. • The influences of various parameters on the nonlinear critical load are investigated.

  12. Selective translational repression of truncated proteins from frameshift mutation-derived mRNAs in tumors.

    Directory of Open Access Journals (Sweden)

    Kwon Tae You

    2007-05-01

    Frameshift and nonsense mutations are common in tumors with microsatellite instability, and mRNAs from these mutated genes have premature termination codons (PTCs). Abnormal mRNAs containing PTCs are normally degraded by the nonsense-mediated mRNA decay (NMD) system. However, PTCs located within 50-55 nucleotides of the last exon-exon junction are not recognized by NMD (NMD-irrelevant), and some PTC-containing mRNAs can escape from the NMD system (NMD-escape). We investigated protein expression from NMD-irrelevant and NMD-escape PTC-containing mRNAs by Western blotting and transfection assays. We demonstrated that transfection of NMD-irrelevant PTC-containing genomic DNA of MARCKS generates truncated protein. In contrast, NMD-escape PTC-containing versions of hMSH3 and TGFBR2 generate normal levels of mRNA, but do not generate detectable levels of protein. Transfection of NMD-escape mutant TGFBR2 genomic DNA failed to generate expression of truncated proteins, whereas transfection of wild-type TGFBR2 genomic DNA or mutant PTC-containing TGFBR2 cDNA generated expression of wild-type protein and truncated protein, respectively. Our findings suggest a novel mechanism of gene expression regulation for PTC-containing mRNAs, in which the deleterious transcripts are regulated either by NMD or by translational repression.

  13. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    Science.gov (United States)

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. An optimal quadtree method was then employed to partition each subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was applied to the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method improves compression performance and achieves a balance between compression ratio and visual image quality.
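
    The codebook training step in record 13 is a modified, energy-based K-means; the plain K-means variant it builds on can be sketched in a few lines. This is a generic illustration (all names are ours, and it omits the paper's quadtree partitioning and energy modification):

```python
import numpy as np

def train_codebook(vectors, k, iters=20, seed=0):
    """Standard K-means codebook training for vector quantization.
    Returns the codebook and the final assignment of each vector."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
    assign = np.zeros(len(vectors), dtype=int)
    for _ in range(iters):
        # squared Euclidean distance of every vector to every codeword
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            members = vectors[assign == j]
            if len(members):                 # keep empty codewords unchanged
                codebook[j] = members.mean(0)
    return codebook, assign

# Two well-separated clusters should map to two distinct codewords.
vectors = np.vstack([np.zeros((5, 2)), 10.0 * np.ones((5, 2))])
codebook, assign = train_codebook(vectors, k=2)
```

    In the compression pipeline, each sub-block would then be encoded as the index of its nearest codeword.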

  14. Evolutionary algorithm based optimization of hydraulic machines utilizing a state-of-the-art block coupled CFD solver and parametric geometry and mesh generation tools

    Science.gov (United States)

    S, Kyriacou; E, Kontoleontos; S, Weissenberger; L, Mangani; E, Casartelli; I, Skouteropoulou; M, Gattringer; A, Gehrer; M, Buchmayr

    2014-03-01

    An efficient hydraulic optimization procedure, suitable for industrial use, requires an advanced optimization tool (EASY software), a fast solver (block coupled CFD) and a flexible geometry generation tool. EASY optimization software is a PCA-driven metamodel-assisted Evolutionary Algorithm (MAEA (PCA)) that can be used in both single- (SOO) and multiobjective optimization (MOO) problems. In MAEAs, low cost surrogate evaluation models are used to screen out non-promising individuals during the evolution and exclude them from the expensive, problem specific evaluation, here the solution of Navier-Stokes equations. For additional reduction of the optimization CPU cost, the PCA technique is used to identify dependences among the design variables and to exploit them in order to efficiently drive the application of the evolution operators. To further enhance the hydraulic optimization procedure, a very robust and fast Navier-Stokes solver has been developed. This incompressible CFD solver employs a pressure-based block-coupled approach, solving the governing equations simultaneously. This method, apart from being robust and fast, also provides a big gain in terms of computational cost. In order to optimize the geometry of hydraulic machines, an automatic geometry and mesh generation tool is necessary. The geometry generation tool used in this work is entirely based on b-spline curves and surfaces. In what follows, the components of the tool chain are outlined in some detail and the optimization results of hydraulic machine components are shown in order to demonstrate the performance of the presented optimization procedure.

  15. Evolutionary algorithm based optimization of hydraulic machines utilizing a state-of-the-art block coupled CFD solver and parametric geometry and mesh generation tools

    International Nuclear Information System (INIS)

    Kyriacou S; Kontoleontos E; Weissenberger S; Mangani L; Casartelli E; Skouteropoulou I; Gattringer M; Gehrer A; Buchmayr M

    2014-01-01

    An efficient hydraulic optimization procedure, suitable for industrial use, requires an advanced optimization tool (EASY software), a fast solver (block coupled CFD) and a flexible geometry generation tool. EASY optimization software is a PCA-driven metamodel-assisted Evolutionary Algorithm (MAEA (PCA)) that can be used in both single- (SOO) and multiobjective optimization (MOO) problems. In MAEAs, low cost surrogate evaluation models are used to screen out non-promising individuals during the evolution and exclude them from the expensive, problem specific evaluation, here the solution of Navier-Stokes equations. For additional reduction of the optimization CPU cost, the PCA technique is used to identify dependences among the design variables and to exploit them in order to efficiently drive the application of the evolution operators. To further enhance the hydraulic optimization procedure, a very robust and fast Navier-Stokes solver has been developed. This incompressible CFD solver employs a pressure-based block-coupled approach, solving the governing equations simultaneously. This method, apart from being robust and fast, also provides a big gain in terms of computational cost. In order to optimize the geometry of hydraulic machines, an automatic geometry and mesh generation tool is necessary. The geometry generation tool used in this work is entirely based on b-spline curves and surfaces. In what follows, the components of the tool chain are outlined in some detail and the optimization results of hydraulic machine components are shown in order to demonstrate the performance of the presented optimization procedure.

  16. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Abdenaceur Boudlal

    2010-01-01

    This article investigates a new method of motion estimation based on a block matching criterion, through the modeling of image blocks by a mixture of two or three Gaussian distributions. The mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation-Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most closely matching one in a search window on the reference image is measured by minimizing an extended Mahalanobis distance between the clusters of the mixture. Experiments performed on real image sequences have given good results, with PSNR gains reaching 3 dB.
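
    Record 16 swaps the matching criterion but keeps the usual full-search block-matching loop, which can be sketched as follows. This is a generic illustration with plain SSD as a stand-in for the paper's extended Mahalanobis distance; all names are ours:

```python
import numpy as np

def block_match(ref, cur, top, left, bsize, radius):
    """Full-search block matching: return the motion vector (dy, dx) that
    minimizes SSD between a block of the current frame and candidate
    blocks of the reference frame."""
    block = cur[top:top + bsize, left:left + bsize].astype(float)
    H, W = ref.shape
    best, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > H or x + bsize > W:
                continue  # candidate block falls outside the reference frame
            cand = ref[y:y + bsize, x:x + bsize].astype(float)
            ssd = float(((block - cand) ** 2).sum())
            if ssd < best:
                best, best_mv = ssd, (dy, dx)
    return best_mv

# Synthetic check: the "current" frame is the reference shifted by (2, 3),
# so the best match for a block of cur lies at offset (-2, -3) in ref.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(32, 32))
cur = np.roll(ref, (2, 3), axis=(0, 1))
mv = block_match(ref, cur, top=8, left=8, bsize=8, radius=4)
```

    The paper's method replaces the SSD line with a distance between the Gaussian-mixture clusters fitted to the two blocks by EM.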

  17. Chaos and noise in a truncated Toda potential

    International Nuclear Information System (INIS)

    Habib, S.; Kandrup, H.E.; Mahon, M.E.

    1996-01-01

    Results are reported from a numerical investigation of orbits in a truncated Toda potential that is perturbed by weak friction and noise. Aside from the perturbations displaying a simple scaling in the amplitude of the friction and noise, it is found that even very weak friction and noise can induce an extrinsic diffusion through cantori on a time scale that is much shorter than that associated with intrinsic diffusion in the unperturbed system. The results have applications in galactic dynamics and in the formation of a beam halo in charged particle beams. copyright 1996 The American Physical Society
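
    The friction-and-noise perturbation in record 17 is the standard Langevin setup, typically integrated with an Euler-Maruyama scheme. A generic sketch (not the authors' code; the truncated Toda potential would be supplied via grad_V, and all names are illustrative):

```python
import numpy as np

def langevin_orbit(grad_V, x0, v0, eta, sigma, dt, steps, seed=0):
    """Euler-Maruyama integration of  x'' = -grad V(x) - eta x' + sigma xi(t),
    i.e. an orbit in a potential perturbed by weak friction and white noise."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    v = np.asarray(v0, dtype=float).copy()
    xs = [x.copy()]
    for _ in range(steps):
        v = v + (-grad_V(x) - eta * v) * dt \
              + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
        x = x + v * dt
        xs.append(x.copy())
    return np.array(xs)

# Sanity check with a harmonic potential V(x) = x^2 / 2 and no noise:
# friction alone should damp the oscillation toward the origin.
traj = langevin_orbit(lambda x: x, x0=1.0, v0=0.0,
                      eta=0.5, sigma=0.0, dt=0.01, steps=2000)
```

    With sigma > 0, the noise term is what drives the extrinsic diffusion through cantori that the abstract describes.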

  18. Vision-Based Bicycle Detection Using Multiscale Block Local Binary Pattern

    Directory of Open Access Journals (Sweden)

    Hongyu Hu

    2014-01-01

    Bicycle traffic accounts for a large share of all travel modes in some developing countries, so detecting bicycles is crucial for urban traffic control and management as well as facility design. This paper proposes a real-time, video-based algorithm for detecting multiple bicycles. First, an effective feature called the multiscale block local binary pattern (MBLBP) is extracted to represent the moving object; it discriminates well between bicycles and non-bicycles. Then, a cascaded bicycle classifier trained with the AdaBoost algorithm is proposed, which is computationally efficient. Finally, the method is tested on video sequences captured from a real-world traffic scenario; the bicycles in the test scenario are successfully detected.
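
    The MBLBP feature in record 18 generalizes the plain local binary pattern by comparing mean intensities of s×s blocks in a 3×3 grid instead of single pixels. A minimal sketch of one code computation (our own names and bit ordering; the paper pairs many such features with an AdaBoost cascade):

```python
import numpy as np

def mb_lbp(patch, s):
    """MBLBP code of a (3s x 3s) patch: the mean intensity of each of the
    eight outer s x s blocks is compared with the central block's mean,
    yielding one bit per neighbor (clockwise from the top-left block)."""
    assert patch.shape == (3 * s, 3 * s)
    means = patch.reshape(3, s, 3, s).mean(axis=(1, 3))  # 3x3 grid of block means
    center = means[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [int(means[r, c] >= center) for r, c in order]
    return sum(b << i for i, b in enumerate(bits))

# A flat patch sets every bit (all block means equal the center mean).
flat_code = mb_lbp(np.ones((6, 6)), s=2)   # -> 255
# A bright center block clears every bit.
patch = np.zeros((6, 6))
patch[2:4, 2:4] = 10.0
dark_ring_code = mb_lbp(patch, s=2)        # -> 0
```

    Averaging over blocks rather than sampling single pixels is what makes the feature robust to noise at multiple scales.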

  19. On the truncation of the azimuthal mode spectrum of high-order probes in probe-corrected spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Laitinen, Tommi

    2011-01-01

    Azimuthal mode (m mode) truncation of a high-order probe pattern in probe-corrected spherical near-field antenna measurements is studied in this paper. The results of this paper provide rules for appropriate and sufficient m-mode truncation for non-ideal first-order probes and odd-order probes wi...

  20. Near-Optimal Resource Allocation in Cooperative Cellular Networks Using Genetic Algorithms

    OpenAIRE

    Luo, Zihan; Armour, Simon; McGeehan, Joe

    2015-01-01

    This paper shows how a genetic algorithm can be used as a method of obtaining the near-optimal solution of the resource block scheduling problem in a cooperative cellular network. An exhaustive search is initially implemented to guarantee that the optimal result, in terms of maximizing the bandwidth efficiency of the overall network, is found, and then the genetic algorithm with the properly selected termination conditions is used in the same network. The simulation results show that the genet...
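
    The genetic algorithm in record 20 follows the standard select/crossover/mutate loop over candidate resource-block assignments. A toy sketch with a stand-in objective (OneMax rather than the paper's bandwidth-efficiency metric; all names are ours):

```python
import random

def ga_maximize(fitness, n_bits, pop_size=30, gens=60, pmut=0.02, seed=1):
    """Minimal genetic algorithm: tournament selection, one-point crossover,
    bit-flip mutation, and elitism (the best individual always survives)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [max(pop, key=fitness)[:]]              # elitism
        while len(nxt) < pop_size:
            a = max(rng.sample(pop, 3), key=fitness)  # tournament of 3
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)            # one-point crossover
            child = [g ^ (rng.random() < pmut)        # bit-flip mutation
                     for g in a[:cut] + b[cut:]]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy stand-in objective (OneMax): maximize the number of "1" bits.
best = ga_maximize(sum, n_bits=20)
```

    In the scheduling setting, each bit string would encode a resource-block assignment and the fitness function would evaluate the resulting network bandwidth efficiency.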