Adaptive bit plane quadtree-based block truncation coding for image compression
Li, Shenda; Wang, Jin; Zhu, Qing
2018-04-01
Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low-bit-rate compression, at the cost of lower decoded image quality, especially for images with rich texture. To solve this problem, this paper proposes a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is used to optimize the BTC, depending on the MSE loss incurred when the block is encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with other state-of-the-art BTC variants, making it well suited for real-time image compression applications.
Testing block subdivision algorithms on block designs
Wiseman, Natalie; Patterson, Zachary
2016-01-01
Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it is likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.
Phase retrieval via incremental truncated amplitude flow algorithm
Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao
2017-10-01
This paper considers the phase retrieval problem of recovering an unknown signal from given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF), which combines the ITWF and TAF algorithms, is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF, and improves the gradient stage by applying the incremental method of ITWF to the loop stage of TAF. Moreover, the original sampling vectors and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verified the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it obtains a higher success rate and faster convergence than other algorithms. In particular, for noiseless random Gaussian signals, ITAF can accurately recover any real-valued signal from magnitude measurements whose number is about 2.5 times the signal length, which is close to the theoretical limit (about 2 times the signal length). It usually converges to the optimal solution within 20 iterations, far fewer than the state-of-the-art algorithms require.
Reuter, Matthew G; Hill, Judith C
2012-01-01
We present an algorithm for computing any block of the inverse of a block tridiagonal, nearly block Toeplitz matrix (defined as a block tridiagonal matrix with a small number of deviations from the purely block Toeplitz structure). By exploiting both the block tridiagonal and the nearly block Toeplitz structures, this method scales independently of the total number of blocks in the matrix and linearly with the number of deviations. Numerical studies demonstrate this scaling and the advantages of our method over alternatives.
H.B. Kekre; Sudeep Thepade; Karan Dhamejani; Sanchit Khandelwal; Adnan Azmi
2012-01-01
The paper presents a performance analysis of Multilevel Block Truncation Coding based Face Recognition among widely used color spaces. In [1], Multilevel Block Truncation Coding was applied to the RGB color space up to four levels for face recognition. Better results were obtained when the proposed technique was implemented using Kekre’s LUV (K’LUV) color space [25]. This was the motivation to test the proposed technique using assorted color spaces. For experimental analysis, two face databas...
Masuyama, Hiroyuki
2015-01-01
This paper studies the last-column-block-augmented northwest-corner truncation (LC-block-augmented truncation, for short) of discrete-time block-monotone Markov chains under subgeometric drift conditions. The main result of this paper is to present an upper bound for the total variation distance between the stationary probability vectors of a block-monotone Markov chain and its LC-block-augmented truncation. The main result is extended to Markov chains that themselves may not be block monoton...
A Line Search Multilevel Truncated Newton Algorithm for Computing the Optical Flow
Lluís Garrido
2015-06-01
We describe the implementation details and give the experimental results of three optimization algorithms for dense optical flow computation. In particular, using a line search strategy, we evaluate the performance of the unilevel truncated Newton method (LSTN), a multiresolution truncated Newton method (MR/LSTN) and a full multigrid truncated Newton method (FMG/LSTN). We use three image sequences and four models of optical flow for performance evaluation. The FMG/LSTN algorithm is shown to lead to better optical flow estimation with less computational work than both the LSTN and MR/LSTN algorithms.
Masuyama, Hiroyuki
2014-01-01
In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally,...
Gordiyenko V. I.
2009-02-01
A test diagram of the microcontroller-type resolver-to-digital converter and algorithms for filtering interference therein are developed. Experimental verification of the α-truncated mean algorithm, intended for the suppression of impulse and noise interference, is conducted. The test results are given.
Analytic reconstruction algorithms for triple-source CT with horizontal data truncation
Chen, Ming; Yu, Hengyong
2015-01-01
Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. With this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation not only retains most of the advantages of a traditional multisource system but also covers a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.
Fitting Social Network Models Using Varying Truncation Stochastic Approximation MCMC Algorithm
Jin, Ick Hoon
2013-10-01
The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and the model degeneracy. The existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail for this problem in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimates, whereas SAMCMC succeeds; for nondegenerate ERGMs, SAMCMC works as well as or better than MCMLE and stochastic approximation. The data and source codes used for this article are available online as supplementary materials. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
MULTISTAGE BITRATE REDUCTION IN ABSOLUTE MOMENT BLOCK TRUNCATION CODING FOR IMAGE COMPRESSION
S. Vimala
2012-05-01
Absolute Moment Block Truncation Coding (AMBTC) is one of the lossy image compression techniques. Its computational complexity is low and the quality of the reconstructed images is appreciable. The standard AMBTC method requires 2 bits per pixel (bpp). In this paper, two novel ideas are incorporated into the AMBTC method to improve coding efficiency. Generally, quality degrades as the bit rate is reduced, but in the proposed method the quality of the reconstructed image increases as the bit rate decreases. The proposed method has been tested with standard images such as Lena, Barbara, Bridge, Boats and Cameraman. The results obtained are better than those of the existing AMBTC method in terms of both bit rate and reconstructed image quality.
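For readers unfamiliar with the baseline, the standard AMBTC encode/decode step that the paper builds on can be sketched as follows (an illustrative sketch of plain AMBTC, not the paper's multistage bitrate-reduction variant). Each block is reduced to two group means plus a bit plane; for a 4x4 block of 8-bit pixels that is 2 means + 16 bits, i.e. the 2 bpp quoted above.

```python
import numpy as np

def ambtc_encode(block):
    """Plain AMBTC: split pixels by the block mean, keep the mean of
    each group (lo, hi) and the bit plane marking the high group."""
    mean = block.mean()
    bitplane = block >= mean
    hi = block[bitplane].mean() if bitplane.any() else mean
    lo = block[~bitplane].mean() if (~bitplane).any() else mean
    return lo, hi, bitplane

def ambtc_decode(lo, hi, bitplane):
    """Reconstruct the block: high-group pixels get hi, the rest lo."""
    return np.where(bitplane, hi, lo)

# A 4x4 block with a sharp edge between a dark and a bright region.
block = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 208],
                  [ 9, 14, 198, 212],
                  [12, 10, 202, 209]], dtype=float)
lo, hi, bp = ambtc_encode(block)
rec = ambtc_decode(lo, hi, bp)
```

The reconstruction preserves the block's first absolute moment and mean, which is what distinguishes AMBTC from the original BTC (which preserves mean and variance).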
Liu Yu-Sun
2011-01-01
The performance of the wrap-around Viterbi decoding algorithm with finite truncation depth and fixed decoding trellis length is investigated for tail-biting convolutional codes in the mobile WiMAX standard. Upper bounds on the error probabilities induced by finite truncation depth and the uncertainty of the initial state are derived for the AWGN channel. The truncation depth and the decoding trellis length that yield negligible performance loss are obtained for all transmission rates over the Rayleigh channel using computer simulations. The results show that the circular decoding algorithm with an appropriately chosen truncation depth and a decoding trellis just a fraction longer than the original received code words can achieve almost the same performance as the optimal maximum likelihood decoding algorithm in mobile WiMAX. A rule of thumb for the values of the truncation depth and the trellis tail length is also proposed.
Aniba, Ghassane; Aissa, Sonia
2011-01-01
Previous works addressed the link layer performance of AM with truncated ARQ but without packet combining. In addition, previously proposed AM algorithms are not optimal and can provide poor performance when packet combining is implemented. Herein, we first show
Experimental scheme and restoration algorithm of block compression sensing
Zhang, Linxia; Zhou, Qun; Ke, Jun
2018-01-01
Compressed Sensing (CS) can use the sparseness of a target to obtain its image with far fewer data than required by the Nyquist sampling theorem. In this paper, we study the hardware implementation of a block compression sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, the orthogonal matching pursuit (OMP) algorithm and the total variation minimization (TV) algorithm, are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
Leng Shuai; Zhuang Tingliang; Nett, Brian E; Chen Guanghong
2005-01-01
In this paper, we present a new algorithm designed for a specific data truncation problem in fan-beam CT. We consider a scanning configuration in which the fan-beam projection data are acquired from an asymmetrically positioned half-sized detector. Namely, the asymmetric detector only covers one half of the scanning field of view. Thus, the acquired fan-beam projection data are truncated at every view angle. If an explicit data rebinning process is not invoked, this data acquisition configuration will wreak havoc on many known fan-beam image reconstruction schemes, including the standard filtered backprojection (FBP) algorithm and the super-short-scan FBP reconstruction algorithms. However, we demonstrate that a recently developed fan-beam image reconstruction algorithm which reconstructs an image via filtering a backprojection image of differentiated projection data (FBPD) survives the above fan-beam data truncation problem. Namely, we may exactly reconstruct the whole image object using the truncated data acquired in a full scan mode (2π angular range). We may also exactly reconstruct a small region of interest (ROI) using the truncated projection data acquired in a short-scan mode (less than 2π angular range). The most important characteristic of the proposed reconstruction scheme is that an explicit data rebinning process is not introduced. Numerical simulations were conducted to validate the new reconstruction algorithm.
A study of block algorithms for fermion matrix inversion
Henty, D.
1990-01-01
We compare the convergence properties of Lanczos and Conjugate Gradient algorithms applied to the calculation of columns of the inverse fermion matrix for Kogut-Susskind and Wilson fermions in lattice QCD. When several columns of the inverse are required simultaneously, a block version of the Lanczos algorithm is most efficient at small mass, being over 5 times faster than the single algorithms. The block algorithm is also less susceptible to critical slowing down.
Block Least Mean Squares Algorithm over Distributed Wireless Sensor Network
T. Panigrahi
2012-01-01
In a distributed parameter estimation problem, during each sampling instant, a typical sensor node communicates its estimate either by the diffusion algorithm or by the incremental algorithm. Both these conventional distributed algorithms involve significant communication overheads and, consequently, defeat the basic purpose of wireless sensor networks. In the present paper, we therefore propose two new distributed algorithms, namely, block diffusion least mean square (BDLMS) and block incremental least mean square (BILMS), by extending the concept of block adaptive filtering techniques to the distributed adaptation scenario. The performance analysis of the proposed BDLMS and BILMS algorithms has been carried out and they are found to perform similarly to the conventional diffusion LMS and incremental LMS algorithms, respectively. The convergence analyses of the proposed algorithms obtained from the simulation study are also found to be in agreement with the theoretical analysis. The remarkable and interesting aspect of the proposed block-based algorithms is that their communication overheads per node and latencies are less than those of the conventional algorithms by a factor as high as the block size used in the algorithms.
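The block adaptive filtering idea underlying BDLMS/BILMS can be sketched in a centralized, single-node form (a minimal illustration of block LMS, not the distributed algorithms themselves): the error gradient is accumulated over a block of samples and the weights are updated once per block, which is what reduces the per-node communication by a factor of the block size.

```python
import numpy as np

def block_lms(x, d, num_taps, block_size, mu):
    """Block LMS: accumulate the LMS gradient over `block_size`
    samples, then apply a single averaged weight update per block."""
    w = np.zeros(num_taps)
    for start in range(num_taps, len(x) - block_size + 1, block_size):
        grad = np.zeros(num_taps)
        for n in range(start, start + block_size):
            u = x[n - num_taps + 1:n + 1][::-1]  # most recent sample first
            e = d[n] - w @ u                     # a priori error
            grad += e * u
        w += mu * grad / block_size              # one update per block
    return w

# Hypothetical setup: identify a short FIR system from its output.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.1])                   # unknown system
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]                   # desired signal
w = block_lms(x, d, num_taps=3, block_size=8, mu=0.05)
```

With noiseless observations the weight vector converges to the true impulse response; the block update trades a slight lag in adaptation for far fewer update (and, in the distributed setting, communication) events.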
Zero-block mode decision algorithm for H.264/AVC.
Lee, Yu-Ming; Lin, Yinyi
2009-03-01
In the previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. The proposed algorithm achieves significant improvement in computation, but its benefit is limited for high-bit-rate coding. To improve computation efficiency, in this paper we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation, and incorporates two adequate decision methods for the semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to intramode prediction in the P frame. The enhanced zero-block decision algorithm yields an average 27% reduction in total encoding time compared with the original zero-block decision algorithm.
Wolfram C Poller
Optimization of the AV interval (AVI) in DDD pacemakers improves cardiac hemodynamics and reduces pacemaker syndromes. Manual optimization is typically not performed in clinical routine. In the present study we analyze the prevalence of E/A wave fusion and A wave truncation under resting conditions in 160 patients with complete AV block (AVB) under the pre-programmed AVI, and we manually optimized sub-optimal AVIs. We analyzed 160 pacemaker patients with complete AVB, both in sinus rhythm (AV-sense; n = 129) and under atrial pacing (AV-pace; n = 31). Using Doppler analyses of the transmitral inflow we classified the nominal AVI as (a) normal, (b) too long (E/A wave fusion) or (c) too short (A wave truncation). In patients with a sub-optimal AVI, we performed manual optimization according to the recommendations of the American Society of Echocardiography. All AVB patients with atrial pacing exhibited a normal transmitral inflow under the nominal AV-pace intervals (100%). In contrast, 25 AVB patients in sinus rhythm showed E/A wave fusion under the pre-programmed AV-sense intervals (19.4%; 95% confidence interval (CI): 12.6-26.2%). A wave truncation was not observed in any patient. All patients with complete E/A wave fusion achieved a normal transmitral inflow after AV-sense interval reduction (mean optimized AVI: 79.4 ± 13.6 ms). Given the rate of 19.4% (CI 12.6-26.2%) of patients with a too-long nominal AV-sense interval, automatic algorithms may prove useful in improving cardiac hemodynamics, especially in the subgroup of atrially triggered pacemaker patients with AV node disease.
Aniba, Ghassane
2011-04-01
This paper presents an optimal adaptive modulation (AM) algorithm designed using a cross-layer approach which combines the truncated automatic repeat request (ARQ) protocol and packet combining. Transmissions are performed over multiple-input multiple-output (MIMO) Nakagami fading channels, and retransmitted packets are not necessarily modulated using the same modulation format as in the initial transmission. Compared to the traditional approach, cross-layer design based on coupling across the physical and link layers has proven to yield better performance in wireless communications. However, there is a lack of performance analysis and evaluation of such designs when the ARQ protocol is used in conjunction with packet combining. Indeed, previous works addressed the link layer performance of AM with truncated ARQ but without packet combining. In addition, previously proposed AM algorithms are not optimal and can provide poor performance when packet combining is implemented. Herein, we first show that the packet loss rate (PLR) resulting from the combining of packets modulated with different constellations can be well approximated by an exponential function. This model is then used in the design of an optimal AM algorithm for systems employing packet combining, truncated ARQ and MIMO antenna configurations, considering transmission over Nakagami fading channels. Numerical results are provided for operation with or without packet combining, and show the enhanced performance and efficiency of the proposed algorithm in comparison with existing ones. © 2011 IEEE.
Ship Block Transportation Scheduling Problem Based on Greedy Algorithm
Chong Wang
2016-05-01
Ship block transportation problems are crucial issues to address in reducing construction cost and improving the productivity of shipyards. Shipyards aim to maximize the workload balance of transporters under the time constraint that all blocks must be transported during the planning horizon. This process leads to three types of penalty time: empty transporter travel time, delay time, and tardy time. This study aims to minimize the sum of these penalty times. First, the ship block transportation problem is presented, generalizing the block transportation restriction to multi-type transporters. Second, the problem is transformed into the classical traveling salesman problem and assignment problem through a reasonable model simplification and by adding a virtual node to the proposed directed graph. Then, a heuristic based on a greedy algorithm is proposed to assign blocks to available transporters and to sequence the blocks for each transporter simultaneously. Finally, numerical experiments are used to validate the model; the results demonstrate that the proposed method efficiently improves the utilization of transporters and reduces the cost of ship block logistics for shipyards.
Nakajima, Teruyuki
2010-01-01
I explain the motivation behind our paper 'Algorithms for radiative intensity calculations in moderately thick atmospheres using a truncation approximation' (JQSRT 1988;40:51-69) and discuss our results in a broader historical context.
Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis
2007-07-01
This paper presents an extended 3-D exact rebinning formula in the Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. To a first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of all the 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which only exploits the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence.
Jia, Zhongxiao; Yang, Yanfei
2018-05-01
In this paper, we propose new randomization-based algorithms for large scale linear discrete ill-posed problems with general-form regularization: min ||Lx|| subject to ||Ax - b|| = min, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as those by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
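The TRSVD building block the abstract refers to can be sketched as follows (a generic randomized SVD with truncation, under the standard sketch-QR-project recipe; this is not the authors' full MTRSVD method): sample the range of A with a Gaussian test matrix of k + q columns, orthonormalize, project A onto that basis, then keep only the k dominant singular triplets.

```python
import numpy as np

def truncated_rsvd(A, k, q, seed=0):
    """Rank-k truncated randomized SVD (TRSVD).

    Sketch A with a Gaussian test matrix of k+q columns
    (q = oversampling), orthonormalize the sample, project,
    and truncate the small SVD to rank k."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + q))
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal range basis
    B = Q.T @ A                               # small (k+q) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]         # keep k dominant triplets

# A matrix of exact rank 5 is recovered to machine precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
U, s, Vt = truncated_rsvd(A, k=5, q=5)
err = np.linalg.norm(A - (U * s) @ Vt)
```

For genuinely ill-posed problems the singular values decay rather than vanish, and the oversampling parameter q controls how well the sketch captures the dominant subspace.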
Algorithmic detectability threshold of the stochastic block model
Kawamoto, Tatsuro
2018-03-01
The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.
Projective block Lanczos algorithm for dense, Hermitian eigensystems
Webster, F.; Lo, G.C.
1996-01-01
Projection operators are used to effect "deflation by restriction" and it is argued that this is an optimal Lanczos algorithm for memory minimization. Algorithmic optimization is constrained to dense, Hermitian eigensystems where a significant number of the extreme eigenvectors must be obtained reliably and completely. The defining constraints are operator algebra without a matrix representation and semi-orthogonalization without storage of Krylov vectors. Other semi-orthogonalization strategies for Lanczos algorithms and conjugate gradient techniques are evaluated within these constraints. Large scale, sparse, complex numerical experiments are performed on clusters of magnetic dipoles, a quantum many-body system that is not block-diagonalizable. Plane-wave density functional theory of beryllium clusters provides examples of dense complex eigensystems. Use of preconditioners and spectral transformations is evaluated in a preprocessor prior to a high accuracy self-consistent field calculation.
Image Blocking Encryption Algorithm Based on Laser Chaos Synchronization
Shu-Ying Wang
2016-01-01
In view of the security of digital image transmission, a novel image encryption scheme based on laser chaos synchronization and the Arnold cat map is proposed. A parameter generated from the pixel values of the plain image influences the secret key. Sequences of the drive system and response system are pretreated by the same method and used to build a block-wise encryption scheme for the plain image. Finally, pixel positions are scrambled by a generalized Arnold transformation. In the decryption process, the chaotic synchronization accuracy is fully considered and the relationship between the synchronization quality and decryption is analyzed; the scheme offers high precision, high efficiency, simplicity, flexibility, and good controllability. The experimental results show that the encrypted image has high security and good anti-jamming performance.
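The Arnold cat map used for the final position-scrambling stage can be sketched as follows (position scrambling only; the laser-chaos key generation is the paper's contribution and is not shown). Each pixel at (x, y) in an N x N image moves to ((x + y) mod N, (x + 2y) mod N), and the map is periodic, so iterating it enough times restores the original image.

```python
import numpy as np

def arnold_cat(img, iterations=1):
    """Scramble pixel positions of a square image with the Arnold
    cat map (x, y) -> (x + y, x + 2y) mod N, applied `iterations` times."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

# The map is periodic: for N = 4 the period is 3, so three
# applications return the image to its original arrangement.
img = np.arange(16).reshape(4, 4)
restored = arnold_cat(img, 3)
```

In a real scheme the iteration count (or a generalized map's parameters) forms part of the key; the period depends on N in a notoriously irregular way.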
Martini, Enrica; Breinbjerg, Olav; Maci, Stefano
2008-01-01
A simple and effective procedure for the reduction of truncation errors in planar near-field measurements of aperture antennas is presented. The procedure relies on the consideration that, due to the scan plane truncation, the calculated plane wave spectrum of the field radiated by the antenna is...
Silva-Romo, Gilberto; Mendoza-Rosales, Claudia Cristina; Campos-Madrigal, Emiliano; Hernández-Marmolejo, Yoalli Bianii; de la Rosa-Mora, Orestes Antonio; de la Torre-González, Alam Israel; Bonifacio-Serralde, Carlos; López-García, Nallely; Nápoles-Valenzuela, Juan Ivan
2018-04-01
In the central sector of the Sierra Madre del Sur in Southern Mexico, between approximately 36 and 16 Ma ago and in a west-to-east direction, a diachronic process of formation of ∼north-south trending fault-bounded basins occurred. No tectono-sedimentary event is recognized in the study region in the period between 25 and 20 Ma, a period during which subduction erosion has been proposed to have truncated the continental crust of southern Mexico. The chronology, geometry and style of the formation of the Eocene-Miocene fault-bounded basins are more congruent with crustal truncation by the detachment of the Chortís block, thus bringing into question the subduction-erosion hypothesis for the Southern Mexico margin. Between Taxco and Tehuacán, using seven new laser ablation-inductively coupled plasma mass spectrometry (LA-ICP-MS) U-Pb ages in magmatic zircons, we refine the stratigraphy of the Tepenene, Tehuitzingo, Atzumba and Tepelmeme basins. The analyzed basins present similar tectono-sedimentary evolutions: Stage 1, depocenter formation and filling by clastic rocks accumulated as alluvial fans, and Stage 2, lacustrine sedimentation characterized by calcareous and/or evaporite beds. Based on our results, we propose the following hypothesis: in Southern Mexico, during Eocene-Miocene times, the diachronic formation of fault-bounded basins with a general north-south trend occurred within the framework of the convergence between the North and South America plates, and the basins formed in the continental crust were left behind once the Chortís block had slipped towards the east. On the other hand, the beginning of the basin-formation process related to left strike-slip faults during Eocene-Oligocene times can be associated with the crustal thermomechanical maturation process that caused the brittle/ductile transition level in the continental crust to shallow.
The Combination of RSA and Block Cipher Algorithms to Maintain Message Authentication
Yanti Tarigan, Sepri; Sartika Ginting, Dewi; Lumban Gaol, Melva; Lorensi Sitompul, Kristin
2017-12-01
RSA is a public-key algorithm based on prime numbers and is still in use today. Its strength lies in the exponentiation operation and in the difficulty of factoring the modulus into its two prime factors, which remains hard to do. The RSA scheme itself adopts a block-cipher structure: prior to encryption, the plaintext is divided into several blocks of the same length, where the plaintext and ciphertext blocks are integers between 0 and n − 1, n is typically 1024 bits, and the block length is at most log2(n) + 1 bits. With the combination of the RSA algorithm and a block cipher, the authentication of the plaintext is expected to be secure. The message is first encrypted with the RSA algorithm and then encrypted again with the block cipher; conversely, the ciphertext is first decrypted with the block cipher and then decrypted again with the RSA algorithm. This paper proposes this combination of RSA and a block cipher to secure data.
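The block-wise use of RSA described above can be sketched as follows. This is an illustrative toy, not the paper's scheme: the textbook primes 61 and 53 give a 12-bit modulus (far too small for real use), and a block length of one byte keeps each block below the modulus.

```python
# Toy sketch of RSA applied block-wise; tiny primes for illustration only.

def egcd(a, b):
    # extended Euclid: returns (g, x, y) with a*x + b*y == g
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_keys(p, q, e=17):
    n = p * q
    phi = (p - 1) * (q - 1)
    d = egcd(e, phi)[1] % phi          # modular inverse of e mod phi
    return (e, n), (d, n)

def encrypt_blocks(message, pub, block_len=1):
    # split the byte string into fixed-length blocks, as in a block cipher;
    # each block, read as an integer, must stay below the modulus n
    e, n = pub
    blocks = [message[i:i + block_len] for i in range(0, len(message), block_len)]
    return [pow(int.from_bytes(b, "big"), e, n) for b in blocks]

def decrypt_blocks(cipher, priv, block_len=1):
    d, n = priv
    return b"".join(pow(c, d, n).to_bytes(block_len, "big") for c in cipher)

pub, priv = make_keys(61, 53)          # n = 3233; real keys use 1024+ bits
ct = encrypt_blocks(b"hi", pub)
assert decrypt_blocks(ct, priv) == b"hi"
```

A real deployment would use a vetted library and padding; the point here is only the split-encrypt-per-block structure the abstract describes.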
A. AL-Salhi, Yahya E.; Lu, Songfeng
2016-08-01
Quantum steganography can solve some problems that classical image information concealing handles inefficiently. Research on quantum image information concealing has been widely pursued in recent years; it can be categorized into quantum image digital blocking, quantum image steganography, anonymity and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR) and uniform clustering of image blocks, a least significant Qu-block (LSQu-block) information concealing algorithm for quantum image steganography is presented. Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Then the Con-Steg algorithm is used to conceal the clustered image blocks. Since information concealing located in the Fourier domain of an image can achieve security of the image information, we further discuss the Fourier-domain LSQu-block information concealing algorithm for quantum images based on quantum Fourier transforms. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.
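For reference, the classical LSB embedding that the quantum LSQu-block scheme generalizes can be sketched in a few lines; the pixel values and message bits below are invented.

```python
# Classical least-significant-bit embedding: overwrite the LSB of each
# cover pixel with one secret bit; distortion per pixel is at most 1.

def embed_lsb(pixels, bits):
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_lsb(pixels, n):
    return [p & 1 for p in pixels[:n]]

cover = [52, 55, 61, 66, 70, 61, 64, 73]   # one 8-pixel "block"
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, secret)
assert extract_lsb(stego, len(secret)) == secret
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```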
A New Block Processing Algorithm of LLL for Fast High-dimension Ambiguity Resolution
LIU Wanke
2016-02-01
Due to the high dimension and precision of the ambiguity vector under multi-frequency, multi-system GNSS observations, a major factor limiting the computational efficiency of ambiguity resolution is the long reduction time of the conventional LLL algorithm. To address this problem, a new block processing algorithm for LLL is proposed by analyzing the relationship between the reduction time and the dimension and precision of the ambiguity. The new algorithm shortens the reduction time, and thereby improves the computational efficiency of ambiguity resolution, by block-processing the ambiguity variance-covariance matrix, which decreases the dimension of each single reduction matrix. The new algorithm is validated with two groups of measured data. The results show that its computational efficiency increased by 65.2% and 60.2%, respectively, compared with the LLL algorithm, when a reasonable number of blocks is chosen.
Efficient Dual Domain Decoding of Linear Block Codes Using Genetic Algorithms
Ahmed Azouaoui
2012-01-01
A computationally efficient algorithm for decoding block codes is developed using a genetic algorithm (GA). The proposed algorithm uses the dual code, in contrast to the existing genetic decoders in the literature, which use the code itself. Hence, this new approach reduces the complexity of decoding codes of high rate. We simulated our algorithm over various transmission channels. The performance of this algorithm is investigated and compared with competitor decoding algorithms, including those of Maini and Shakeel. The results show that the proposed algorithm gives large gains over the Chase-2 decoding algorithm and reaches the performance of OSD-3 for some quadratic residue (QR) codes. Further, we define a new crossover operator that exploits domain-specific information and compare it with uniform and two-point crossover. The complexity of this algorithm is also discussed and compared to other algorithms.
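The general shape of a GA decoder can be sketched as follows. This is a toy, not the paper's dual-domain method: candidates are information vectors of a (7,4) Hamming code, and fitness counts agreement between the re-encoded candidate and the hard-decision received word; all parameters are invented.

```python
import random

# Toy GA decoder sketch: evolve 4-bit information vectors toward the
# codeword that best matches the received word.

G = [[1, 0, 0, 0, 1, 1, 0],          # generator matrix of a (7,4) code
     [0, 1, 0, 0, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 0, 1]]

def encode(info):
    return [sum(i * g for i, g in zip(info, col)) % 2 for col in zip(*G)]

def fitness(info, received):
    # number of coded bits that agree with the received word
    return sum(c == r for c, r in zip(encode(info), received))

def ga_decode(received, pop_size=20, gens=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: -fitness(ind, received))
        parents = pop[:pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, 4)            # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:               # point mutation
                child[rng.randrange(4)] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: fitness(ind, received))

sent = encode([1, 0, 1, 1])
received = sent[:]
received[2] ^= 1                                 # one channel error
decoded = ga_decode(received)
```

The paper's decoder instead works with the dual code to cut complexity for high-rate codes; the skeleton above only illustrates the population/fitness/crossover loop common to GA decoders.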
Applications of Fast Truncated Multiplication in Cryptography
Laszlo Hars
2006-12-01
Truncated multiplications compute truncated products, contiguous subsequences of the digits of integer products. For an n-digit multiplication algorithm of time complexity O(n^α), with 1 < α ≤ 2, there is a truncated multiplication algorithm which is a constant factor faster when computing a short enough truncated product. Applying these fast truncated multiplications, several cryptographic long-integer arithmetic algorithms are improved, including integer reciprocals, divisions, Barrett and Montgomery multiplications, and 2n-digit modular multiplication on hardware for n-digit half products. For example, Montgomery multiplication is performed in 2.6 Karatsuba multiplication time.
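The idea of a truncated (short) product can be illustrated directly: to get the top half of x·y one only needs the digit products that can reach those positions, and dropping the rest costs at most a few units in the last place. The base-10 example below is for readability and is not from the paper.

```python
# Sketch of a truncated product: keep only digit products x_i * y_j with
# i + j >= n - 1, which suffice for the top half of an n-digit product
# up to a small additive error from the discarded low-order terms.

def digits(x, n):
    # little-endian base-10 digits, padded to length n
    return [(x // 10**i) % 10 for i in range(n)]

def truncated_product_high(x, y, n):
    xd, yd = digits(x, n), digits(y, n)
    acc = 0
    for i in range(n):
        for j in range(n):
            if i + j >= n - 1:        # roughly half the digit products
                acc += xd[i] * yd[j] * 10**(i + j)
    return acc // 10**n               # the approximate top half

x, y, n = 9641, 7358, 4
approx = truncated_product_high(x, y, n)
exact = (x * y) // 10**n
assert 0 <= exact - approx <= n       # dropped terms cause a small undershoot
```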
Data Back-Up and Recovery Techniques for Cloud Server Using Seed Block Algorithm
R. V. Gandhi; M Seshaiah
2015-01-01
In cloud computing, large amounts of data are generated in electronic form. To maintain this data efficiently, data recovery services are necessary. To this end, we propose a smart remote data backup algorithm, the Seed Block Algorithm. The objective of the proposed algorithm is twofold: first, it helps users collect information from any remote location in the absence of network connectivity, and second, it recovers files in case of file deletion or if the cloud gets ...
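The core seed-block idea, as commonly described, can be sketched with XOR: the remote backup is the file XORed with a per-client seed block, so the original is rebuilt from the backup and the seed alone. The seed and file below are invented, and the seed is fixed only so the example is reproducible; a deployment would draw it at random per client.

```python
# Minimal sketch of XOR-based seed-block backup and recovery.

def make_backup(data: bytes, seed: bytes) -> bytes:
    # XOR the file against the seed block, repeated to the file length
    return bytes(b ^ seed[i % len(seed)] for i, b in enumerate(data))

def recover(backup: bytes, seed: bytes) -> bytes:
    # XOR is self-inverse, so recovery is the same operation
    return make_backup(backup, seed)

seed = bytes(range(1, 17))              # the client's 16-byte seed block
original = b"important cloud record"
backup = make_backup(original, seed)    # what the remote server stores
assert recover(backup, seed) == original
assert backup != original               # the stored copy differs from the file
```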
Du, Mao-Kang; He, Bo; Wang, Yong
2011-01-01
Recently, cryptosystems based on chaos have attracted much attention. Wang and Yu (Commun. Nonlin. Sci. Numer. Simulat. 14 (2009) 574) proposed a block encryption algorithm based on dynamic sequences of multiple chaotic systems. We analyze the potential flaws in the algorithm and then present a chosen-plaintext attack. Some remedial measures are suggested to avoid the flaws effectively. Furthermore, an improved encryption algorithm is proposed that resists the attacks while keeping all the merits of the original cryptosystem.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on trellis representations of linear block codes, by contrast, remained inactive for many years. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes, and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and
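Trellis-based MLD with the Viterbi algorithm can be sketched on the classic rate-1/2, constraint-length-3 convolutional code with generators 7 and 5 (octal); this is a generic illustration, not one of the book's block-code trellises.

```python
# Hard-decision Viterbi decoding over a 4-state trellis.

def conv_encode(bits, state=0):
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111        # last three input bits
        out += [bin(state & 0b111).count("1") % 2,  # generator 111 (7 octal)
                bin(state & 0b101).count("1") % 2]  # generator 101 (5 octal)
    return out

def viterbi(received):
    INF = float("inf")
    metric = {0: 0, 1: INF, 2: INF, 3: INF}       # start in state 0
    paths = {s: [] for s in metric}
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = {s: INF for s in metric}
        new_paths = {}
        for s in metric:                          # s = two bits of memory
            if metric[s] == INF:
                continue
            for b in (0, 1):                      # input-bit hypothesis
                full = ((s << 1) | b) & 0b111
                ns = full & 0b11                  # next state keeps last 2 bits
                expect = [bin(full & 0b111).count("1") % 2,
                          bin(full & 0b101).count("1") % 2]
                m = metric[s] + sum(x != y for x, y in zip(expect, r))
                if m < new_metric[ns]:            # keep the survivor path
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)
    return paths[best]

msg = [1, 0, 1, 1, 0, 0]
coded = conv_encode(msg)
coded[3] ^= 1                                     # inject one channel error
assert viterbi(coded) == msg                      # single error is corrected
```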
Cazade, Pierre-André; Berezovska, Ganna; Meuwly, Markus, E-mail: m.meuwly@unibas.ch [Department of Chemistry, University of Basel, Klingelbergstrasse 80, CH-4056 Basel (Switzerland); Zheng, Wenwei; Clementi, Cecilia [Department of Chemistry, Rice University, 6100 Main St., Houston, Texas 77005 (United States); Prada-Gracia, Diego; Rao, Francesco [School of Soft Matter Research, Freiburg Institute for Advanced Studies, Albertstrasse 19, 79104 Freiburg im Breisgau (Germany)
2015-01-14
The ligand migration network for O{sub 2}–diffusion in truncated Hemoglobin N is analyzed based on three different clustering schemes. For coordinate-based clustering, the conventional k–means and the kinetics-based Markov Clustering (MCL) methods are employed, whereas the locally scaled diffusion map (LSDMap) method is a collective-variable-based approach. It is found that all three methods agree well in their geometrical definition of the most important docking site, and all experimentally known docking sites are recovered by all three methods. Also, for most of the states, their population coincides quite favourably, whereas the kinetics of and between the states differs. One of the major differences between k–means and MCL clustering on the one hand and LSDMap on the other is that the latter finds one large primary cluster containing the Xe1a, IS1, and ENT states. This is related to the fact that the motion within the state occurs on similar time scales, whereas structurally the state is found to be quite diverse. In agreement with previous explicit atomistic simulations, the Xe3 pocket is found to be a highly dynamical site which points to its potential role as a hub in the network. This is also highlighted in the fact that LSDMap cannot identify this state. First passage time distributions from MCL clusterings using a one- (ligand-position) and two-dimensional (ligand-position and protein-structure) descriptor suggest that ligand- and protein-motions are coupled. The benefits and drawbacks of the three methods are discussed in a comparative fashion and highlight that depending on the questions at hand the best-performing method for a particular data set may differ.
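Of the three schemes compared above, the coordinate-based k-means step is the simplest to sketch; the synthetic 2-D points below stand in for ligand coordinates and are invented, and this says nothing about the MCL or LSDMap analyses.

```python
import random

# Generic k-means: alternate nearest-center assignment and mean update.

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [(p[0] - c[0])**2 + (p[1] - c[1])**2 for c in centers]
            clusters[d.index(min(d))].append(p)
        # update step: move each center to its cluster mean
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters

# two well-separated blobs stand in for two docking sites
blob_a = [(0.0 + i * 0.1, 0.0) for i in range(5)]
blob_b = [(10.0 + i * 0.1, 0.0) for i in range(5)]
centers, clusters = kmeans(blob_a + blob_b, 2)
assert sorted(len(c) for c in clusters) == [5, 5]
```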
Truncated Groebner fans and lattice ideals
Lauritzen, Niels
2005-01-01
We outline a generalization of the Groebner fan of a homogeneous ideal with maximal cells parametrizing truncated Groebner bases. This "truncated" Groebner fan is usually much smaller than the full Groebner fan and offers the natural framework for conversion between truncated Groebner bases. The generic Groebner walk generalizes naturally to this setting by using the Buchberger algorithm with truncation on facets. We specialize to the setting of lattice ideals. Here facets along the generic w...
Foroughi Pour, Ali; Dalton, Lori A
2018-03-21
Many bioinformatics studies aim to identify markers, or features, that can be used to discriminate between distinct groups. In problems where strong individual markers are not available, or where interactions between gene products are of primary interest, it may be necessary to consider combinations of features as a marker family. To this end, recent work proposes a hierarchical Bayesian framework for feature selection that places a prior on the set of features we wish to select and on the label-conditioned feature distribution. While an analytical posterior under Gaussian models with block covariance structures is available, the optimal feature selection algorithm for this model remains intractable since it requires evaluating the posterior over the space of all possible covariance block structures and feature-block assignments. To address this computational barrier, in prior work we proposed a simple suboptimal algorithm, 2MNC-Robust, with robust performance across the space of block structures. Here, we present three new heuristic feature selection algorithms. The proposed algorithms outperform 2MNC-Robust and many other popular feature selection algorithms on synthetic data. In addition, enrichment analysis on real breast cancer, colon cancer, and leukemia data indicates they also output many of the genes and pathways linked to the cancers under study. Bayesian feature selection is a promising framework for small-sample, high-dimensional data, in particular biomarker discovery applications. When applied to cancer data, these algorithms output many genes already shown to be involved in cancer, as well as potentially new biomarkers. Furthermore, one of the proposed algorithms, SPM, outputs blocks of heavily correlated genes, which is particularly useful for studying gene interactions and gene networks.
An efficient algorithm for removal of inactive blocks in reservoir simulation
Abou-Kassem, J.H.; Ertekin, T. (Pennsylvania State Univ., PA (United States))
1992-02-01
In the efficient simulation of reservoirs having irregular boundaries, one is confronted with two problems: the removal of inactive blocks at the matrix level, and the development and application of a variable band-width solver. A simple algorithm is presented that provides effective solutions to both problems. The algorithm is demonstrated for both the natural ordering and D4 ordering schemes. It can easily be incorporated into existing simulators and results in significant savings in CPU time and matrix storage. The removal of the inactive blocks at the matrix level plays the major role in effecting these savings, whereas the application of a variable band-width solver plays only an enhancing role. The value of this algorithm lies in the fact that it takes advantage of the irregular reservoir boundaries that are invariably encountered in almost all practical applications of reservoir simulation. 11 refs., 3 figs., 3 tabs.
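The matrix-level removal of inactive blocks amounts to a renumbering step, which can be sketched as follows; the grid and helper below are invented for illustration and are not the paper's implementation.

```python
# Sketch of inactive-block removal: active blocks get compact equation
# numbers in natural (row-by-row) ordering, inactive blocks get none, so
# the coefficient matrix is assembled only for active unknowns.

def number_active(active):
    # active: 2-D list of booleans, active[j][i] for row j, column i
    mapping, n = {}, 0
    for j, row in enumerate(active):
        for i, flag in enumerate(row):
            if flag:
                mapping[(i, j)] = n   # compact index used in the solver
                n += 1
    return mapping, n

# an irregular reservoir: corners of the 3x3 grid lie outside the boundary
active = [[False, True, False],
          [True,  True, True ],
          [False, True, False]]
mapping, n_active = number_active(active)
assert n_active == 5                  # a 5x5 system instead of 9x9
assert mapping[(1, 0)] == 0 and mapping[(1, 2)] == 4
```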
A novel directional asymmetric sampling search algorithm for fast block-matching motion estimation
Li, Yue-e.; Wang, Qiang
2011-11-01
This paper proposes a novel directional asymmetric sampling search (DASS) algorithm for video compression. Making full use of the error information (block distortions) of the search patterns, eight different directional search patterns are designed for various situations. A local sampling search strategy is employed for the search of large motion vectors. To further speed up the search, an early termination strategy is adopted in the DASS procedure. Compared to conventional fast algorithms, the proposed method achieves the most satisfactory PSNR values for all test sequences.
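For context, the exhaustive full-search block matching that fast algorithms like DASS are designed to beat can be sketched as follows; the tiny frames are invented.

```python
# Generic full-search block matching with a sum-of-absolute-differences
# (SAD) cost: test every displacement in a +/-search window.

def sad(ref, cur, bx, by, dx, dy, bs):
    total = 0
    for y in range(bs):
        for x in range(bs):
            total += abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
    return total

def best_vector(ref, cur, bx, by, bs=2, search=1):
    best, best_cost = (0, 0), sad(ref, cur, bx, by, 0, 0, bs)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            in_bounds = (0 <= by + dy and by + dy + bs <= len(ref) and
                         0 <= bx + dx and bx + dx + bs <= len(ref[0]))
            if in_bounds:
                cost = sad(ref, cur, bx, by, dx, dy, bs)
                if cost < best_cost:
                    best, best_cost = (dx, dy), cost
    return best

ref = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 6, 0],
       [0, 0, 0, 0]]
# the 2x2 block moved one pixel right and one pixel down in the current frame
cur = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 9, 8],
       [0, 0, 7, 6]]
assert best_vector(ref, cur, 2, 2) == (-1, -1)
```

DASS and similar pattern searches probe only a handful of these candidate displacements, steered by the block-distortion values, which is where their speedup comes from.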
A fast image encryption algorithm based on only blocks in cipher text
Wang, Xing-Yuan; Wang, Qian
2014-03-01
In this paper, a fast image encryption algorithm is proposed in which the shuffling and diffusion are performed simultaneously. The cipher-text image is divided into blocks of k × k pixels each, while the pixels of the plain text are scanned one by one. Four logistic maps are used to generate the encryption key stream and the new position of each plain-image pixel in the cipher image, namely the row and column of the block to which the pixel belongs and the position within that block where the pixel will be placed. After encrypting each pixel, the initial conditions of the logistic maps are changed according to the encrypted pixel's value; after encrypting each row of the plain image, the initial condition is also changed by the skew tent map. Finally, it is illustrated that this algorithm has a faster speed, a big key space, and better properties in withstanding differential attacks, statistical analysis, known-plaintext, and chosen-plaintext attacks.
Video error concealment using block matching and frequency selective extrapolation algorithms
P. K., Rajani; Khaparde, Arti
2017-06-01
Error concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. Recovering distorted video is very important because video is used in applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the block matching error concealment algorithm is compared with the frequency selective extrapolation algorithm. Both methods are evaluated on video frames with manually introduced errors. The parameters used for objective quality measurement are PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index). The original video frames along with the erroneous video frames are processed by both error concealment algorithms. According to the simulation results, frequency selective extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM than the block matching algorithm.
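The PSNR measure used in the comparison above is straightforward to compute; the tiny 8-bit frames below are invented.

```python
import math

# PSNR between an original and a concealed frame: mean squared error
# relative to the peak pixel value, on a log scale in dB.

def psnr(original, restored, peak=255.0):
    mse = sum((a - b) ** 2
              for ra, rb in zip(original, restored)
              for a, b in zip(ra, rb)) / (len(original) * len(original[0]))
    if mse == 0:
        return float("inf")          # identical frames
    return 10.0 * math.log10(peak ** 2 / mse)

orig = [[52, 55], [61, 66]]
concealed = [[52, 54], [62, 66]]     # two pixels off by one, MSE = 0.5
value = psnr(orig, concealed)
assert 51.0 < value < 51.5           # about 51.1 dB
```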
A block matching-based registration algorithm for localization of locally advanced lung tumors
Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D., E-mail: gdhugo@vcu.edu [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia, 23298 (United States)
2014-04-15
Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm³), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0.001).
Apurva Samdurkar
2018-06-01
Object tracking is one of the main fields within computer vision. Amongst the various methods for object detection and tracking, the background subtraction approach makes detection of the object easier. The proposed block matching algorithm is then applied to the detected object to generate motion vectors. The existing diamond search (DS) and cross diamond search (CDS) algorithms are studied, and experiments are carried out on various standard and user-defined video data sets. Based on the study and analysis of these two existing algorithms, a modified diamond search (MDS) algorithm is proposed that uses a small diamond-shaped search pattern in the initial step and a large diamond-shaped pattern (LDS) in further steps for motion estimation. The initial search pattern consists of five points in a small diamond shape and gradually grows into a large diamond-shaped pattern, based on the point with minimum cost function; the search ends with the small diamond pattern. The proposed MDS algorithm finds smaller motion vectors and uses fewer search points than the existing DS and CDS algorithms. Further, object detection is carried out using the background subtraction approach, and finally, the MDS motion estimation algorithm is used for tracking the object in color video sequences. The experiments are carried out on different video data sets containing a single object. The results are evaluated and compared using evaluation parameters such as average search points per frame and average computation time per frame. The experimental results show that MDS performs better than DS and CDS on average search points and average computation time.
An enhanced block matching algorithm for fast elastic registration in adaptive radiotherapy
Malsch, U; Thieke, C; Huber, P E; Bendl, R
2006-01-01
Image registration has many medical applications in diagnosis, therapy planning and therapy. Especially for time-adaptive radiotherapy, an efficient and accurate elastic registration of images acquired for treatment planning, and at the time of the actual treatment, is highly desirable. Therefore, we developed a fully automatic and fast block matching algorithm which identifies a set of anatomical landmarks in a 3D CT dataset and relocates them in another CT dataset by maximization of local correlation coefficients in the frequency domain. To transform the complete dataset, a smooth interpolation between the landmarks is calculated by modified thin-plate splines with local impact. The concept of the algorithm allows separate processing of image discontinuities like temporally changing air cavities in the intestinal tract or rectum. The result is a fully transformed 3D planning dataset (planning CT as well as delineations of tumour and organs at risk) to a verification CT, allowing evaluation and, if necessary, changes of the treatment plan based on the current patient anatomy without time-consuming manual re-contouring. Typically the total calculation time is less than 5 min, which allows the use of the registration tool between acquiring the verification images and delivering the dose fraction for online corrections. We present verifications of the algorithm for five different patient datasets with different tumour locations (prostate, paraspinal and head-and-neck) by comparing the results with manually selected landmarks, visual assessment and consistency testing. It turns out that the mean error of the registration is better than the voxel resolution (2 × 2 × 3 mm³). In conclusion, we present an algorithm for fully automatic elastic image registration that is precise and fast enough for online corrections in an adaptive fractionated radiation treatment course.
Study and optimization of positioning algorithms for monolithic PET detectors blocks
Acilu, P Garcia de; Sarasola, I; Canadas, M; Cuerdo, R; Mendes, P Rato; Romero, L; Willmott, C
2012-01-01
We are developing a PET insert for existing MRI equipment to be used in clinical PET/MR studies of the human brain. The proposed scanner is based on annihilation gamma detection with monolithic blocks of cerium-doped lutetium yttrium orthosilicate (LYSO:Ce) coupled to magnetically-compatible avalanche photodiode (APD) matrices. The light distribution generated on the LYSO:Ce block provides the impinging position of the 511 keV photons by means of a positioning algorithm. Several positioning methods, from the simplest Anger logic to more sophisticated supervised-learning neural networks (NN), can be implemented to extract the incidence position of gammas directly from the APD signals. An optimal method based on a two-step feed-forward neural network has been selected. It allows us to reach a resolution at detector level of 2 mm, and to acquire images of point sources using a first BrainPET prototype consisting of two monolithic blocks working in coincidence. Neural networks provide straightforward positioning of the acquired data once they have been trained; however, the training process is usually time-consuming. In order to obtain an efficient positioning method for the complete scanner, it was necessary to find a training procedure that reduces the data acquisition and processing time without introducing a noticeable degradation of the spatial resolution. A grouping process and posterior selection of the training data have been carried out based on the similarity of the light distributions of events that share one incident coordinate (transversal or longitudinal). By doing this, the amount of training data can be reduced to about 5% of the initial number with a degradation of spatial resolution lower than 10%.
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2018-05-01
This article presents an effective estimation of distribution algorithm, named P-EDA, to solve the blocking flow-shop scheduling problem (BFSP) with the makespan criterion. In the P-EDA, a Nawaz-Enscore-Ham (NEH)-based heuristic and a random method are combined to generate the initial population. Based on several superior individuals provided by a modified linear rank selection, a probabilistic model is constructed to describe the probabilistic distribution of the promising solution space. The path relinking technique is incorporated into the EDA to avoid a blind search and improve the convergence property. A modified referenced local search is designed to enhance local exploitation. Moreover, a diversity-maintaining scheme is introduced into the EDA to avoid deterioration of the population. Finally, the parameters of the proposed P-EDA are calibrated using a design-of-experiments approach. Simulation results and comparisons with some well-performing algorithms demonstrate the effectiveness of the P-EDA for solving the BFSP.
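The NEH heuristic used to seed the initial population can be sketched as follows. Note this sketch uses the ordinary (non-blocking) permutation flow-shop makespan recurrence, not the blocking variant the paper targets, and the processing times are invented.

```python
# NEH heuristic: order jobs by decreasing total work, then insert each job
# at the position of the partial sequence that minimizes makespan.

def makespan(seq, p):
    # p[j][m] = processing time of job j on machine m (no blocking)
    machines = len(p[0])
    c = [0.0] * machines
    for j in seq:
        c[0] += p[j][0]
        for m in range(1, machines):
            c[m] = max(c[m], c[m - 1]) + p[j][m]
    return c[-1]

def neh(p):
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in jobs:
        candidates = [seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(s, p))
    return seq

p = [[3, 4, 6],   # job 0 on machines 0..2
     [5, 2, 3],
     [2, 6, 1],
     [4, 3, 5]]
seq = neh(p)
assert sorted(seq) == [0, 1, 2, 3]
assert makespan(seq, p) <= makespan([0, 1, 2, 3], p)
```

In the P-EDA, sequences produced this way are mixed with random permutations to give the initial population both quality and diversity.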
Ramachandran, Ganesh K.; Akopian, David; Heckler, Gregory W.; Winternitz, Luke B.
2011-01-01
Location technologies have many applications in wireless communications, military and space missions, etc. The US Global Positioning System (GPS) and other existing and emerging Global Navigation Satellite Systems (GNSS) are expected to provide accurate location information to enable such applications. While GNSS systems perform very well in strong-signal conditions, their operation in many urban, indoor, and space applications is not robust, or even impossible, due to weak signals and strong distortions. The search for less costly, faster, and more sensitive receivers is still in progress. As the research community addresses more and more complicated phenomena, there is a demand for flexible multimode reference receivers, associated SDKs, and development platforms which may accelerate and facilitate research. One such concept is the software GPS/GNSS receiver (GPS SDR), which permits easy access to algorithmic libraries and the possibility of integrating more advanced algorithms without hardware or essential software updates. The GNU-SDR and GPS-SDR open-source receiver platforms are popular examples. This paper evaluates the performance of recently proposed block-correlator techniques for acquisition and tracking of GPS signals using the open-source GPS-SDR platform.
Truncation correction for oblique filtering lines
Hoppe, Stefan; Hornegger, Joachim; Lauritsch, Guenter; Dennerlein, Frank; Noo, Frederic
2008-01-01
State-of-the-art filtered backprojection (FBP) algorithms often define the filtering operation to be performed along oblique filtering lines in the detector. A limited scan field of view leads to the truncation of those filtering lines, which causes artifacts in the final reconstructed volume. In contrast to the case where filtering is performed solely along the detector rows, no methods are available for the case of oblique filtering lines. In this work, the authors present two novel truncation correction methods which effectively handle data truncation in this case. Method 1 (basic approach) handles data truncation in two successive preprocessing steps by applying a hybrid data extrapolation method, which is a combination of a water cylinder extrapolation and a Gaussian extrapolation. It is independent of any specific reconstruction algorithm. Method 2 (kink approach) uses similar concepts for data extrapolation as the basic approach but needs to be integrated into the reconstruction algorithm. Experiments are presented from simulated data of the FORBILD head phantom, acquired along a partial-circle-plus-arc trajectory. The theoretically exact M-line algorithm is used for reconstruction. Although the discussion is focused on theoretically exact algorithms, the proposed truncation correction methods can be applied to any FBP algorithm that exposes oblique filtering lines.
Lin, Ying Chih; Lu, Chin Lung; Chang, Hwan-You; Tang, Chuan Yi
2005-01-01
In the study of genome rearrangement, block-interchanges have been proposed recently as a new kind of global rearrangement event affecting a genome by swapping two nonintersecting segments of any length. The so-called block-interchange distance problem, which is equivalent to the sorting-by-block-interchange problem, is to find a minimum series of block-interchanges for transforming one chromosome into another. In this paper, we study this problem by considering circular chromosomes and propose an O(δn) time algorithm for solving it by making use of permutation groups in algebra, where n is the length of the circular chromosome and δ is the minimum number of block-interchanges required for the transformation, which can be calculated in O(n) time in advance. Moreover, we obtain analogous results by extending our algorithm to linear chromosomes. Finally, we have implemented our algorithm and applied it to the circular genomic sequences of three human Vibrio pathogens for predicting their evolutionary relationships. Consequently, our experimental results coincide with the previous ones obtained by others using a different comparative genomics approach, which implies that block-interchange events seem to play a significant role in the evolution of Vibrio species.
Yu, Shuzhi; Hao, Fanchang; Leong, Hon Wai
2016-02-01
We consider the problem of sorting signed permutations by reversals, transpositions, transreversals, and block-interchanges. The problem arises in the study of species evolution via large-scale genome rearrangement operations. Recently, Hao et al. gave a 2-approximation scheme called genome sorting by bridges (GSB) for solving this problem. Their result extended and unified the results of (i) He and Chen - a 2-approximation algorithm allowing reversals, transpositions, and block-interchanges (by also allowing transreversals) and (ii) Hartman and Sharan - a 1.5-approximation algorithm allowing reversals, transpositions, and transreversals (by also allowing block-interchanges). The GSB result is based on the introduction of three bridge structures in the breakpoint graph, namely the L-bridge, T-bridge, and X-bridge, which model a good reversal, a transposition/transreversal, and a block-interchange, respectively. However, the paper by Hao et al. focused on proving the 2-approximation GSB scheme and only mentioned a straightforward [Formula: see text] algorithm. In this paper, we give an [Formula: see text] algorithm for implementing the GSB scheme. The key idea behind our faster GSB algorithm is to represent cycles in the breakpoint graph by their canonical sequences, which greatly simplifies the search for these bridge structures. We also give some comparison results (running time and computed distances) against the original GSB implementation.
Balanced truncation for linear switched systems
Petreczky, Mihaly; Wisniewski, Rafal; Leth, John-Josef
2013-01-01
In this paper, we present a theoretical analysis of the model reduction algorithm for linear switched systems from Shaker and Wisniewski (2011, 2009). This algorithm is reminiscent of the balanced truncation method for linear parameter varying systems (Wood et al., 1996) [3]. Specifically...
R Programs for Truncated Distributions
Saralees Nadarajah
2006-08-01
Truncated distributions arise naturally in many practical situations. In this note, we provide programs for computing six quantities of interest (probability density function, mean, variance, cumulative distribution function, quantile function and random numbers) for any truncated distribution, whether it is left truncated, right truncated or doubly truncated. The programs are written in R, a freely downloadable statistical software.
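The quantities described in this note generalize to any continuous distribution. The sketch below is not the authors' R code; it is a minimal Python analogue, assuming a frozen `scipy.stats` distribution, that computes the truncated density and mean for arbitrary truncation points:

```python
import numpy as np
from scipy import integrate


def truncated_pdf(dist, x, a=-np.inf, b=np.inf):
    """Density of `dist` truncated to [a, b]: f(x) / (F(b) - F(a)) inside, 0 outside."""
    mass = dist.cdf(b) - dist.cdf(a)
    x = np.asarray(x, dtype=float)
    inside = (x >= a) & (x <= b)
    return np.where(inside, dist.pdf(x) / mass, 0.0)


def truncated_mean(dist, a=-np.inf, b=np.inf):
    """Mean of the truncated distribution, by numerical integration."""
    mass = dist.cdf(b) - dist.cdf(a)
    val, _ = integrate.quad(lambda t: t * dist.pdf(t), a, b)
    return val / mass
```

For a standard normal left-truncated at zero, `truncated_mean` evaluates to sqrt(2/π) ≈ 0.798, matching the closed-form result.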
Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.
Böhning, Dankmar; Kuhnert, Ronny
2006-12-01
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
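The equivalence stated in the abstract is easy to verify numerically for zero-truncated Poisson mixtures. The following sketch (illustrative, not the authors' code) constructs both models and the weight mapping w_j proportional to pi_j * (1 - f_j(0)) under which they agree:

```python
import numpy as np
from scipy import stats


def truncated_mixture_pmf(k, weights, rates):
    """Zero-truncated mixture of Poissons: mix first, then truncate at zero."""
    num = sum(w * stats.poisson.pmf(k, lam) for w, lam in zip(weights, rates))
    p0 = sum(w * stats.poisson.pmf(0, lam) for w, lam in zip(weights, rates))
    return num / (1.0 - p0)


def mixture_of_truncated_pmf(k, weights, rates):
    """Mixture of zero-truncated Poissons: truncate each component first."""
    return sum(w * stats.poisson.pmf(k, lam) / (1.0 - stats.poisson.pmf(0, lam))
               for w, lam in zip(weights, rates))


def matched_weights(weights, rates):
    """Mixing weights making the mixture of truncated densities equal the
    truncated mixture: w_j proportional to pi_j * (1 - f_j(0))."""
    w = np.asarray(weights) * (1.0 - stats.poisson.pmf(0, np.asarray(rates)))
    return w / w.sum()
```

With the matched weights, the two pmfs coincide for every positive count, which is exactly the equivalence the paper establishes in general.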
A scalable community detection algorithm for large graphs using stochastic block models
Peng, Chengbin; Zhang, Zhihua; Wong, Ka-Chun; Zhang, Xiangliang; Keyes, David E.
2017-01-01
Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of
ALGORITHMIC FACILITIES AND SOFTWARE FOR VIRTUAL DESIGN OF ANTI-BLOCK AND COUNTER-SLIPPING SYSTEMS
N. N. Hurski
2009-01-01
The paper considers algorithms for designing a roadway surface for virtual testing of mobile machine movement dynamics; an algorithm for forming actual values of forces/moments at the «road–wheel–car» contact and their derivatives; and software for the virtual design of mobile machine dynamics.
Xu, Shaoping; Hu, Lingyan; Yang, Xiaohui
2016-01-01
The performance of conventional denoising algorithms is usually controlled by one or several parameters whose optimal settings depend on the contents of the processed images and the characteristics of the noise. Among these parameters, the noise level is a fundamental one that is always assumed to be known by most existing denoising algorithms (so-called nonblind denoising algorithms), which largely limits their applicability in many applications. Moreover, these nonblind algorithms do not always achieve the best visual quality in the denoised images even when fed with the actual noise level parameter. To address these shortcomings, in this paper we propose a new quality-aware features-based noise level estimator (NLE), which consists of quality-aware feature extraction and optimal noise level parameter prediction. First, considering that image local contrast features convey important structural information that is closely related to image perceptual quality, we utilize the marginal statistics of two local contrast operators, i.e., the gradient magnitude and the Laplacian of Gaussian (LOG), to extract quality-aware features. The proposed quality-aware features have very low computational complexity, making them well suited for time-constrained applications. Then we propose a learning-based framework where the noise level parameter is estimated from the quality-aware features. Based on the proposed NLE, we develop a blind block matching and three-dimensional filtering (BBM3D) denoising algorithm which is capable of effectively removing additive white Gaussian noise, even coupled with impulse noise. The noise level parameter of the BBM3D algorithm is automatically tuned according to the quality-aware features, guaranteeing the best performance. As such, the classical block matching and three-dimensional filtering algorithm can be transformed into a blind one in an unsupervised manner. Experimental results demonstrate that the
Shepard, A; Bednarz, B [University of Wisconsin, Madison, WI (United States)
2016-06-15
Purpose: To develop an ultrasound learning-based tracking algorithm with the potential to provide real-time motion traces of anatomy-based fiducials that may aid in the effective delivery of external beam radiation. Methods: The algorithm was developed in Matlab R2015a and consists of two main stages: reference frame selection, and localized block matching. Immediately following frame acquisition, a normalized cross-correlation (NCC) similarity metric is used to determine a reference frame most similar to the current frame from a series of training set images that were acquired during a pretreatment scan. Segmented features in the reference frame provide the basis for the localized block matching to determine the feature locations in the current frame. The boundary points of the reference frame segmentation are used as the initial locations for the block matching and NCC is used to find the most similar block in the current frame. The best matched block locations in the current frame comprise the updated feature boundary. The algorithm was tested using five features from two sets of ultrasound patient data obtained from MICCAI 2014 CLUST. Due to the lack of a training set associated with the image sequences, the first 200 frames of the image sets were considered a valid training set for preliminary testing, and tracking was performed over the remaining frames. Results: Tracking of the five vessel features resulted in an average tracking error of 1.21 mm relative to predefined annotations. The average analysis rate was 15.7 FPS with analysis for one of the two patients reaching real-time speeds. Computations were performed on an i5-3230M at 2.60 GHz. Conclusion: Preliminary tests show tracking errors comparable with similar algorithms at close to real-time speeds. Extension of the work onto a GPU platform has the potential to achieve real-time performance, making tracking for therapy applications a feasible option. This work is partially funded by NIH grant R01CA
Yu-Fei Gao
2017-04-01
This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation of array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1, L2, ·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.
F. F. Ngwane
2015-01-01
We propose a block hybrid trigonometrically fitted (BHT) method, whose coefficients are functions of the frequency and the step-size, for directly solving general second-order initial value problems (IVPs), including systems arising from the semidiscretization of hyperbolic Partial Differential Equations (PDEs), such as the Telegraph equation. The BHT is formulated from eight discrete hybrid formulas which are provided by a continuous two-step hybrid trigonometrically fitted method with two off-grid points. The BHT is implemented in a block-by-block fashion; in this way, the method does not suffer from the disadvantages of requiring starting values and predictors which are inherent in predictor-corrector methods. The stability property of the BHT is discussed and the performance of the method is demonstrated on some numerical examples to show its accuracy and efficiency advantages.
Yeh, Chun-Ting; Brunette, T J; Baker, David; McIntosh-Smith, Simon; Parmeggiani, Fabio
2018-02-01
Computational protein design methods have enabled the design of novel protein structures, but they are often still limited to small proteins and symmetric systems. To expand the size of designable proteins while controlling the overall structure, we developed Elfin, a genetic algorithm for the design of novel proteins with custom shapes using structural building blocks derived from experimentally verified repeat proteins. By combining building blocks with compatible interfaces, it is possible to rapidly build non-symmetric large structures (>1000 amino acids) that match three-dimensional geometric descriptions provided by the user. A run time of about 20 min on a laptop computer for a 3000 amino acid structure makes Elfin accessible to users with limited computational resources. Protein structures with controlled geometry will allow the systematic study of the effect of spatial arrangement of enzymes and signaling molecules, and provide new scaffolds for functional nanomaterials. Copyright © 2017 Elsevier Inc. All rights reserved.
Approximate truncation robust computed tomography—ATRACT
Dennerlein, Frank; Maier, Andreas
2013-01-01
We present an approximate truncation robust algorithm to compute tomographic images (ATRACT). This algorithm aims at reconstructing volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest, without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of x-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and that involves, as one component, a novel formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed and reconstruction results from both simulated projections and first clinical data sets are presented. (paper)
Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G
2011-07-01
In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation we compared the execution time of GPU and CPU implementations for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for an integer search grid and 1000 times for a non-integer search grid. The additional speedup for the non-integer search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full-grid-search CPU based motion estimation methods, namely the implementation of the Pyramidal Lucas Kanade Optical Flow algorithm in OpenCV and the Simplified Unsymmetrical multi-Hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though the computational complexity of the FS GPU implementation is substantially higher than the non-FS CPU implementations. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards.
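For reference, the CPU baseline being accelerated here is the classical full-grid-search SAD block matcher. A minimal Python/NumPy sketch (illustrative only; the paper's implementation is CUDA, and block size and search range below are arbitrary defaults) is:

```python
import numpy as np


def full_search_sad(ref, cur, block=8, search=4):
    """Full-grid-search block matching with the SAD criterion.
    Returns an array of (dy, dx) motion vectors, one per block of `cur`,
    pointing to the best-matching block in `ref`."""
    H, W = cur.shape
    mvs = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y0, x0 = by * block, bx * block
            target = cur[y0:y0 + block, x0:x0 + block].astype(np.int64)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = y0 + dy, x0 + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue  # candidate block outside the reference frame
                    cand = ref[y:y + block, x:x + block].astype(np.int64)
                    sad = np.abs(target - cand).sum()
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs
```

The exhaustive double loop over (dy, dx) is exactly the part that maps well onto a GPU grid, since every candidate displacement is independent.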
Motion Vector Estimation Using Line-Square Search Block Matching Algorithm for Video Sequences
Guo Bao-long
2004-09-01
Motion estimation and compensation techniques are widely used for video coding applications, but real-time motion estimation is not easily achieved due to its enormous computational load. In this paper, a new fast motion estimation algorithm based on line search is presented, in which computation complexity is greatly reduced by using the line search strategy and a parallel search pattern. Moreover, accurate search is achieved because the small square search pattern is used. It has a best-case scenario of only 9 search points, which is 4 search points less than the diamond search algorithm. Simulation results show that, compared with the previous techniques, the LSPS algorithm significantly reduces the computational requirements for finding motion vectors, and also produces close performance in terms of motion compensation errors.
Cryptanalysis on an image block encryption algorithm based on spatiotemporal chaos
Wang Xing-Yuan; He Guo-Xiang
2012-01-01
An image block encryption scheme based on spatiotemporal chaos has been proposed recently. In this paper, we analyse the security weakness of the proposal. The main problem of the original scheme is that the generated keystream remains unchanged for encrypting every image. Based on the flaws, we demonstrate a chosen plaintext attack for revealing the equivalent keys with only 6 pairs of plaintext/ciphertext used. Finally, experimental results show the validity of our attack. (general)
Blocked All-Pairs Shortest Paths Algorithm on Intel Xeon Phi KNL Processor: A Case Study
Rucci, Enzo; De Giusti, Armando Eduardo; Naiouf, Marcelo
2017-01-01
Manycore processors are consolidating in the HPC community as a way of improving performance while keeping power efficiency. Knights Landing is the recently released second generation of the Intel Xeon Phi architecture. While optimizing applications on CPUs, GPUs and first-generation Xeon Phis has been largely studied in recent years, the new features in Knights Landing processors require the revision of programming and optimization techniques for these devices. In this work, we selected the Floyd-Warshall algorithm ...
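The Floyd-Warshall algorithm selected above is usually blocked (tiled) for such cache-oriented optimization, so that each phase works on tiles that fit in fast memory. A minimal Python sketch of the standard three-phase blocked scheme (illustrative; the paper targets optimized native Xeon Phi code, and the tile size here is an arbitrary default) is:

```python
import numpy as np


def blocked_floyd_warshall(dist, B=2):
    """Blocked (tiled) Floyd-Warshall all-pairs shortest paths.
    `dist` is an n x n matrix (np.inf where no edge); n must be a multiple of B."""
    n = dist.shape[0]
    D = dist.copy()

    def relax(C, A, Bt):
        # C[i, j] = min(C[i, j], A[i, k] + Bt[k, j]) for each k in the tile
        for k in range(A.shape[1]):
            C[:] = np.minimum(C, A[:, k:k + 1] + Bt[k:k + 1, :])

    nb = n // B
    for kb in range(nb):
        K = slice(kb * B, (kb + 1) * B)
        relax(D[K, K], D[K, K], D[K, K])           # phase 1: diagonal tile
        for jb in range(nb):                        # phase 2: tile row
            if jb != kb:
                J = slice(jb * B, (jb + 1) * B)
                relax(D[K, J], D[K, K], D[K, J])
        for ib in range(nb):                        # phase 2: tile column
            if ib != kb:
                I = slice(ib * B, (ib + 1) * B)
                relax(D[I, K], D[I, K], D[K, K])
        for ib in range(nb):                        # phase 3: independent tiles
            for jb in range(nb):
                if ib != kb and jb != kb:
                    I = slice(ib * B, (ib + 1) * B)
                    J = slice(jb * B, (jb + 1) * B)
                    relax(D[I, J], D[I, K], D[K, J])
    return D
```

Phase 3 tiles have no mutual dependencies, which is what makes the blocked formulation attractive for manycore parallelization and vectorization.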
FBCOT: a fast block coding option for JPEG 2000
Taubman, David; Naman, Aous; Mathew, Reji
2017-09-01
Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs with only modest loss in coding efficiency. The proposed algorithm is termed FBCOT (Fast Block Coding with Optimized Truncation).
Lalush, D.S.; Tsui, B.M.W.; Karimi, S.S.
1996-01-01
We evaluate fast reconstruction algorithms including ordered subsets-EM (OS-EM) and Rescaled Block Iterative EM (RBI-EM) in fully 3D SPECT applications on the basis of their convergence and resolution recovery properties as iterations proceed. Using a 3D computer-simulated phantom consisting of 3D Gaussian objects, we simulated projection data that includes only the effects of sampling and detector response of a parallel-hole collimator. Reconstructions were performed using each of the three algorithms (ML-EM, OS-EM, and RBI-EM) modeling the 3D detector response in the projection function. Resolution recovery was evaluated by fitting Gaussians to each of the four objects in the iterated image estimates at selected intervals. Results show that OS-EM and RBI-EM behave identically in this case; their resolution recovery results are virtually indistinguishable. Their resolution behavior appears to be very similar to that of ML-EM, but accelerated by a factor of twenty. For all three algorithms, smaller objects take more iterations to converge. Next, we consider the effect noise has on convergence. For both noise-free and noisy data, we evaluate the log likelihood function at each subiteration of OS-EM and RBI-EM, and at each iteration of ML-EM. With noisy data, both OS-EM and RBI-EM give results for which the log-likelihood function oscillates. Especially for 180-degree acquisitions, RBI-EM oscillates less than OS-EM. Both OS-EM and RBI-EM appear to converge to solutions, but not to the ML solution. We conclude that both OS-EM and RBI-EM can be effective algorithms for fully 3D SPECT reconstruction. Both recover resolution similarly to ML-EM, only more quickly
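The OS-EM scheme evaluated above applies the multiplicative EM update once per projection subset rather than once per full pass over the data. A toy Python sketch of the subset update (a generic nonnegative linear-system version, not the authors' SPECT implementation with 3D detector response modeling; the striding subset rule is an assumption) is:

```python
import numpy as np


def os_em(A, y, n_subsets=4, n_iter=500, eps=1e-12):
    """Ordered-subsets EM for y ≈ A @ x with nonnegative x (toy sketch).
    Subsets are formed by striding over the projection rows; each subset
    triggers one multiplicative EM update, accelerating plain ML-EM."""
    m, n = A.shape
    x = np.ones(n)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            proj = As @ x                        # forward projection
            ratio = ys / np.maximum(proj, eps)   # measured / estimated
            # backproject the ratio and normalize by the subset sensitivity
            x = x * (As.T @ ratio) / np.maximum(As.T @ np.ones(len(rows)), eps)
    return x
```

With S subsets, each full pass performs S updates instead of one, which is the source of the roughly subset-fold acceleration over ML-EM reported in such studies; RBI-EM differs mainly in how the subset updates are rescaled.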
Lalush, D. S.; Tsui, B. M. W.
1998-06-01
We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, the RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
Radial motion of the carotid artery wall: A block matching algorithm approach
Effat Soleimani
2012-06-01
Introduction: During recent years, evaluating the relation between mechanical properties of the arterial wall and cardiovascular diseases has been of great importance. On the other hand, motion estimation of the arterial wall using a sequence of noninvasive ultrasonic images and convenient processing methods might provide useful information related to biomechanical indexes and elastic properties of the arteries and assist doctors to discriminate between healthy and diseased arteries. In the present study, a block matching based algorithm was introduced to extract radial motion of the carotid artery wall during cardiac cycles. Materials and Methods: The program was applied to the consecutive ultrasonic images of the common carotid artery of 10 healthy men, and the maximum and mean radial movement of the posterior wall of the artery was extracted. Manual measurements were carried out to validate the automatic method and the results of the two methods were compared. Results: Paired t-test analysis showed no significant differences between the automatic and manual methods (P>0.05). There was significant correlation between the changes in the instantaneous radial movement of the common carotid artery measured with the manual and automatic methods (with correlation coefficient 0.935 and P<0.05). Conclusion: Results of the present study showed that by using a semi-automated computer analysis method, with minimal user interference and no dependence on user experience or skill, arterial wall motion in the radial direction can be extracted from consecutive ultrasonic frames.
Ummuhan Basaran Filik
2016-01-01
A new hybrid wind speed prediction approach, which uses the fast block least mean square (FBLMS) algorithm and an artificial neural network (ANN), is proposed. FBLMS is an adaptive algorithm with reduced complexity and a very fast convergence rate. The hybrid approach combines these two powerful methods. In order to show the efficiency and accuracy of the proposed approach, seven years of real hourly wind speed data collected by the Turkish State Meteorological Service for the Bozcaada and Eskisehir regions are used. Two different ANN structures are used for comparison with this approach. The first six years of data are handled as a training set; the remaining one year of hourly data is handled as test data. Mean absolute error (MAE) and root mean square error (RMSE) are used for performance evaluation. It is shown for various cases that the performance of the new hybrid approach gives better results than the conventional ANN structures.
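FBLMS is the frequency-domain (FFT-based) implementation of block LMS: the weights are updated once per block from the gradient accumulated over that block. The sketch below shows only this block-update structure in the time domain, on a hypothetical system-identification task; it is not the paper's wind speed predictor:

```python
import numpy as np


def block_lms(x, d, L=4, B=8, mu=0.05, n_epochs=5):
    """Time-domain block LMS: the L filter weights are updated once per block
    of B samples, using the error gradient accumulated over the block.
    (FBLMS computes the same update with FFTs for speed.)"""
    w = np.zeros(L)
    for _ in range(n_epochs):
        for start in range(L - 1, len(x) - B + 1, B):
            grad = np.zeros(L)
            for i in range(start, start + B):
                u = x[i - L + 1:i + 1][::-1]   # L most recent input samples
                e = d[i] - w @ u               # a priori error
                grad += e * u                  # accumulate over the block
            w += mu * grad / B                 # one update per block
    return w
```

Updating once per block rather than per sample is what allows the convolution and correlation inside the loop to be replaced by FFTs, giving FBLMS its reduced complexity at essentially the same convergence behavior.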
Properties of truncated multiplicity distributions
Lupia, S.
1995-01-01
Truncation effects on multiplicity distributions are discussed. Observables sensitive to the tail, like factorial moments, factorial cumulants and their ratio, are shown to be strongly affected by truncation. A possible way to overcome this problem by looking at the head of the distribution is suggested. (author)
Mixtures of truncated basis functions
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2012-01-01
In this paper we propose a framework, called mixtures of truncated basis functions (MoTBFs), for representing general hybrid Bayesian networks. The proposed framework generalizes both the mixture of truncated exponentials (MTEs) framework and the mixture of polynomials (MoPs) framework. Similar t...
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
Simon, Vitor Hugo
1997-12-01
The goal of this work was the development of an algorithm for Truncated Plurigaussian stochastic simulation and its validation on a complex geologic model. The reservoir data come from the Aux Vases Zone at Rural Hill Field in Illinois, USA, and from the 2D geological interpretation described by WEIMER et al. (1982), three sets of samples with different grid densities were taken. These sets were used to condition the simulation and to refine the estimates of the non-stationary matrix of facies proportions used to truncate the Gaussian random functions (RF). The Truncated Plurigaussian Model is an extension of the Truncated Gaussian Model (TG). In this new model it is possible to use several facies with different spatial structures, combined with the simplicity of TG. The geological interpretation used as a validation model was chosen because it shows a set of NW/SE elongated tidal channels cutting the NE/SW shoreline deposits interleaved by impermeable facies. These characteristics of the spatial structures of sedimentary facies served to evaluate the simulation model. Two independent Gaussian RFs were used, as well as an 'erosive model' as the truncation strategy. Non-conditional simulations were also performed, using linearly combined Gaussian RFs with varying correlation coefficients. The influence of parameters such as the number of Gaussian RFs, the correlation coefficient, and the truncation strategy on the outcome of the simulation was analyzed, as well as the physical meaning of these parameters from a geological point of view. The theoretical model was presented step by step with an example, showing how to construct an algorithm to simulate with the Truncated Plurigaussian Model. The conclusion of this work was that even with a plain algorithm for the Conditional Truncated Plurigaussian and a complex geological model it is possible to obtain a useful product. (author)
A Post-Truncation Parameterization of Truncated Normal Technical Inefficiency
Christine Amsler; Peter Schmidt; Wen-Jen Tsay
2013-01-01
In this paper we consider a stochastic frontier model in which the distribution of technical inefficiency is truncated normal. In standard notation, technical inefficiency u is distributed as N^+ (μ,σ^2). This distribution is affected by some environmental variables z that may or may not affect the level of the frontier but that do affect the shortfall of output from the frontier. We will distinguish the pre-truncation mean (μ) and variance (σ^2) from the post-truncation mean μ_*=E(u) and var...
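The distinction between the pre- and post-truncation mean can be made concrete with the standard truncated-normal formula. A sketch, assuming u ~ N^+(μ, σ²) denotes a normal truncated below at zero:

```python
from statistics import NormalDist

def post_truncation_mean(mu, sigma):
    """Mean of u ~ N^+(mu, sigma^2), a normal truncated below at zero:
    E(u) = mu + sigma * phi(mu/sigma) / Phi(mu/sigma)."""
    z = mu / sigma
    norm = NormalDist()
    return mu + sigma * norm.pdf(z) / norm.cdf(z)
```

Since the hazard term is positive, E(u) always exceeds the pre-truncation mean μ, which is why the two parameterizations must be kept apart when environmental variables z shift μ.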
Mei-Shiang Chang
2013-01-01
The facility layout problem is a typical combinatorial optimization problem. In this research, a slicing tree representation and a quadratically constrained program model are combined with harmony search to develop a heuristic method for solving the unequal-area block layout problem. Because of the characteristics of the slicing tree structure, we propose a regional structure of harmony memory to memorize facility layout solutions and two kinds of harmony improvisation to enhance the global search ability of the proposed heuristic method. The proposed harmony search based heuristic is tested on 10 well-known unequal-area facility layout problems from the literature. The results are compared with the previously best-known solutions obtained by genetic algorithm, tabu search, and ant system, as well as exact methods. For problems O7, O9, vC10Ra, M11*, and Nug12, new best solutions are found. For the other problems, the proposed approach can find solutions that are very similar to the previous best-known solutions.
Analysis of Block OMP using Block RIP
Wang, Jun; Li, Gang; Zhang, Hao; Wang, Xiqin
2011-01-01
Orthogonal matching pursuit (OMP) is a canonical greedy algorithm for sparse signal reconstruction. When the signal of interest is block sparse, i.e., it has nonzero coefficients occurring in clusters, the block version of OMP algorithm (i.e., Block OMP) outperforms the conventional OMP. In this paper, we demonstrate that a new notion of block restricted isometry property (Block RIP), which is less stringent than standard restricted isometry property (RIP), can be used for a very straightforw...
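The greedy block-selection step of Block OMP can be sketched directly: at each iteration, pick the block of columns most correlated with the residual, then re-fit by least squares over the accumulated support. This is a minimal sketch assuming equal-sized, non-overlapping blocks; the function names are ours.

```python
import numpy as np

def block_omp(A, y, block_size, n_blocks_to_pick):
    """Block OMP sketch: greedily select the block whose columns best
    match the residual, then least-squares refit on the support."""
    n = A.shape[1]
    blocks = [list(range(i, i + block_size)) for i in range(0, n, block_size)]
    support, r = [], y.copy()
    for _ in range(n_blocks_to_pick):
        # Block selection: l2 norm of correlations within each block
        scores = [np.linalg.norm(A[:, b].T @ r) for b in blocks]
        best = int(np.argmax(scores))
        support = sorted(set(support) | set(blocks[best]))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x
```

Under a Block RIP condition of the kind analyzed in the paper, the selected blocks coincide with the true block support and the fit is exact for noiseless block-sparse signals.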
How Truncating Are 'Truncating Languages'? Evidence from Russian and German.
Rathcke, Tamara V
Russian and German have previously been described as 'truncating', or cutting off target frequencies of the phrase-final pitch trajectories when the time available for voicing is compromised. However, supporting evidence is rare and limited to only a few pitch categories. This paper reports a production study conducted to document pitch adjustments to linguistic materials, in which the amount of voicing available for the realization of a pitch pattern varies from relatively long to extremely short. Productions of nuclear H+L*, H* and L*+H pitch accents followed by a low boundary tone were investigated in the two languages. The results of the study show that speakers of both 'truncating languages' do not utilize truncation exclusively when accommodating to different segmental environments. On the contrary, they employ several strategies - among them is truncation but also compression and temporal re-alignment - to produce the target pitch categories under increasing time pressure. Given that speakers can systematically apply all three adjustment strategies to produce some pitch patterns (H* L% in German and Russian) while not using truncation in others (H+L* L% particularly in Russian), we question the effectiveness of the typological classification of these two languages as 'truncating'. Moreover, the phonetic detail of truncation varies considerably, both across and within the two languages, indicating that truncation cannot be easily modeled as a unified phenomenon. The results further suggest that the phrase-final pitch adjustments are sensitive to the phonological composition of the tonal string and the status of a particular tonal event (associated vs. boundary tone), and do not apply to falling vs. rising pitch contours across the board, as previously put forward for German. Implications for the intonational phonology and prosodic typology are addressed in the discussion. © 2017 S. Karger AG, Basel.
Pongpan Nakkaew
2016-06-01
In manufacturing processes where efficiency is crucial to remaining competitive, the flowshop is a common configuration in which machines are arranged in series and products are produced through the stages one by one. In certain production processes, the machines are frequently configured so that each production stage may contain multiple processing units, in parallel or hybrid. Moreover, along with precedence conditions, sequence-dependent setup times may exist. Finally, when there is no buffer, a machine is said to be blocked if the next stage to handle its output is occupied. For such an NP-hard problem, referred to as the Blocking Hybrid Flowshop Scheduling Problem with Sequence Dependent Setup/Changeover Times, it is usually not possible to find the exact solution that satisfies optimization objectives such as minimization of the overall production time. Thus, it is usually solved by approximate algorithms such as metaheuristics. In this paper, we comparatively investigate the effectiveness of two approaches: a Genetic Algorithm (GA) and an Artificial Bee Colony (ABC) algorithm. GA is inspired by the process of natural selection. ABC, in the same manner, resembles the way types of bees perform specific functions and work collectively to find their food by means of division of labor. Additionally, we apply an algorithm to improve the GA and ABC algorithms so that they can take advantage of the parallel processing resources of modern multi-core processors while eliminating the need to screen for the optimal parameters of both algorithms in advance.
Modified BTC Algorithm for Audio Signal Coding
TOMIC, S.
2016-11-01
This paper describes a modification of a well-known image coding algorithm, named Block Truncation Coding (BTC), and its application to audio signal coding. The BTC algorithm was originally designed for black-and-white image coding. Since black-and-white images and audio signals have different statistical characteristics, the application of this image coding algorithm to audio signals presents a novelty and a challenge. Several implementation modifications are described in this paper, while the original idea of the algorithm is preserved. The main modifications are performed in the area of signal quantization, by designing quantizers more adequate for audio signal processing. The result is a novel audio coding algorithm, whose performance is presented and analyzed in this research. The performance analysis indicates that this novel algorithm can be successfully applied to audio signal coding.
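The underlying quantization that the paper modifies can be sketched for a one-dimensional sample block. This is the classic one-bit moment-preserving BTC scheme, not the authors' modified audio quantizers: each block is reduced to a bit plane plus two levels chosen to preserve the block mean and variance.

```python
import math

def btc_encode(block):
    """One-bit BTC of a sample block: a bit plane plus two reconstruction
    levels that preserve the block's mean and variance."""
    n = len(block)
    mean = sum(block) / n
    sigma = math.sqrt(sum((s - mean) ** 2 for s in block) / n)
    bits = [1 if s >= mean else 0 for s in block]
    q = sum(bits)
    if q in (0, n):                      # flat block: one level suffices
        return bits, mean, mean
    low = mean - sigma * math.sqrt(q / (n - q))
    high = mean + sigma * math.sqrt((n - q) / q)
    return bits, low, high

def btc_decode(bits, low, high):
    return [high if b else low for b in bits]
```

Decoding each sample to one of the two levels reproduces the first two moments of the block exactly, which is the property the paper's audio-oriented quantizers refine.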
New results to BDD truncation method for efficient top event probability calculation
Mo, Yuchang; Zhong, Farong; Zhao, Xiangfu; Yang, Quansheng; Cui, Gang
2012-01-01
A Binary Decision Diagram (BDD) is a graph-based data structure that calculates an exact top event probability (TEP). It has been a very difficult task to develop an efficient BDD algorithm that can solve a large problem, since its memory consumption is very high. Recently, in order to solve a large reliability problem within limited computational resources, Jung presented an efficient method to maintain a small BDD size by truncating the BDD during its calculation. In this paper, it is first identified that Jung's BDD truncation algorithm can be improved for more practical use. Then, a more efficient truncation algorithm is proposed, which can generate a truncated BDD with smaller size and an approximate TEP with smaller truncation error. Empirical results showed this new algorithm uses slightly less running time and slightly more storage than Jung's algorithm. It was also found that designing a truncation algorithm with ideal features for every possible fault tree is very difficult, if not impossible. The ideal features referred to here would be that, as the truncation limits decrease, the size of the truncated BDD converges to the size of the exact BDD, but should never become larger than the exact BDD.
Ülker, Erkan; Turanboy, Alparslan
2009-07-01
The block stone industry is one of the main commercial uses of rock. The economic potential of any block quarry depends on the recovery rate, which is defined as the total volume of useful rough blocks extractable from a fixed rock volume in relation to the total volume of moved material. The natural fracture system, the rock type(s) and the extraction method used directly influence the recovery rate. The major aims of this study are to establish a theoretical framework for optimising the extraction process in marble quarries for a given fracture system, and for predicting the recovery rate of the excavated blocks. We have developed a new approach by taking into consideration only the fracture structure for maximum block recovery in block quarries. The complete model uses a linear approach based on basic geometric features of discontinuities for 3D models, a tree structure (TS) for individual investigation and finally a genetic algorithm (GA) for the obtained cuboid volume(s). We tested our new model in a selected marble quarry in the town of İscehisar (AFYONKARAHİSAR, TURKEY).
Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2010-01-01
Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficul...
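An MTE potential is a linear combination of exponential terms on a bounded support. A minimal one-piece evaluator follows; the parameterization is illustrative, and real MTE models are defined piecewise over a partition of the domain:

```python
import math

def mte_density(x, a0, coeffs, support):
    """Evaluate a one-piece MTE potential
    f(x) = a0 + sum_i a_i * exp(b_i * x) on its support interval."""
    low, high = support
    if not (low <= x <= high):
        return 0.0
    return a0 + sum(a * math.exp(b * x) for a, b in coeffs)
```

For example, with a0 = 0 and the single term (c, -1), where c = 1/(1 - e^{-1}), the potential is a proper truncated-exponential density on [0, 1]; fitting such coefficients from data is precisely the estimation problem the paper addresses.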
Truncated Calogero-Sutherland models
Pittman, S. M.; Beau, M.; Olshanii, M.; del Campo, A.
2017-05-01
A one-dimensional quantum many-body system consisting of particles confined in a harmonic potential and subject to finite-range two-body and three-body inverse-square interactions is introduced. The range of the interactions is set by truncation beyond a number of neighbors and can be tuned to interpolate between the Calogero-Sutherland model and a system with nearest and next-nearest neighbors interactions discussed by Jain and Khare. The model also includes the Tonks-Girardeau gas describing impenetrable bosons as well as an extension with truncated interactions. While the ground state wave function takes a truncated Bijl-Jastrow form, collective modes of the system are found in terms of multivariable symmetric polynomials. We numerically compute the density profile, one-body reduced density matrix, and momentum distribution of the ground state as a function of the range r and the interaction strength.
Angular truncation errors in integrating nephelometry
Moosmueller, Hans; Arnott, W. Patrick
2003-01-01
Ideal integrating nephelometers integrate light scattered by particles over all directions. However, real nephelometers truncate light scattered in near-forward and near-backward directions below a certain truncation angle (typically 7 deg). This results in truncation errors, with the forward truncation error becoming important for large particles. Truncation errors are commonly calculated using Mie theory, which offers little physical insight and no generalization to nonspherical particles. We show that large-particle forward truncation errors can be calculated and understood using geometric optics and diffraction theory. For small truncation angles (i.e., <10 deg) typical of modern nephelometers, diffraction theory by itself is sufficient. Forward truncation errors are larger, by nearly a factor of 2, for absorbing particles than for nonabsorbing particles, because for large absorbing particles most of the scattered light is due to diffraction, as transmission is suppressed. Nephelometer calibration procedures are also discussed, as they influence the effective truncation error.
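The truncation error itself is just the fraction of integrated scattering lost outside the sensed angular range. A sketch using the Henyey-Greenstein phase function as a cheap stand-in for a Mie calculation (both the 7 deg cut and the phase-function choice are illustrative assumptions):

```python
import math

def hg_phase(theta, g):
    """Henyey-Greenstein phase function, a stand-in for Mie theory."""
    return (1 - g * g) / (4 * math.pi * (1 + g * g - 2 * g * math.cos(theta)) ** 1.5)

def truncation_error(g, cut_deg, steps=20000):
    """Fraction of scattered light lost when angles below cut_deg (forward)
    and above 180 - cut_deg (backward) are not sensed."""
    cut = math.radians(cut_deg)

    def integral(lo, hi):
        # midpoint rule for the sin-weighted scattering integral
        h = (hi - lo) / steps
        return sum(hg_phase(lo + (i + 0.5) * h, g) * math.sin(lo + (i + 0.5) * h)
                   for i in range(steps)) * h

    full = integral(0.0, math.pi)
    sensed = integral(cut, math.pi - cut)
    return 1.0 - sensed / full
```

A larger asymmetry parameter g gives a more forward-peaked phase function and hence a larger forward truncation error, consistent with the large-particle behavior discussed above.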
Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.
2018-04-01
The accurate positioning of optical satellite imagery without ground control is the precondition for remote sensing applications and small/medium scale mapping in large areas abroad or with large-scale images. In this paper, aiming at the geometric features of optical satellite imagery, and based on a widely used optimization method for constrained problems called the Alternating Direction Method of Multipliers (ADMM) together with RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for large-scale domestic high-resolution optical satellite imagery - GISIBA (GCP-Independent Satellite Imagery Block Adjustment) - which is easy to parallelize and highly efficient. In this method, virtual "average" control points are built to solve the rank-defect problem and to support qualitative and quantitative analysis in block adjustment without control. The test results prove that the horizontal and vertical accuracy of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaicking problem of adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of our developed procedure are presented and studied in this paper.
A Residual Approach for Balanced Truncation Model Reduction (BTMR) of Compartmental Systems
William La Cruz
2014-05-01
This paper presents a residual approach to the square-root balanced truncation algorithm for model order reduction of continuous, linear and time-invariant compartmental systems. Specifically, the new approach uses a residual method to approximate the controllability and observability gramians, whose resolution is an essential step of the square-root balanced truncation algorithm and otherwise requires a great computational cost. Numerical experiments are included to highlight the efficacy of the proposed approach.
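The square-root algorithm that the residual approach accelerates can be sketched for small dense systems. Here the gramians are obtained exactly via a Kronecker-product Lyapunov solve rather than the paper's residual approximation, so this is a baseline sketch, not the proposed method:

```python
import numpy as np

def lyap(A, Q):
    """Solve A X + X A^T + Q = 0 via a Kronecker-product linear system
    (adequate for the small compartmental models considered here)."""
    n = A.shape[0]
    K = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
    return np.linalg.solve(K, -Q.reshape(-1)).reshape(n, n)

def balanced_truncation(A, B, C, k):
    """Square-root balanced truncation: reduce stable (A, B, C) to order k
    and return the reduced matrices plus the Hankel singular values."""
    P = lyap(A, B @ B.T)                    # controllability gramian
    Q = lyap(A, C.T @ C)                    # observability gramian
    Lp, Lq = np.linalg.cholesky(P), np.linalg.cholesky(Q)
    U, s, Vt = np.linalg.svd(Lq.T @ Lp)     # s = Hankel singular values
    S = np.diag(s[:k] ** -0.5)
    T = Lp @ Vt[:k].T @ S                   # right projection
    W = Lq @ U[:, :k] @ S                   # left projection
    return W.T @ A @ T, W.T @ B, C @ T, s
```

The discarded Hankel singular values bound the H-infinity error of the reduced model by twice their sum, which is the property that makes balanced truncation attractive.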
Peng, Ao-Ping; Li, Zhi-Hui; Wu, Jun-Lin; Jiang, Xin-Yu
2016-12-01
Based on previous research on the Gas-Kinetic Unified Algorithm (GKUA) for flows from highly rarefied free-molecule regimes through transition to continuum, a new implicit scheme of the cell-centered finite volume method is presented for directly solving the unified Boltzmann model equation covering various flow regimes. In view of the difficulty of generating a high-quality single-block grid system for complex irregular bodies, a multi-block docking grid generation method is designed on the basis of data transmission between blocks, and the data structure is constructed for processing arbitrary connection relations between blocks with high efficiency and reliability. As a result, the gas-kinetic unified algorithm with the implicit scheme and multi-block docking grid has been established for the first time and used to solve reentry flow problems around multi-bodies covering all flow regimes, with the whole range of Knudsen numbers from 10 to 3.7E-6. The implicit and explicit schemes are applied to computing and analyzing the supersonic flows in near-continuum and continuum regimes around a circular cylinder, with careful comparison against each other. It is shown that the present algorithm and modelling possess much higher computational efficiency and faster convergence properties. The flow problems including two and three side-by-side cylinders are simulated from highly rarefied to near-continuum flow regimes, and the present computed results are found to be in good agreement with the related DSMC simulations and theoretical analysis solutions, which verifies the good accuracy and reliability of the present method. It is observed that the smaller the spacing of the multi-body, the greater the cylindrical throat obstruction, the more obviously asymmetrical the flow field of the single body, and the bigger the normal force coefficient. While in the near-continuum transitional flow regime of near-space flying surroundings, the spacing of the multi-body increases to six times of the diameter of the single
Truncated States Obtained by Iteration
Cardoso, W. B.; Almeida, N. G. de
2008-01-01
We introduce the concept of truncated states obtained via iterative processes (TSI) and study its statistical features, making an analogy with dynamical systems theory (DST). As a specific example, we have studied TSI for the doubling and the logistic functions, which are standard functions in studying chaos. TSI for both the doubling and logistic functions exhibit certain similar patterns when their statistical features are compared from the point of view of DST
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes, multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
A compression algorithm for medical images and a display with the decoding function
Gotoh, Toshiyuki; Nakagawa, Yukihiro; Shiohara, Morito; Yoshida, Masumi
1990-01-01
This paper describes an efficient image compression method for medical images and a high-speed display with a decoding function. In our method, an input image is divided into blocks, and either Discrete Cosine Transform coding (DCT) or Block Truncation Coding (BTC) is adaptively applied to each block to improve image quality. The display we developed receives the compressed data from the host computer and reconstructs images of good quality at high speed, using four decoding microprocessors on which our algorithm is implemented in pipeline. Experiments verified that our method and display are effective. (author)
Zero-truncated negative binomial - Erlang distribution
Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana
2017-11-01
The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are included. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by using maximum likelihood estimation. Finally, the proposed distribution is applied to real data, the number of methamphetamine in Bangkok, Thailand. Based on the results, it shows that the zero-truncated negative binomial-Erlang distribution provided a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative-binomial and zero-truncated Poisson-Lindley distributions for this data.
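The zero-truncation construction itself is generic: remove the probability mass at zero and renormalize. A sketch using a plain negative binomial as the base distribution (the paper's base is the negative binomial-Erlang mixture, which is not reproduced here):

```python
from math import comb

def nb_pmf(k, r, p):
    """Negative binomial pmf: P(K = k), the number of failures k
    before the r-th success, with success probability p."""
    return comb(k + r - 1, k) * (p ** r) * ((1 - p) ** k)

def zt_nb_pmf(k, r, p):
    """Zero-truncated version: renormalize after removing the mass at zero,
    P(K = k | K >= 1) = P(K = k) / (1 - P(K = 0))."""
    if k < 1:
        return 0.0
    return nb_pmf(k, r, p) / (1.0 - nb_pmf(0, r, p))
```

The same two-line renormalization applies to any base pmf, which is how the Poisson, generalized negative binomial and Poisson-Lindley competitors in the abstract are truncated as well.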
Dual scan CT image recovery from truncated projections
Sarkar, Shubhabrata; Wahi, Pankaj; Munshi, Prabhat
2017-12-01
There are computerized tomography (CT) scanners available commercially for imaging small objects and they are often categorized as mini-CT X-ray machines. One major limitation of these machines is their inability to scan large objects with good image quality because of the truncation of projection data. An algorithm is proposed in this work which enables such machines to scan large objects while maintaining the quality of the recovered image.
Laurent, C.; Chassery, J.M.; Peyrin, F.; Girerd, C.
1996-01-01
This paper deals with parallel implementations of reconstruction methods in 3D tomography. 3D tomography requires voluminous data and long computation times. Parallel computing, on MIMD computers, seems to be a good approach to manage this problem. In this study, we present the different steps of the parallelization on an abstract parallel computer. Depending on the method, we use two main approaches to parallelize the algorithms: the local approach and the global approach. Experimental results on MIMD computers are presented. Two 3D images reconstructed from realistic data are shown.
Manglos, S.H.
1992-01-01
Transverse image truncation can be a serious problem for human imaging using cone-beam transmission CT (CB-CT) implemented on a conventional rotating gamma camera. This paper presents a reconstruction method to reduce or eliminate the artifacts resulting from the truncation. The method uses a previously published transmission maximum likelihood EM algorithm, adapted to the cone-beam geometry. The reconstruction method is evaluated qualitatively using three human subjects of various dimensions and various degrees of truncation. (author)
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of
Statistical estimation for truncated exponential families
Akahira, Masafumi
2017-01-01
This book presents new findings on nonregular statistical estimation. Unlike other books on this topic, its major emphasis is on helping readers understand the meaning and implications of both regularity and irregularity through a certain family of distributions. In particular, it focuses on a truncated exponential family of distributions with a natural parameter and truncation parameter as a typical nonregular family. This focus includes the (truncated) Pareto distribution, which is widely used in various fields such as finance, physics, hydrology, geology, astronomy, and other disciplines. The family is essential in that it links both regular and nonregular distributions, as it becomes a regular exponential family if the truncation parameter is known. The emphasis is on presenting new results on the maximum likelihood estimation of a natural parameter or truncation parameter if one of them is a nuisance parameter. In order to obtain more information on the truncation, the Bayesian approach is also considere...
Classification With Truncated Distance Kernel.
Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas
2018-05-01
This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
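A truncated distance kernel of this kind is simple to state: a sketch, assuming the form K(x, y) = max(rho - ||x - y||_1, 0) with a user-chosen truncation parameter rho (the exact parameterization in the paper may differ):

```python
def tl1_kernel(x, y, rho):
    """Truncated l1-distance kernel: max(rho - ||x - y||_1, 0).
    Piecewise linear in x, and exactly zero beyond distance rho."""
    dist = sum(abs(a - b) for a, b in zip(x, y))
    return max(rho - dist, 0.0)
```

Because the kernel vanishes beyond distance rho and is piecewise linear inside, a kernel expansion built from it is linear within each subregion, which matches the locally linear classifier behavior described above; it can be dropped into a kernel method simply by replacing the kernel evaluation.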
Anas Altaleb
2017-03-01
The aim of this work is to synthesize 8*8 substitution boxes (S-boxes) for block ciphers. The confusion-creating potential of an S-box depends on its construction technique. In the first step, we have applied the algebraic action of the projective general linear group PGL(2, GF(2^8)) on the Galois field GF(2^8). In step 2 we have used the permutations of the symmetric group S256 to construct a new kind of S-box. To explain the proposed extension scheme, we have given an example and constructed one new S-box. The strength of the extended S-box is computed, and an insight is given into calculating the confusion-creating potency. To analyze the security of the S-box, some popular algebraic and statistical attacks are performed as well. The proposed S-box has been analyzed by the bit independence criterion, linear approximation probability test, nonlinearity test, strict avalanche criterion, differential approximation probability test, and majority logic criterion. A comparison of the proposed S-box with existing S-boxes shows that the analyses of the extended S-box are comparatively better.
Lamp with a truncated reflector cup
Li, Ming; Allen, Steven C.; Bazydola, Sarah; Ghiu, Camil-Daniel
2013-10-15
A lamp assembly, and method for making same. The lamp assembly includes first and second truncated reflector cups. The lamp assembly also includes at least one base plate disposed between the first and second truncated reflector cups, and a light engine disposed on a top surface of the at least one base plate. The light engine is configured to emit light to be reflected by one of the first and second truncated reflector cups.
Kyriacou, S.; Kontoleontos, E.; Weissenberger, S.; Mangani, L.; Casartelli, E.; Skouteropoulou, I.; Gattringer, M.; Gehrer, A.; Buchmayr, M.
2014-03-01
An efficient hydraulic optimization procedure, suitable for industrial use, requires an advanced optimization tool (EASY software), a fast solver (block coupled CFD) and a flexible geometry generation tool. EASY optimization software is a PCA-driven metamodel-assisted Evolutionary Algorithm (MAEA (PCA)) that can be used in both single- (SOO) and multiobjective optimization (MOO) problems. In MAEAs, low cost surrogate evaluation models are used to screen out non-promising individuals during the evolution and exclude them from the expensive, problem specific evaluation, here the solution of Navier-Stokes equations. For additional reduction of the optimization CPU cost, the PCA technique is used to identify dependences among the design variables and to exploit them in order to efficiently drive the application of the evolution operators. To further enhance the hydraulic optimization procedure, a very robust and fast Navier-Stokes solver has been developed. This incompressible CFD solver employs a pressure-based block-coupled approach, solving the governing equations simultaneously. This method, apart from being robust and fast, also provides a big gain in terms of computational cost. In order to optimize the geometry of hydraulic machines, an automatic geometry and mesh generation tool is necessary. The geometry generation tool used in this work is entirely based on b-spline curves and surfaces. In what follows, the components of the tool chain are outlined in some detail and the optimization results of hydraulic machine components are shown in order to demonstrate the performance of the presented optimization procedure.
Computing correct truncated excited state wavefunctions
Bacalis, N. C.; Xiong, Z.; Zang, J.; Karaoulanis, D.
2016-12-01
We demonstrate that, if a wave function's truncated expansion is small, then the standard excited states computational method, of optimizing one "root" of a secular equation, may lead to an incorrect wave function - despite the correct energy according to the theorem of Hylleraas, Undheim and McDonald - whereas our proposed method [J. Comput. Meth. Sci. Eng. 8, 277 (2008)] (independent of orthogonality to lower lying approximants) leads to correct reliable small truncated wave functions. The demonstration is done in He excited states, using truncated series expansions in Hylleraas coordinates, as well as standard configuration-interaction truncated expansions.
Perspective on rainbow-ladder truncation
Eichmann, G.; Alkofer, R.; Krassnigg, A.; Cloeet, I. C.; Roberts, C. D.
2008-01-01
Prima facie the systematic implementation of corrections to the rainbow-ladder truncation of QCD's Dyson-Schwinger equations will uniformly reduce in magnitude those calculated mass-dimensioned results for pseudoscalar and vector meson properties that are not tightly constrained by symmetries. The aim and interpretation of studies employing rainbow-ladder truncation are reconsidered in this light
Stability of Slopes Reinforced with Truncated Piles
Shu-Wei Sun
2016-01-01
Piles are extensively used as a means of slope stabilization. A novel engineering technique of truncated piles, which are unlike traditional piles, is introduced in this paper. A simplified numerical method is proposed to analyze the stability of slopes stabilized with truncated piles based on the shear strength reduction method. The influential factors, which include pile diameter, pile spacing, depth of truncation, and existence of a weak layer, are systematically investigated from a practical point of view. The results show that an optimum ratio exists between the depth of truncation and the pile length above a slip surface, below which truncating behavior has no influence on the piled slope stability. This optimum ratio is bigger for slopes stabilized with more flexible piles and piles with larger spacing. Besides, truncated piles are more suitable for slopes with a thin weak layer than for homogeneous slopes. In practical engineering, the piles could be truncated reasonably while ensuring the reinforcement effect. The truncated part of piles can be filled with the surrounding soil and compacted to reduce costs by using fewer materials.
Truncatable bootstrap equations in algebraic form and critical surface exponents
Gliozzi, Ferdinando [Dipartimento di Fisica, Università di Torino and Istituto Nazionale di Fisica Nucleare - sezione di Torino, Via P. Giuria 1, Torino, I-10125 (Italy)]
2016-10-10
We describe examples of drastic truncations of conformal bootstrap equations encoding much more information than that obtained by a direct numerical approach. A three-term truncation of the four-point function of a free scalar in any space dimension provides algebraic identities among conformal block derivatives which generate the exact spectrum of the infinitely many primary operators contributing to it. In boundary conformal field theories, we point out that the appearance of free parameters in the solutions of bootstrap equations is not an artifact of truncations; rather, it reflects a physical property of permeable conformal interfaces, which are described by the same equations. Surface transitions correspond to isolated points in the parameter space. We are able to locate them in the case of the 3d Ising model, thanks to a useful algebraic form of the 3d boundary bootstrap equations. It turns out that the low-lying spectra of the surface operators in the ordinary and the special transitions of the 3d Ising model form two different solutions of the same polynomial equation. Their interplay yields an estimate of the surface renormalization group exponents, y_h = 0.72558(18) for the ordinary universality class and y_h = 1.646(2) for the special universality class, which compare well with the most recent Monte Carlo calculations. Estimates of other surface exponents as well as OPE coefficients are also obtained.
Clustered survival data with left-truncation
Eriksson, Frank; Martinussen, Torben; Scheike, Thomas H.
2015-01-01
Left-truncation occurs frequently in survival studies, and it is well known how to deal with it for univariate survival times. However, there are few results on how to estimate dependence parameters and regression effects in semiparametric models for clustered survival data with delayed entry. Surprisingly, existing methods only deal with special cases. In this paper, we clarify different kinds of left-truncation and suggest estimators for semiparametric survival models under specific truncation schemes. The large-sample properties of the estimators are established. Small-sample properties ...
NLO renormalization in the Hamiltonian truncation
Elias-Miró, Joan; Rychkov, Slava; Vitale, Lorenzo G.
2017-09-01
Hamiltonian truncation (also known as the "truncated spectrum approach") is a numerical technique for solving strongly coupled quantum field theories, in which the full Hilbert space is truncated to a finite-dimensional low-energy subspace. The accuracy of the method is limited only by the available computational resources. The renormalization program improves the accuracy by carefully integrating out the high-energy states, instead of truncating them away. In this paper, we develop the most accurate variant of Hamiltonian truncation to date, which implements renormalization at cubic order in the interaction strength. The novel idea is to interpret the renormalization procedure as the result of integrating out exactly a certain class of high-energy "tail states." We demonstrate the power of the method with high-accuracy computations in the strongly coupled two-dimensional quartic scalar theory and benchmark it against other existing approaches. Our work will also be useful for the future goal of extending Hamiltonian truncation to higher spacetime dimensions.
Formal truncations of connected kernel equations
Dixon, R.M.
1977-01-01
The Connected Kernel Equations (CKE) of Alt, Grassberger and Sandhas (AGS); Kouri, Levin and Tobocman (KLT); and Bencze, Redish and Sloan (BRS) are compared against reaction theory criteria after formal channel space and/or operator truncations have been introduced. The Channel Coupling Class concept is used to study the structure of these CKEs. The related wave function formalisms of Sandhas; of L'Huillier, Redish and Tandy; and of Kouri, Krueger and Levin are also presented. New N-body connected kernel equations which are generalizations of the Lovelace three-body equations are derived. A method for systematically constructing fewer-body models from the N-body BRS and generalized Lovelace (GL) equations is developed. The formally truncated AGS, BRS, KLT and GL equations are analyzed by employing the criteria of reciprocity and two-cluster unitarity. Reciprocity considerations suggest that formal truncations of the BRS, KLT and GL equations can lead to reciprocity-violating results. This study suggests that atomic problems should employ three-cluster connected truncations and that two-cluster connected truncations should be a useful starting point for nuclear systems.
New Schemes for Positive Real Truncation
Kari Unneland
2007-07-01
Model reduction, based on balanced truncation, of stable and of positive real systems is considered. An overview of some existing techniques is given: Lyapunov balancing and stochastic balancing, which includes Riccati balancing. A novel scheme for positive real balanced truncation is then proposed, which combines the existing Lyapunov balancing and Riccati balancing. Using Riccati balancing, the solutions of two Riccati equations are needed to obtain positive real reduced-order systems. For the suggested method, only one Lyapunov equation and one Riccati equation are solved, which is less computationally demanding. Further, it is shown that in order to obtain positive real reduced-order systems, only one Riccati equation needs to be solved. Finally, this is used to obtain positive real frequency-weighted balanced truncation.
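The Lyapunov balancing the abstract refers to can be sketched compactly. Below is a minimal square-root balanced truncation for a stable system, not the paper's positive real scheme: it solves the two Gramian Lyapunov equations (here by Kronecker vectorization, practical only for small systems), balances via an SVD of the Cholesky factors, and truncates to order r. The toy system and all names are invented for illustration.

```python
import numpy as np

def lyap(A, Q):
    """Solve A*P + P*A^T + Q = 0 by Kronecker vectorization (small n only)."""
    n = A.shape[0]
    K = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
    return np.linalg.solve(K, -Q.reshape(-1)).reshape(n, n)

def balanced_truncation(A, B, C, r):
    """Reduce the stable system (A, B, C) to order r by Lyapunov balancing."""
    P = lyap(A, B @ B.T)                  # controllability Gramian
    Q = lyap(A.T, C.T @ C)                # observability Gramian
    Lp = np.linalg.cholesky(P)
    Lq = np.linalg.cholesky(Q)
    U, s, Vt = np.linalg.svd(Lq.T @ Lp)   # s = Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt[:r].T @ S                 # balancing projection matrices
    Ti = S @ U[:, :r].T @ Lq.T
    return Ti @ A @ T, Ti @ B, C @ T, s

# toy stable system: fast states (-10, -20) are nearly truncated away
A = np.diag([-1.0, -2.0, -10.0, -20.0])
B = np.ones((4, 1)); C = np.ones((1, 4))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, 2)
```

States with small Hankel singular values contribute little to the input-output map, which is why truncating them preserves the transfer function well.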
Christensen, Lars P.B.; Larsen, Jan
2006-01-01
A general Variational Bayesian framework for iterative data and parameter estimation for coherent detection is introduced as a generalization of the EM algorithm. Explicit solutions are given for MIMO channel estimation with a Gaussian prior and for noise covariance estimation with an inverse-Wishart prior. Simulation of a GSM-like system provides empirical proof that the VBEM algorithm is able to provide better performance than the EM algorithm. However, if the posterior distribution is highly peaked, the VBEM algorithm approaches the EM algorithm and the gain disappears. The potential gain is therefore ...
An iterative reconstruction from truncated projection data
Anon.
1985-01-01
Various methods have been proposed for tomographic reconstruction from truncated projection data. In this paper, a reconstruction method is discussed which consists of iterations of filtered back-projection, reprojection and some nonlinear processing. First, the method is constructed so that it converges to a fixed point. Then, to examine its effectiveness, comparisons are made by computer experiments with two existing reconstruction methods for truncated projection data: the method of extrapolation based on the smoothness assumption followed by filtered back-projection, and modified additive ART.
Stellar Disk Truncations: HI Density and Dynamics
Trujillo, Ignacio; Bakos, Judit
2010-06-01
Using The HI Nearby Galaxy Survey (THINGS) 21-cm observations of a sample of nearby (nearly face-on) galaxies, we explore whether the stellar disk truncation phenomenon produces any signature in the HI gas density and/or the gas dynamics. Recent cosmological simulations suggest that the origin of the break in the surface brightness distribution is the appearance of a warp at the truncation position. This warp should produce a flaring of the gas distribution, increasing the velocity dispersion of the HI component beyond the break. We do not find, however, any evidence of this increase in the gas velocity dispersion profile.
The effect of truncation on very small cardiac SPECT camera systems
Rohmer, Damien; Eisner, Robert L.; Gullberg, Grant T.
2006-01-01
Background: The limited transaxial field-of-view (FOV) of a very small cardiac SPECT camera system causes view-dependent truncation of the projection of structures exterior to, but near, the heart. Basic tomographic principles suggest that the reconstruction of non-attenuated truncated data gives a distortion-free image in the interior of the truncated region, but the DC term of the Fourier spectrum of the reconstructed image is incorrect, meaning that the intensity scale of the reconstruction is inaccurate. The purpose of this study was to characterize the reconstructed image artifacts from truncated data, and to quantify their effects on the measurement of tracer uptake in the myocardium. Particular attention was given to instances where the heart wall is close to hot structures (structures of high activity uptake). Methods: The MCAT phantom was used to simulate a 2D slice of the heart region. Truncated and non-truncated projections were formed both with and without attenuation. The reconstructions were analyzed for artifacts in the myocardium caused by truncation, and for the effect that attenuation has on increasing those artifacts. Results: The inaccuracy due to truncation is primarily caused by an incorrect DC component. For visualizing the left ventricular wall, this error is no worse than the effect of attenuation. The addition of a small hot bowel-like structure near the left ventricle causes few changes in counts on the wall. Larger artifacts due to the truncation are located at the boundary of the truncation and can be eliminated by sinogram interpolation. Finally, algebraic reconstruction methods are shown to give better reconstruction results than an analytical filtered back-projection reconstruction algorithm. Conclusion: Small inaccuracies in reconstructed images from small FOV camera systems should have little effect on clinical interpretation. However, changes in the degree of inaccuracy in counts from slice to slice are due to changes in
WANG ShunJin; ZHANG Hua
2007-01-01
Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
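For a linear system x' = Ax, an Nth-order truncation of the Taylor series of the exact solution reduces to the truncated matrix-exponential series, which gives a feel for the controllable precision the abstract mentions. A minimal sketch follows; the harmonic-oscillator test model, step size and order are invented for illustration, and the paper's algorithm handles general nonlinear systems.

```python
import numpy as np

def taylor_step(A, x, h, N):
    """One N-th order truncated Taylor step for x' = A x:
    x(t+h) ~ sum_{j=0}^{N} (hA)^j / j! x(t), built recursively."""
    term, out = x.copy(), x.copy()
    for j in range(1, N + 1):
        term = (h / j) * (A @ term)   # term = (hA)^j / j! x
        out += term
    return out

# harmonic oscillator: x'' = -x, i.e. [x, v]' = A [x, v]
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
x = np.array([1.0, 0.0])
h, steps = 0.1, 100
for _ in range(steps):
    x = taylor_step(A, x, h, N=8)
# after t = 10 the exact solution is [cos(10), -sin(10)]
```

Because the step is an order-N approximation of the exact flow exp(hA), raising N tightens both amplitude (dissipation) and phase errors at a chosen step size.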
Truncation in diffraction pattern analysis. Pt. 1
Delhez, R.; Keijser, T.H. de; Mittemeijer, E.J.; Langford, J.I.
1986-01-01
An evaluation of the concept of a line profile is provoked by truncation of the range of intensity measurement in practice. The measured truncated line profile can be considered either as part of the total intensity distribution which peaks at or near the reciprocal-lattice points (approach 1), or as part of a component line profile which is confined to a single reciprocal-lattice point (approach 2). Some false conceptions in line-profile analysis can then be avoided and recipes can be developed for the extrapolation of the tails of the truncated line profile. Fourier analysis of line profiles, according to the first approach, implies a Fourier series development of the total intensity distribution defined within [l - 1/2, l + 1/2] (l indicates the node considered in reciprocal space); the second approach implies a Fourier transformation of the component line profile defined within [ - ∞, + ∞]. Exact descriptions of size broadening are provided by both approaches, whereas combined size and strain broadening can only be evaluated adequately within the first approach. Straightforward methods are given for obtaining truncation-corrected values for the average crystallite size. (orig.)
Family Therapy for the "Truncated" Nuclear Family.
Zuk, Gerald H.
1980-01-01
The truncated nuclear family consists of a two-generation group in which conflict has produced a polarization of values. The single-parent family is at special risk. Go-between process enables the therapist to depolarize sharply conflicted values and reduce pathogenic relating. (Author)
Solution of the Stieltjes truncated matrix moment problem
Vadim M. Adamyan
2005-01-01
The truncated Stieltjes matrix moment problem, which consists in the description of all matrix distributions σ(t) on [0, ∞) with given first 2n+1 power moments (C_j), j = 0, …, 2n, is solved using known results on the corresponding Hamburger problem, for which σ(t) is defined on (−∞, ∞). The criterion of solvability of the Stieltjes problem is given, and all its solutions in the non-degenerate case are described by selecting the appropriate solutions among those of the Hamburger problem for the same set of moments. Results on extensions of non-negative operators are used, and a purely algebraic algorithm for the solution of both the Hamburger and Stieltjes problems is proposed.
On truncations of the exact renormalization group
Morris, T R
1994-01-01
We investigate the Exact Renormalization Group (ERG) description of (Z_2-invariant) one-component scalar field theory, in the approximation in which all momentum dependence is discarded in the effective vertices. In this context we show how one can perform a systematic search for non-perturbative continuum limits without making any assumption about the form of the lagrangian. Concentrating on the non-perturbative three-dimensional Wilson fixed point, we then show that the sequence of truncations n = 2, 3, …, obtained by expanding about the field φ = 0 and discarding all powers φ^(2n+2) and higher, yields solutions that at first converge to the answer obtained without truncation, but then cease to converge further beyond a certain point. No completely reliable method exists to reject the many spurious solutions that are also found. These properties are explained in terms of the analytic behaviour of the untruncated solutions, which we describe in some detail.
Truncated Wigner dynamics and conservation laws
Drummond, Peter D.; Opanchuk, Bogdan
2017-10-01
Ultracold Bose gases can be used to experimentally test many-body theory predictions. Here we point out that both exact conservation laws and dynamical invariants exist in the topical case of the one-dimensional Bose gas, and these provide an important validation of methods. We show that the first four quantum conservation laws are exactly conserved in the approximate truncated Wigner approach to many-body quantum dynamics. Center-of-mass position variance is also exactly calculable. This is nearly exact in the truncated Wigner approximation, apart from small terms that vanish as N^(-3/2) as N → ∞ with fixed momentum cutoff. Examples of this are calculated in experimentally relevant, mesoscopic cases.
No chiral truncation of quantum log gravity?
Andrade, Tomás; Marolf, Donald
2010-03-01
At the classical level, chiral gravity may be constructed as a consistent truncation of a larger theory called log gravity by requiring that left-moving charges vanish. In turn, log gravity is the limit of topologically massive gravity (TMG) at a special value of the coupling (the chiral point). We study the situation at the level of linearized quantum fields, focussing on a unitary quantization. While the TMG Hilbert space is continuous at the chiral point, the left-moving Virasoro generators become ill-defined and cannot be used to define a chiral truncation. In a sense, the left-moving asymptotic symmetries are spontaneously broken at the chiral point. In contrast, in a non-unitary quantization of TMG, both the Hilbert space and charges are continuous at the chiral point and define a unitary theory of chiral gravity at the linearized level.
Truncated Dual-Cap Nucleation Site Development
Matson, Douglas M.; Sander, Paul J.
2012-01-01
During heterogeneous nucleation within a metastable mushy zone, several geometries for nucleation site development must be considered. Traditional spherical dual-cap and crevice models are compared to a truncated dual cap to determine the activation energy and critical cluster growth kinetics in ternary Fe-Cr-Ni steel alloys. Activation energy results indicate that nucleation is more probable at grain boundaries within the solid than at the solid-liquid interface.
On the Truncated Pareto Distribution with applications
Zaninetti, Lorenzo; Ferraro, Mario
2008-01-01
The Pareto probability distribution is widely applied in different fields such as finance, physics, hydrology, geology and astronomy. This note deals with an application of the Pareto distribution to astrophysics, and more precisely to the statistical analysis of the masses of stars and the diameters of asteroids. In particular, a comparison between the usual Pareto distribution and its truncated version is presented. Finally, a possible physical mechanism that produces Pareto tails for the distribution ...
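The truncated Pareto variant compared in this note has a closed-form CDF, so it can be sampled by inverse transform. A small sketch, with arbitrary illustrative parameters (shape α = 1.5, support [1, 10]):

```python
import numpy as np

def truncated_pareto_sample(alpha, xmin, xmax, size, rng):
    """Inverse-CDF sampling from a Pareto density ~ x^{-(alpha+1)}
    truncated to [xmin, xmax]: F(x) = (xmin^-a - x^-a)/(xmin^-a - xmax^-a)."""
    u = rng.random(size)
    a, b = xmin ** -alpha, xmax ** -alpha
    return (a - u * (a - b)) ** (-1.0 / alpha)

rng = np.random.default_rng(0)
s = truncated_pareto_sample(alpha=1.5, xmin=1.0, xmax=10.0, size=100_000, rng=rng)
```

Unlike the untruncated Pareto with α < 2, the truncated version has all moments finite, which is precisely what makes it attractive for fitting bounded physical quantities such as asteroid diameters.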
Efficient Tridiagonal Preconditioner for the Matrix-Free Truncated Newton Method
Lukšan, Ladislav; Vlček, Jan
2014-01-01
Vol. 235, 25 May (2014), pp. 394-407. ISSN 0096-3003. R&D Projects: GA ČR GA13-06684S. Institutional support: RVO:67985807. Keywords: unconstrained optimization; large-scale optimization; matrix-free truncated Newton method; preconditioned conjugate gradient method; preconditioners obtained by directional differentiation; numerical algorithms. Subject RIV: BA - General Mathematics. Impact factor: 1.551, year: 2014.
N-terminally truncated POM121C inhibits HIV-1 replication.
Hideki Saito
Recent studies have identified host cell factors that regulate early stages of HIV-1 infection, including viral cDNA synthesis and orientation of the HIV-1 capsid (CA) core toward the nuclear envelope, but it remains unclear how viral DNA is imported through the nuclear pore and guided to the host chromosomal DNA. Here, we demonstrate that N-terminally truncated POM121C, a component of the nuclear pore complex, blocks HIV-1 infection. This truncated protein is predominantly localized in the cytoplasm, does not bind to CA, does not affect viral cDNA synthesis, reduces the formation of 2-LTR circles and diminishes the amount of integrated proviral DNA. Studies with an HIV-1-murine leukemia virus (MLV) chimeric virus carrying the MLV-derived Gag revealed that Gag is a determinant of this inhibition. Intriguingly, mutational studies revealed that the blockade by N-terminally truncated POM121C is closely linked to its binding to importin-β/karyopherin subunit beta 1 (KPNB1). These results indicate that N-terminally truncated POM121C inhibits HIV-1 infection after completion of reverse transcription and before integration, and suggest an important role for KPNB1 in HIV-1 replication.
Smith, Martin H.
1992-01-01
Describes an educational game called "Population Blocks" that is designed to illustrate the concept of exponential growth of the human population and some potential effects of overpopulation. The game material consists of wooden blocks: 18 blocks are painted green (representing land), 7 are painted blue (representing water), and the remaining…
A Novel SCCA Approach via Truncated ℓ1-norm and Truncated Group Lasso for Brain Imaging Genetics.
Du, Lei; Liu, Kefei; Zhang, Tuo; Yao, Xiaohui; Yan, Jingwen; Risacher, Shannon L; Han, Junwei; Guo, Lei; Saykin, Andrew J; Shen, Li
2017-09-18
Brain imaging genetics, which studies the linkage between genetic variations and structural or functional measures of the human brain, has become increasingly important in recent years. Discovering the bi-multivariate relationship between genetic markers such as single-nucleotide polymorphisms (SNPs) and neuroimaging quantitative traits (QTs) is one major task in imaging genetics. Sparse Canonical Correlation Analysis (SCCA) has been a popular technique in this area for its powerful capability in identifying bi-multivariate relationships coupled with feature selection. The existing SCCA methods impose either the ℓ1-norm or its variants to induce sparsity. The ℓ0-norm penalty is a perfect sparsity-inducing tool but leads to an NP-hard problem. In this paper, we propose the truncated ℓ1-norm penalized SCCA to improve the performance and effectiveness of ℓ1-norm based SCCA methods. Besides, we propose an efficient optimization algorithm to solve this novel SCCA problem. The proposed method is an adaptive shrinkage method via tuning τ; it can avoid time-intensive parameter tuning if given a reasonably small τ. Furthermore, we extend it to the truncated group lasso (TGL), and propose the TGL-SCCA model to improve group-lasso-based SCCA methods. The experimental results, compared with four benchmark methods, show that our SCCA methods identify better or similar correlation coefficients, and better canonical loading profiles than the competing methods. This demonstrates the effectiveness and efficiency of our methods in discovering interesting imaging genetic associations. The Matlab code and sample data are freely available at http://www.iu.edu/∼shenlab/tools/tlpscca/ .
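The truncated ℓ1 idea can be illustrated directly: the penalty min(|w|, τ) caps each coefficient's contribution at τ, so coefficients that grow beyond τ (likely true signals) incur no additional shrinkage, unlike the plain ℓ1-norm. A sketch with arbitrary values; this is not the authors' Matlab implementation:

```python
import numpy as np

def l1_penalty(w):
    """Plain l1: penalizes every coefficient in proportion to its size."""
    return np.sum(np.abs(w))

def truncated_l1_penalty(w, tau):
    """Truncated l1: each coefficient contributes at most tau, so large
    (likely relevant) weights are not penalized further."""
    return np.sum(np.minimum(np.abs(w), tau))

# two small (noise-like) and two large (signal-like) coefficients
w = np.array([0.05, -0.2, 3.0, -5.0])
```

With τ = 0.5 the large weights each contribute only 0.5 to the penalty, while under plain ℓ1 they dominate it, biasing the estimate; this is the reduced-bias property the abstract exploits.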
Kunenkov, Erast V; Kononikhin, Alexey S; Perminova, Irina V; Hertkorn, Norbert; Gaspar, Andras; Schmitt-Kopplin, Philippe; Popov, Igor A; Garmash, Andrew V; Nikolaev, Evgeniy N
2009-12-15
The ultrahigh-resolution Fourier transform ion cyclotron resonance (FTICR) mass spectrum of natural organic matter (NOM) contains several thousand peaks, with dozens of molecules matching the same nominal mass. Such complexity poses a significant challenge for automatic data interpretation, in which the most difficult task is molecular formula assignment, especially in the case of heavy and/or multielement ions. In this study, a new universal algorithm for automatic treatment of FTICR mass spectra of NOM and humic substances, based on total mass difference statistics (TMDS), has been developed and implemented. The algorithm enables a blind search for unknown building blocks (instead of a priori known ones) by revealing repetitive patterns present in spectra; in this respect, it differs from all previously developed approaches. This algorithm was implemented in FIRAN, software for fully automated analysis of mass data with high peak density. The specific feature of FIRAN is its ability to assign formulas to heavy and/or multielement molecules using a "virtual elements" approach. To verify the approach, it was used to process mass spectra of sodium polystyrene sulfonate (PSS, Mw = 2200 Da) and polymethacrylate (PMA, Mw = 3290 Da), which produce heavy multielement and multiply-charged ions. Application of TMDS unambiguously identified the monomers present in the polymers, consistent with their structure: C8H7SO3Na for PSS and C4H6O2 for PMA. It also allowed unambiguous formula assignment to all multiply-charged peaks, including the heaviest peak in the PMA spectrum at mass 4025.6625 with charge state 6- (mass bias -0.33 ppm). Application of the TMDS algorithm to data on the Suwannee River FA has proven its unique capacity for analysis of spectra with high peak density: it identified not only the known small building blocks in the structure of FA, such as CH2, H2, C2H2O and O, but also the heavier unit at 154.027 amu. The latter was
Fully 3D PET image reconstruction using a fourier preconditioned conjugate-gradient algorithm
Fessler, J.A.; Ficaro, E.P.
1996-01-01
Since the data sizes in fully 3D PET imaging are very large, iterative image reconstruction algorithms must converge in very few iterations to be useful. One can improve the convergence rate of the conjugate-gradient (CG) algorithm by incorporating preconditioning operators that approximate the inverse of the Hessian of the objective function. If the 3D cylindrical PET geometry were not truncated at the ends, then the Hessian of the penalized least-squares objective function would be approximately shift-invariant, i.e. G'G would be nearly block-circulant, where G is the system matrix. We propose a Fourier preconditioner based on this shift-invariant approximation to the Hessian. Results show that this preconditioner significantly accelerates the convergence of the CG algorithm with only a small increase in computation
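The idea of preconditioning CG with a circulant (Fourier-diagonalizable) approximation can be sketched in one dimension: since a circulant matrix is diagonalized by the DFT, the preconditioner solve costs only O(n log n) per iteration. The Toeplitz system and Strang-type circulant below are invented stand-ins for the paper's G'G:

```python
import numpy as np

def pcg_circulant(A, b, c_first_col, tol=1e-10, maxiter=200):
    """Preconditioned conjugate gradient where the preconditioner is the
    circulant matrix with first column c_first_col, applied via FFT."""
    lam = np.fft.fft(c_first_col)          # eigenvalues of the circulant
    def apply_Minv(r):                     # solve C z = r in O(n log n)
        return np.real(np.fft.ifft(np.fft.fft(r) / lam))
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    for _ in range(maxiter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x
        z_new = apply_Minv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# SPD tridiagonal Toeplitz system and its Strang circulant preconditioner
n = 64
first = np.r_[2.5, -1.0, np.zeros(n - 3), -1.0]          # circulant first column
A = 2.5 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # Toeplitz matrix
b = np.ones(n)
x = pcg_circulant(A, b, first)
```

The closer the circulant eigenvalues match the Hessian's spectrum, the fewer CG iterations are needed, which is the acceleration the abstract reports.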
Joint survival probability via truncated invariant copula
Kim, Jeong-Hoon; Ma, Yong-Ki; Park, Chan Yeol
2016-01-01
Highlights: • We study the dependence structure between default intensities. • We use a multivariate shot noise intensity process, where jumps occur simultaneously and their sizes are correlated. • We obtain the joint survival probability of the integrated intensities by using a copula. • We apply our theoretical result to pricing basket default swap spreads. - Abstract: Given an intensity-based credit risk model, this paper studies the dependence structure between default intensities. To model this structure, we use a multivariate shot noise intensity process, where jumps occur simultaneously and their sizes are correlated. Through very lengthy algebra, we obtain explicitly the joint survival probability of the integrated intensities by using the truncated invariant Farlie-Gumbel-Morgenstern copula with exponential marginal distributions. We also apply our theoretical result to pricing basket default swap spreads. This result can provide a useful guide for credit risk management.
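For intuition, the plain (untruncated) Farlie-Gumbel-Morgenstern copula with exponential marginals already gives a closed-form joint survival probability; a minimal sketch, which does not reproduce the paper's truncated invariant version or its integrated shot noise intensities:

```python
import numpy as np

def fgm_joint_survival(t1, t2, lam1, lam2, theta):
    """Joint survival P(T1 > t1, T2 > t2) under a Farlie-Gumbel-Morgenstern
    copula with exponential marginals; the dependence parameter satisfies
    |theta| <= 1, and theta = 0 recovers independence."""
    s1 = np.exp(-lam1 * t1)   # marginal survival functions
    s2 = np.exp(-lam2 * t2)
    return s1 * s2 * (1.0 + theta * (1.0 - s1) * (1.0 - s2))
```

Positive θ raises the joint survival above the independent product s1·s2, the qualitative effect that correlated jump sizes induce in the paper's intensity model.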
Shell model truncation schemes for rotational nuclei
Halse, P.; Jaqua, L.; Barrett, B.R.
1990-01-01
The suitability of the pair condensate approach for rotational states is studied in a single j = 17/2 shell of identical nucleons interacting through a quadrupole-quadrupole hamiltonian. The ground band and a K = 2 excited band are both studied in detail. A direct comparison of the exact states with those constituting the SD and SDG subspaces is used to identify the important degrees of freedom for these levels. The range of pairs necessary for a good description is found to be highly state dependent; S and D pairs are the major constituents of the low-spin ground band levels, while G pairs are needed for those in the γ-band. Energy spectra are obtained for each truncated subspace. SDG pairs allow accurate reproduction of the binding energy and K = 2 excitation energy, but still give a moment of inertia which is about 30% too small even for the lowest levels
Entanglement entropy from the truncated conformal space
T. Palmai
2016-08-01
A new numerical approach to entanglement entropies of the Rényi type is proposed for one-dimensional quantum field theories. The method extends the truncated conformal spectrum approach, and we demonstrate that it is especially suited to study the crossover from massless to massive behavior when the subsystem size is comparable to the correlation length. We apply it to different deformations of massless free fermions, corresponding to the scaling limit of the Ising model in transverse and longitudinal fields. For massive free fermions the exactly known crossover function is reproduced already at very small system sizes. The new method treats ground states and excited states on the same footing; its applicability to excited states is illustrated by reproducing Rényi entropies of low-lying states in the transverse field Ising model.
Lee, Ho; Fahimian, Benjamin P.; Xing, Lei
2017-03-01
This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method’s performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.
Analysis of truncation limit in probabilistic safety assessment
Cepin, Marko
2005-01-01
A truncation limit defines the boundary between what is considered in a probabilistic safety assessment and what is neglected. The truncation limit considered here is the minimal cut set contribution at which to cut off. A new method was developed which defines the truncation limit in probabilistic safety assessment. The method specifies truncation limits more stringently than existing documents dealing with truncation criteria in probabilistic safety assessment do. The results of this paper indicate that the truncation limits for more complex probabilistic safety assessments, which consist of a larger number of basic events, should be more severe than presently recommended in existing documents if more accuracy is desired. The truncation limits defined by the new method reduce the relative errors of importance measures and produce more accurate results for probabilistic safety assessment applications. The reduced relative errors of importance measures can prevent situations where the acceptability of a change to the equipment under investigation according to RG 1.174 would be shifted from the region where changes can be accepted to the region where they cannot, if the results were calculated with a smaller truncation limit.
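The role of a truncation limit can be sketched with the rare-event approximation: minimal cut sets whose probability falls below the limit are dropped, and the discarded mass bounds the resulting underestimate of the top-event probability. All event probabilities and cut sets below are invented for illustration:

```python
def top_event_probability(cut_sets, p, limit):
    """Rare-event approximation of the top-event probability, keeping only
    minimal cut sets whose probability meets the truncation limit.
    Returns (truncated estimate, discarded mass = an error bound)."""
    kept, dropped = 0.0, 0.0
    for cs in cut_sets:
        prob = 1.0
        for event in cs:          # cut set probability = product of its events
            prob *= p[event]
        if prob >= limit:
            kept += prob
        else:
            dropped += prob
    return kept, dropped

p = {"A": 1e-2, "B": 1e-3, "C": 1e-4}
cut_sets = [("A",), ("A", "B"), ("B", "C"), ("A", "B", "C")]
est, err = top_event_probability(cut_sets, p, limit=1e-6)
```

Tightening the limit shrinks `err` at the cost of evaluating more cut sets; the paper's point is that the limit adequate for the total probability may still be too coarse for importance measures.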
Massoullié, Grégoire; Bordachar, Pierre; Irles, Didier; Caussin, Christophe; Da Costa, Antoine; Defaye, Pascal; Jean, Frédéric; Mechulan, Alexis; Mondoly, Pierre; Souteyrand, Géraud; Pereira, Bruno; Ploux, Sylvain; Eschalier, Romain
2016-10-26
Percutaneous aortic valve replacement (transcatheter aortic valve implantation (TAVI)) notably increases the likelihood of the appearance of a complete left bundle branch block (LBBB) by direct lesion of the LBB of His. This block can lead to high-grade atrioventricular conduction disturbances responsible for a poorer prognosis. The management of this complication remains controversial. The screening of LBBB after TAVI persisting for more than 24 hours will be conducted by surface ECG. Stratification will be performed by post-TAVI intracardiac electrophysiological study. Patients at high risk of conduction disturbances (≥70 ms His-ventricle interval (HV) or presence of infra-Hisian block) will be implanted with a pacemaker enabling the recording of disturbance episodes. Those at lower risk (HV algorithm based on electrophysiological study and remote monitoring of CIEDs in the prediction of high-grade conduction disturbances in patients with LBBB after TAVI. The primary end point is to compare the incidence (rate and time to onset) of high-grade conduction disturbances in patients with LBBB after TAVI between the two groups at 12 months. Given the proportion of high-grade conduction disturbances (20-40%), a sample of 200 subjects will allow a margin of error of 6-7%. The LBBB-TAVI Study has been in an active recruiting phase since September 2015 (21 patients already included). Local ethics committee authorisation was obtained in May 2015. We will publish findings from this study in a peer-reviewed scientific journal and present results at national and international conferences. NCT02482844; Pre-results.
Application of a truncated normal failure distribution in reliability testing
Groves, C., Jr.
1968-01-01
The truncated normal distribution is applied as a time-to-failure distribution in equipment reliability estimation. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
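The truncated time-to-failure law described above can be sketched directly with the standard library. This is an illustrative left-truncated (at zero) normal with made-up parameter values, not the testing system of the paper:

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncated_normal_pdf(t, mu, sigma, a=0.0):
    """Density of a normal(mu, sigma) truncated to t >= a, so that
    negative failure times carry no probability mass."""
    if t < a:
        return 0.0
    z = (t - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
    return pdf / (1.0 - Phi((a - mu) / sigma))

def reliability(t, mu, sigma, a=0.0):
    """R(t) = P(T > t) under the truncated distribution."""
    if t <= a:
        return 1.0
    return (1.0 - Phi((t - mu) / sigma)) / (1.0 - Phi((a - mu) / sigma))

# Illustrative parameters: mean life 1000 h, sigma 300 h, truncated at 0 h.
print(reliability(1000.0, 1000.0, 300.0))  # ≈ 0.5002: truncation shifts a little mass above the mean
```

The renormalization by 1 − Φ((a − μ)/σ) is what gives the distribution its age-dependent character relative to the untruncated normal.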
Wigner distribution function of circularly truncated light beams
Bastiaans, M.J.; Nijhawan, O.P.; Gupta, A.K.; Musla, A.K.; Singh, Kehar
1998-01-01
Truncating a light beam is expressed as a convolution of its Wigner distribution function and the WDF of the truncating aperture. The WDF of a circular aperture is derived and an approximate expression - which is exact in the space and the spatial-frequency origin and whose integral over the spatial
Vortex breakdown in a truncated conical bioreactor
Balci, Adnan; Brøns, Morten [DTU Compute, Technical University of Denmark, DK-2800 Kgs. Lyngby (Denmark); Herrada, Miguel A [E.S.I, Universidad de Sevilla, Camino de los Descubrimientos s/n, E-41092 (Spain); Shtern, Vladimir N, E-mail: mobr@dtu.dk [Shtern Research and Consulting, Houston, TX 77096 (United States)
2015-12-15
This numerical study explains the eddy formation and disappearance in a slow steady axisymmetric air–water flow in a vertical truncated conical container, driven by the rotating top disk. Numerous topological metamorphoses occur as the water height, H{sub w}, and the bottom-sidewall angle, α, vary. It is found that the sidewall convergence (divergence) from the top to the bottom stimulates (suppresses) the development of vortex breakdown (VB) in both water and air. At α = 60°, the flow topology changes eighteen times as H{sub w} varies. The changes are due to (a) competing effects of AMF (the air meridional flow) and swirl, which drive meridional motions of opposite directions in water, and (b) feedback of water flow on AMF. For small H{sub w}, the AMF effect dominates. As H{sub w} increases, the swirl effect dominates and causes VB. The water flow feedback produces and modifies air eddies. The results are of fundamental interest and can be relevant for aerial bioreactors. (paper)
Bezak, A.
1987-01-01
A diagram is given of a detection block used for monitoring burnup of nuclear reactor fuel. A shielding block is an important part of the detection block. It stabilizes the fuel assembly in the fixing hole in front of a collimator where a suitable gamma beam is defined for gamma spectrometry determination of fuel burnup. The detector case and a neutron source case are placed on opposite sides of the fixing hole. For neutron measurement for which the water in the tank is used as a moderator, the neutron detector-fuel assembly configuration is selected such that neutrons from spontaneous fission and neutrons induced with the neutron source can both be measured. The patented design of the detection block permits longitudinal travel and rotation of the fuel assembly to any position, and thus more reliable determination of nuclear fuel burnup.
A Method for Improving the Progressive Image Coding Algorithms
Ovidiu COSMA
2014-12-01
This article presents a method for increasing the performance of progressive coding algorithms for image subbands by representing the coefficients with a code that reduces the truncation error.
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real...... variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n2 variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
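The n(n+1)/2 packed storage mentioned in the abstract can be illustrated with plain column-packed lower-triangular indexing. This sketch shows only the index arithmetic, not the authors' cache-friendly block hybrid format:

```python
def packed_index(i, j, n):
    """Map entry (i, j), i >= j, of an n x n lower-triangular matrix packed
    by columns into a flat array of length n*(n+1)/2."""
    assert 0 <= j <= i < n
    # Column j starts after columns 0..j-1, which hold sum_{k<j} (n - k) entries.
    col_start = j * n - j * (j - 1) // 2
    return col_start + (i - j)

n = 4
# Pack a lower-triangular matrix and read it back column by column.
L = [[i * 10 + j if i >= j else 0 for j in range(n)] for i in range(n)]
packed = [0] * (n * (n + 1) // 2)
for j in range(n):
    for i in range(j, n):
        packed[packed_index(i, j, n)] = L[i][j]
print(packed)  # column-major: [0, 10, 20, 30, 11, 21, 31, 22, 32, 33]
```

The payoff stated in the abstract is that the packed layout halves storage versus the full n² array while, in the block hybrid variant, retaining cache-friendly access.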
Tailor-made dimensions of diblock copolymer truncated micelles on a solid by UV irradiation.
Liou, Jiun-You; Sun, Ya-Sen
2015-09-28
We investigated the structural evolution of truncated micelles in ultrathin films of polystyrene-block-poly(2-vinylpyridine), PS-b-P2VP, of monolayer thickness on bare silicon substrates (SiOx/Si) upon UV irradiation in air- (UVIA) and nitrogen-rich (UVIN) environments. The structural evolution of micelles upon UV irradiation was monitored using GISAXS measurements in situ, while the surface morphology was probed using atomic force microscopy ex situ and the chemical composition using X-ray photoelectron spectroscopy (XPS). This work provides clear evidence for the interpretation of the relationship between the structural evolution and photochemical reactions in PS-b-P2VP truncated micelles upon UVIA and UVIN. Under UVIA treatment, photolysis and cross-linking reactions coexisted within the micelles; photolysis occurred mainly at the top of the micelles, whereas cross-linking occurred preferentially at the bottom. The shape and size of UVIA-treated truncated micelles were controlled predominantly by oxidative photolysis reactions, which depended on the concentration gradient of free radicals and oxygen along the micelle height. Because of an interplay between photolysis and photo-crosslinking, the scattering length densities (SLD) of PS and P2VP remained constant. In contrast, UVIN treatments enhanced the contrast in SLD between the PS shell and the P2VP core as cross-linking dominated over photolysis in the presence of nitrogen. The enhancement of the SLD contrast was due to the various degrees of cross-linking under UVIN for the PS and P2VP blocks.
Evolution of truncated moments of singlet parton distributions
Forte, S.; Magnea, L.; Piccione, A.; Ridolfi, G.
2001-01-01
We define truncated Mellin moments of parton distributions by restricting the integration range over the Bjorken variable to the experimentally accessible subset x_0 ≤ x ≤ 1 of the allowed kinematic range 0 ≤ x ≤ 1. We derive the evolution equations satisfied by truncated moments in the general (singlet) case in terms of an infinite triangular matrix of anomalous dimensions which couple each truncated moment to all higher moments with orders differing by integers. We show that the evolution of any moment can be determined to arbitrarily good accuracy by truncating the system of coupled moments to a sufficiently large but finite size, and show how the equations can be solved in a way suitable for numerical applications. We discuss in detail the accuracy of the method in view of applications to precision phenomenology.
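A truncated Mellin moment as defined here is just the moment integral restricted to x_0 ≤ x ≤ 1. A small numerical sketch with a toy test density (not a real parton distribution):

```python
def truncated_moment(q, n, x0, steps=100000):
    """Truncated Mellin moment: integral of x**(n-1) * q(x) over [x0, 1],
    evaluated with the midpoint rule."""
    h = (1.0 - x0) / steps
    return sum((x0 + (k + 0.5) * h) ** (n - 1) * q(x0 + (k + 0.5) * h) * h
               for k in range(steps))

# Toy valence-like shape (illustrative only, not a fitted distribution).
q = lambda x: x ** -0.5 * (1.0 - x) ** 3
print(truncated_moment(q, 2, 0.01))  # second moment restricted to x >= 0.01

# Sanity check against a closed form: for q(x) = 1 the truncated n-th
# moment is (1 - x0**n) / n.
print(truncated_moment(lambda x: 1.0, 2, 0.1))  # ≈ 0.495
```

In the paper the evolution of such moments couples each one to higher moments; the snippet above only fixes the definition being evolved.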
Flexible scheme to truncate the hierarchy of pure states.
Zhang, P-P; Bentley, C D B; Eisfeld, A
2018-04-07
The hierarchy of pure states (HOPS) is a wavefunction-based method that can be used for numerically modeling open quantum systems. Formally, HOPS recovers the exact system dynamics for an infinite depth of the hierarchy. However, truncation of the hierarchy is required to numerically implement HOPS. We want to choose a "good" truncation method, where by "good" we mean that it is numerically feasible to check convergence of the results. For the truncation approximation used in previous applications of HOPS, convergence checks are numerically challenging. In this work, we demonstrate the application of the "n-particle approximation" to HOPS. We also introduce a new approximation, which we call the "n-mode approximation." We then explore the convergence of these truncation approximations with respect to the number of equations required in the hierarchy in two exemplary problems: absorption and energy transfer of molecular aggregates.
Measuring a Truncated Disk in Aquila X-1
King, Ashley L.; Tomsick, John A.; Miller, Jon M.; Chenevez, Jerome; Barret, Didier; Boggs, Steven E.; Chakrabarty, Deepto; Christensen, Finn E.; Craig, William W.; Feurst, Felix;
2016-01-01
We present NuSTAR and Swift observations of the neutron star Aquila X-1 during the peak of its 2014 July outburst. The spectrum is soft with strong evidence for a broad Fe Kα line. Modeled with a relativistically broadened reflection model, we find that the inner disk is truncated with an inner radius of 15 ± 3 R_G. The disk is likely truncated by either the boundary layer and/or a magnetic field. Associating the truncated inner disk with pressure from a magnetic field gives an upper limit of B < 5 ± 2 × 10^8 G. Although the radius is truncated far from the stellar surface, material is still reaching the neutron star surface, as evidenced by the X-ray burst present in the NuSTAR observation.
Squeezing in multi-mode nonlinear optical state truncation
Said, R.S.; Wahiddin, M.R.B.; Umarov, B.A.
2007-01-01
In this Letter, we show that multi-mode qubit states produced via nonlinear optical state truncation driven by classical external pumpings exhibit a squeezing condition. We restrict our discussion to the two- and three-mode cases.
Investigation of propagation dynamics of truncated vector vortex beams.
Srinivas, P; Perumangatt, C; Lal, Nijil; Singh, R P; Srinivasan, B
2018-06-01
In this Letter, we experimentally investigate the propagation dynamics of truncated vector vortex beams generated using a Sagnac interferometer. Upon focusing, the truncated vector vortex beam is found to regain its original intensity structure within the Rayleigh range. In order to explain such behavior, the propagation dynamics of a truncated vector vortex beam is simulated by decomposing it into the sum of integral charge beams with associated complex weights. We also show that the polarization of the truncated composite vector vortex beam is preserved all along the propagation axis. The experimental observations are consistent with theoretical predictions based on previous literature and are in good agreement with our simulation results. The results hold importance as vector vortex modes are eigenmodes of the optical fiber.
Truncated Newton-Raphson Methods for Quasicontinuum Simulations
Liang, Yu; Kanapady, Ramdev; Chung, Peter W
2006-01-01
.... In this research, we report the effectiveness of the truncated Newton-Raphson method and quasi-Newton method with low-rank Hessian update strategy that are evaluated against the full Newton-Raphson...
Truncation Depth Rule-of-Thumb for Convolutional Codes
Moision, Bruce
2009-01-01
In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
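The rule stated above is directly computable: the truncation depth is 2.5 m/(1 − r), which reduces to the classical "five times the memory" rule at rate 1/2. A small sketch:

```python
def truncation_depth(m, r):
    """Suggested truncation depth for a rate-r convolutional code with
    memory length m, per the rule quoted in the abstract: 2.5*m/(1 - r)."""
    if not 0.0 < r < 1.0:
        raise ValueError("code rate must lie in (0, 1)")
    return 2.5 * m / (1.0 - r)

m = 6  # e.g. a memory-6 code
print(truncation_depth(m, 1 / 2))  # 30.0 -> recovers the classical 5*m rule
print(truncation_depth(m, 3 / 4))  # 60.0 -> higher-rate codes need deeper truncation
```

The divergence of the depth as r → 1 matches the abstract's point that the flat 5m rule badly underestimates the required depth for high-rate codes.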
On the propagation of truncated localized waves in dispersive silica
Salem, Mohamed; Bagci, Hakan
2010-01-01
Propagation characteristics of truncated Localized Waves propagating in dispersive silica and free space are numerically analyzed. It is shown that those characteristics are affected by the changes in the relation between the transverse spatial
Enhancing propagation characteristics of truncated localized waves in silica
Salem, Mohamed
2011-07-01
The spectral characteristics of truncated Localized Waves propagating in dispersive silica are analyzed. Numerical experiments show that the immunity of the truncated Localized Waves propagating in dispersive silica to decay and distortion is enhanced as the non-linearity of the relation between the transverse spatial spectral components and the wave vector gets stronger, in contrast to free-space propagating waves, which suffer from early decay and distortion. © 2011 IEEE.
Mannila, H; Koivisto, M; Perola, M; Varilo, T; Hennah, W; Ekelund, J; Lukk, M; Peltonen, L; Ukkonen, E
2003-07-01
We describe a new probabilistic method for finding haplotype blocks that is based on the use of the minimum description length (MDL) principle. We give a rigorous definition of the quality of a segmentation of a genomic region into blocks and describe a dynamic programming algorithm for finding the optimal segmentation with respect to this measure. We also describe a method for finding the probability of a block boundary for each pair of adjacent markers: this gives a tool for evaluating the significance of each block boundary. We have applied the method to the published data of Daly and colleagues. The results expose some problems that exist in the current methods for the evaluation of the significance of predicted block boundaries. Our method, MDL block finder, can be used to compare block borders in different sample sets, and we demonstrate this by applying the MDL-based method to define the block structure in chromosomes from population isolates.
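The segmentation step described above is a standard interval dynamic program. The sketch below uses a toy per-block cost as a stand-in for the MDL score of the paper; only the DP structure is the point:

```python
def optimal_segmentation(n, cost):
    """Segment markers 0..n-1 into contiguous blocks, minimizing the sum of
    per-block costs. cost(i, j) scores the block covering markers i..j.
    best[k] holds the minimal total cost of segmenting the first k markers."""
    INF = float("inf")
    best = [0.0] + [INF] * n
    back = [0] * (n + 1)
    for k in range(1, n + 1):
        for i in range(k):
            c = best[i] + cost(i, k - 1)
            if c < best[k]:
                best[k], back[k] = c, i
    # Recover block boundaries by walking the back-pointers.
    blocks, k = [], n
    while k > 0:
        blocks.append((back[k], k - 1))
        k = back[k]
    return best[n], blocks[::-1]

# Toy stand-in cost: a fixed per-block penalty plus the spread of marker
# values, so runs of similar values are grouped (NOT the paper's MDL score).
values = [1, 1, 1, 9, 9, 2, 2, 2]
cost = lambda i, j: 1.0 + (max(values[i:j + 1]) - min(values[i:j + 1]))
total, blocks = optimal_segmentation(len(values), cost)
print(total, blocks)  # 3.0 [(0, 2), (3, 4), (5, 7)]
```

With an MDL score in place of the toy cost, the same O(n²) recursion returns the optimal block structure the abstract describes.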
The Stars and Gas in Outer Parts of Galaxy Disks : Extended or Truncated, Flat or Warped?
van der Kruit, P. C.; Funes, JG; Corsini, EM
2008-01-01
I review observations of truncations of stellar disks and models for their origin, compare observations of truncations in moderately inclined galaxies to those in edge-on systems and discuss the relation between truncations and H I-warps and their systematics and origin. Truncations are a common
Truncated predictor feedback for time-delay systems
Zhou, Bin
2014-01-01
This book provides a systematic approach to the design of predictor-based controllers for (time-varying) linear systems with either (time-varying) input or state delays. Unlike traditional predictor-based controllers, which are infinite-dimensional static feedback laws and may cause difficulties in their practical implementation, this book develops a truncated predictor feedback (TPF) which involves only finite-dimensional static state feedback. Features and topics: A novel approach referred to as truncated predictor feedback for the stabilization of (time-varying) time-delay systems in both the continuous-time setting and the discrete-time setting is built systematically; semi-global and global stabilization problems of linear time-delay systems subject to either magnitude saturation or energy constraints are solved in a systematic manner; both stabilization of a single system and consensus of a group of systems (multi-agent systems) are treated in a unified manner by applying the truncated pre...
Probability distributions with truncated, log and bivariate extensions
Thomopoulos, Nick T
2018-01-01
This volume presents a concise and practical overview of statistical methods and tables not readily available in other publications. It begins with a review of the commonly used continuous and discrete probability distributions. Several useful distributions that are not so common and less understood are described with examples and applications in full detail: discrete normal, left-partial, right-partial, left-truncated normal, right-truncated normal, lognormal, bivariate normal, and bivariate lognormal. Table values are provided with examples that enable researchers to easily apply the distributions to real applications and sample data. The left- and right-truncated normal distributions offer a wide variety of shapes in contrast to the symmetrically shaped normal distribution, and a newly developed spread ratio enables analysts to determine which of the three distributions best fits a particular set of sample data. The book will be highly useful to anyone who does statistical and probability analysis. This in...
Riesz Representation Theorem on Bilinear Spaces of Truncated Laurent Series
Sabarinsyah
2017-06-01
In this study a generalization of the Riesz representation theorem on non-degenerate bilinear spaces, particularly on spaces of truncated Laurent series, was developed. It was shown that any linear functional on a non-degenerate bilinear space is representable by a unique element of the space if and only if its kernel is closed. Moreover, an explicit equivalent condition can be identified for the closedness property of the kernel when the bilinear space is a space of truncated Laurent series.
Frequency interval balanced truncation of discrete-time bilinear systems
Jazlan, Ahmad; Sreeram, Victor; Shaker, Hamid Reza
2016-01-01
This paper presents the development of a new model reduction method for discrete-time bilinear systems based on the balanced truncation framework. In many model reduction applications, it is advantageous to analyze the characteristics of the system with emphasis on particular frequency intervals...... are the solution to a pair of new generalized Lyapunov equations. The conditions for solvability of these new generalized Lyapunov equations are derived and a numerical solution method for solving these generalized Lyapunov equations is presented. Numerical examples which illustrate the usage of the new...... generalized frequency interval controllability and observability gramians as part of the balanced truncation framework are provided to demonstrate the performance of the proposed method....
Schlink, Uwe; Ragas, Ad M.J.
2011-01-01
Receptor-oriented approaches can assess individual-specific exposure to air pollution. In such an individual-based model, we analyse the impact of human mobility on the personal exposure perceived by individuals simulated in an exemplified urban area. The mobility models comprise random walk (reference point mobility, RPM), truncated Levy flights (TLF), and agenda-based walk (RPMA). We describe and review the general concepts and provide an inter-comparison of them. Stationary and ergodic behaviour are explained and applied, as are performance criteria for a comparative evaluation of the investigated algorithms. We find that none of the studied algorithms results in purely random trajectories. TLF and RPMA prove to be suitable for human mobility modelling because they provide conditions for highly individual-specific trajectories and exposure. Using these models, we demonstrate the plausibility of their results for exposure to air-borne benzene and the combined exposure to benzene and nonane.
Generation of truncated recombinant form of tumor necrosis factor ...
Generation of truncated recombinant form of tumor necrosis factor ... as 6×His tagged using E.coli BL21 (DE3) expression system. The protein was ... proapoptotic signaling cascade through TNFR1 [5] which is ...
Scavenger receptor AI/II truncation, lung function and COPD
Thomsen, M; Nordestgaard, B G; Tybjaerg-Hansen, A
2011-01-01
The scavenger receptor A-I/II (SRA-I/II) on alveolar macrophages is involved in recognition and clearance of modified lipids and inhaled particulates. A rare variant of the SRA-I/II gene, Arg293X, truncates the distal collagen-like domain, which is essential for ligand recognition. We tested whet...
Maximum nondiffracting propagation distance of aperture-truncated Airy beams
Chu, Xingchun; Zhao, Shanghong; Fang, Yingwu
2018-05-01
Airy beams have attracted the attention of many researchers due to their non-diffracting, self-healing, and transverse accelerating properties. A key issue in research on Airy beams and their applications is how to evaluate their nondiffracting propagation distance. In this paper, the critical transverse extent of physically realizable Airy beams is analyzed under the local spatial frequency methodology. The maximum nondiffracting propagation distance of aperture-truncated Airy beams is formulated and analyzed based on their local spatial frequency. The validity of the formula is verified by comparing the maximum nondiffracting propagation distances of an aperture-truncated ideal Airy beam, an aperture-truncated exponentially decaying Airy beam, and an exponentially decaying Airy beam. Results show that the formula accurately evaluates the maximum nondiffracting propagation distance of an aperture-truncated ideal Airy beam. It can therefore guide the selection of appropriate parameters to generate Airy beams with long nondiffracting propagation distances, which have potential applications in laser weapons and optical communications.
Multiple-scattering theory with a truncated basis set
Zhang, X.; Butler, W.H.
1992-01-01
Multiple-scattering theory (MST) is an extremely efficient technique for calculating the electronic structure of an assembly of atoms. The wave function in MST is expanded in terms of spherical waves centered on each atom and indexed by their orbital and azimuthal quantum numbers, l and m. The secular equation which determines the characteristic energies can be truncated at a value of the orbital angular momentum l_max for which the higher angular momentum phase shifts, δ_l (l > l_max), are sufficiently small. Generally, the wave-function coefficients which are calculated from the secular equation are also truncated at l_max. Here we point out that this truncation of the wave function is not necessary and is in fact inconsistent with the truncation of the secular equation. A consistent procedure is described in which the states with higher orbital angular momenta are retained but with their phase shifts set to zero. We show that this treatment gives smooth, continuous, and correctly normalized wave functions and that the total charge density calculated from the corresponding Green function agrees with the Lloyd formula result. We also show that this augmented wave function can be written as a linear combination of Andersen's muffin-tin orbitals in the case of muffin-tin potentials, and can be used to generalize the muffin-tin orbital idea to full-cell potentials.
Analytic Method for Pressure Recovery in Truncated Diffusers ...
A prediction method is presented for the static pressure recovery in subsonic axisymmetric truncated conical diffusers. In the analysis, a turbulent boundary layer is assumed at the diffuser inlet and a potential core exists throughout the flow. When flow separation occurs, this approach cannot be used to predict the maximum ...
Modifications of Geometric Truncation of the Scattering Phase Function
Radkevich, A.
2017-12-01
The phase function (PF) of light scattering on large atmospheric particles has a very strong peak in the forward direction, which poses a challenge for accurate numerical calculation of radiance. Such accurate (and fast) evaluations are important in problems of remote sensing of the atmosphere. A scaling transformation replaces the original PF with a sum of a delta function and a new regular, smooth PF. A number of methods to construct such a PF have been suggested. The delta-M and delta-fit methods require evaluation of the PF moments, which poses a numerical problem if a strongly anisotropic PF is given as a function of angle. Geometric truncation keeps the original PF unchanged outside the forward peak cone, replacing it with a constant within the cone. This approach is designed to preserve the asymmetry parameter. It has two disadvantages: 1) the PF has a discontinuity at the cone; 2) the choice of the cone is subjective, and no recommendations have been provided on the choice of the truncation angle. This choice affects both the truncation fraction and the value of the phase function within the forward cone. Both issues are addressed in this study. A simple functional form of the replacement PF is suggested. This functional form allows for a number of modifications; this study considers three versions that provide a continuous PF. The considered modifications also bear one of three properties: they preserve the asymmetry parameter, provide continuity of the first derivative of the PF, or preserve the mean scattering angle. The second problem mentioned above is addressed with a heuristic approach providing an unambiguous criterion for selection of the truncation angle. The approach showed good performance on liquid water and ice clouds with different particle size distributions. The suggested modifications were tested on different cloud PFs using both discrete-ordinates and Monte Carlo methods. It was shown that the modifications provide better accuracy in the radiance computation compared to the original geometric truncation.
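The truncation-fraction bookkeeping described above can be sketched numerically. This uses a Henyey-Greenstein phase function as a stand-in and a continuous variant that replaces the PF inside the cone by its cone-edge value; it is one plausible bookkeeping, not the paper's exact scheme:

```python
import math

def hg(mu, g):
    """Henyey-Greenstein phase function of mu = cos(scattering angle),
    normalized so that (1/2) * integral over mu in [-1, 1] equals 1."""
    return (1.0 - g * g) / (1.0 + g * g - 2.0 * g * mu) ** 1.5

def truncation_fraction(g, theta_t, steps=20000):
    """Fraction of scattered energy moved into the forward delta peak when
    the PF inside the cone theta < theta_t is replaced by its edge value."""
    mu_t = math.cos(theta_t)
    p_edge = hg(mu_t, g)
    h = (1.0 - mu_t) / steps
    # Midpoint rule for (1/2) * integral of (p - p_edge) over [mu_t, 1].
    f = 0.5 * sum((hg(mu_t + (k + 0.5) * h, g) - p_edge) * h for k in range(steps))
    return f, p_edge

f, p_edge = truncation_fraction(0.85, math.radians(10.0))
print(f, p_edge)  # truncated fraction and the constant used inside the cone
```

Replacing the peak by the edge value keeps the PF continuous at the cone, which is the first of the two shortcomings of plain geometric truncation that the abstract lists.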
Design Optimization for a Truncated Catenary Mooring System for Scale Model Test
Climent Molins
2015-11-01
One of the main aspects of testing floating offshore platforms is the scaled mooring system, particularly at the increased depths for which such platforms are intended. The paper proposes the use of truncated mooring systems that emulate the real mooring system by solving an optimization problem. This approach is an interesting option when the available testing facilities do not have enough space. As part of the development of Windcrete, a new concrete spar platform for Floating Offshore Wind Turbines (FOWTs), a station-keeping system with catenary-shaped lines was selected. The test facility available for the planned experiments had an important width constraint, so an algorithm was developed to optimize the design of the scaled truncated mooring system using lines of different weights. The optimization adjusts the quasi-static behavior of the scaled mooring system to match the real mooring system as closely as possible within its expected maximum displacement range, where the catenary line provides the restoring forces through its suspended line length.
Direct block scheduling technology: Analysis of Avidity
Felipe Ribeiro Souza
This study focuses on testing the Direct Block Scheduling (Direct Multi-Period Scheduling) methodology, which schedules mine production considering the correct discount factor of each mining block, resulting in the final pit. Each block is analyzed individually to define the best target period. This methodology improves on the classical methodology derived from Lerchs-Grossmann's initial proposition as improved by Whittle. This paper presents the differences between these methodologies, with particular focus on the algorithms' avidity. Avidity is classically defined by voracious (greedy) search algorithms; some of the most famous greedy algorithms are Branch and Bound, Brute Force, and Randomized. Strategies based on heuristics can accentuate the voracity of the optimizer. The applied algorithm uses simulated annealing combined with Tabu Search. The most avid algorithm can select the most profitable blocks in early periods, leading to a higher present value in the first periods of mine operation. The application of discount factors to blocks on Lerchs-Grossmann's final pit has an effect that is accentuated with time, and this effect may make blocks scheduled for the end of the mine life unfeasible, representing a trend toward a decrease in reported reserves.
Mixed Platoon Flow Dispersion Model Based on Speed-Truncated Gaussian Mixture Distribution
Weitiao Wu
2013-01-01
A mixed traffic flow feature is present on urban arterials in China due to the large number of buses. Based on field data, a macroscopic mixed platoon flow dispersion model (MPFDM) was proposed to simulate the platoon dispersion process along the road section between two adjacent intersections from the flow perspective. To match field observations more closely, a truncated Gaussian mixture distribution was adopted as the speed density distribution for mixed platoons. The expectation-maximization (EM) algorithm was used for parameter estimation. The relationship between the arriving flow distribution at the downstream intersection and the departing flow distribution at the upstream intersection was investigated using the proposed model. A comparison using virtual flow data was performed between the Robertson model and the MPFDM. The results confirmed the validity of the proposed model.
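The EM estimation step described above can be sketched for a plain (untruncated) two-component 1-D Gaussian mixture; the speed-truncated variant used in the paper additionally renormalizes each component over the truncation interval, which is omitted here. All data and parameter values below are synthetic illustrations, not field values.

```python
import numpy as np

def em_gmm_1d(x, n_iter=200, seed=0):
    """Fit a two-component 1-D Gaussian mixture by EM (plain, not truncated)."""
    rng = np.random.default_rng(seed)
    w = np.array([0.5, 0.5])                   # mixing weights
    mu = rng.choice(x, size=2, replace=False)  # initial means drawn from data
    var = np.array([x.var(), x.var()])         # initial variances
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample
        dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from responsibilities
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# synthetic two-population speed sample (km/h), e.g. buses vs. cars
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(30, 4, 600), rng.normal(50, 6, 400)])
w, mu, var = em_gmm_1d(x)
```

With well-separated populations the recovered means land near the two generating speeds, and the mixing weights approximate the 60/40 split.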
On the Analytical and Numerical Properties of the Truncated Laplace Transform
2014-05-01
...classical study of the truncated Fourier transform. The resulting algorithms are applicable to all environments likely to be encountered in applications... In other words, (((L_{a,b})* ∘ L_{a,b})(u_n))(t) = ∫_a^b u_n(s)/(t+s) ds = α_n^2 u_n(t). (2.69) Observation 2.22. Similarly, L_{a,b} ∘ (L_{a,b})* of a function g ∈ L^2(0,... (3.20)) are even and odd functions in the regular sense: U_n(s) = (C_γ(u_n))(s) = (−1)^n U_n(−s). (3.25) In particular, at the point s = 0, we have: U_{2j+1}(0...
Du, Lei; Zhang, Tuo; Liu, Kefei; Yao, Xiaohui; Yan, Jingwen; Risacher, Shannon L; Guo, Lei; Saykin, Andrew J; Shen, Li
2016-01-01
Discovering bi-multivariate associations between genetic markers and neuroimaging quantitative traits is a major task in brain imaging genetics. Sparse Canonical Correlation Analysis (SCCA) is a popular technique in this area for its powerful capability in identifying bi-multivariate relationships coupled with feature selection. The existing SCCA methods impose either the ℓ1-norm or its variants. The ℓ0-norm is more desirable but remains unexplored, since ℓ0-norm minimization is NP-hard. In this paper, we impose the truncated ℓ1-norm to improve the performance of ℓ1-norm based SCCA methods. Besides, we propose two efficient optimization algorithms and prove their convergence. The experimental results, compared with two benchmark methods, show that our method identifies better and more meaningful canonical loading patterns in both simulated and real imaging genetics analyses.
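To see why a truncated penalty approximates the ℓ0-norm better than the plain ℓ1-norm, one common form caps the per-coefficient cost at a threshold τ, so large loadings stop being shrunk while small ones are penalized exactly as under ℓ1. This is a generic illustration; the paper's exact formulation may differ, and the coefficient values below are made up.

```python
import numpy as np

def l1_penalty(u):
    """Plain l1-norm: every coefficient is penalized in proportion to its size."""
    return np.abs(u).sum()

def truncated_l1_penalty(u, tau):
    """Truncated l1-norm (one common form): per-coefficient cost capped at tau,
    so large, informative loadings incur only a constant cost while small,
    noise-level ones are penalized exactly as under l1."""
    return np.minimum(np.abs(u), tau).sum()

# two noise-level entries and two strong canonical loadings (illustrative)
u = np.array([0.05, -0.02, 1.5, -2.0])
plain = l1_penalty(u)                    # 3.57: dominated by the strong loadings
capped = truncated_l1_penalty(u, 0.1)    # 0.27: strong loadings cost only tau each
```

Under the capped penalty the two strong loadings together contribute just 2τ, approximating τ times the ℓ0 count of large coefficients, which is why the truncated penalty biases the solution less.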
Block-conjugate-gradient method
McCarthy, J.F.
1989-01-01
It is shown that by using the block-conjugate-gradient method several, say s, columns of the inverse Kogut-Susskind fermion matrix can be found simultaneously, in less time than it would take to run the standard conjugate-gradient algorithm s times. The method improves in efficiency relative to the standard conjugate-gradient algorithm as the fermion mass is decreased and as the value of the coupling is pushed to its limit before finite-size effects become important. Thus it is potentially useful for measuring propagators in large lattice-gauge-theory calculations of the particle spectrum.
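A minimal dense sketch of the block method (following O'Leary's classical block-CG formulation) shows how s right-hand sides share a single Krylov iteration. The lattice fermion matrix is replaced here by a generic symmetric positive-definite test matrix, which is an assumption for illustration only.

```python
import numpy as np

def block_cg(A, B, tol=1e-10, max_iter=500):
    """Block conjugate gradient for A X = B with s right-hand sides solved
    simultaneously; A must be symmetric positive definite."""
    X = np.zeros_like(B)
    R = B - A @ X                 # block residual, shape (n, s)
    P = R.copy()                  # block search directions
    for _ in range(max_iter):
        RtR = R.T @ R
        alpha = np.linalg.solve(P.T @ (A @ P), RtR)   # s x s step matrix
        X = X + P @ alpha
        R_new = R - (A @ P) @ alpha
        if np.linalg.norm(R_new) < tol:
            break
        beta = np.linalg.solve(RtR, R_new.T @ R_new)  # s x s update matrix
        P = R_new + P @ beta
        R = R_new
    return X

# demo: well-conditioned SPD system with 3 right-hand sides
rng = np.random.default_rng(0)
n, s = 50, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((n, s))
X = block_cg(A, B)
```

Each iteration applies A to an n×s block instead of a single vector, which is exactly where the time saving over s independent CG runs comes from on memory-bound hardware.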
Fenwick, John D.; Pardo-Montero, Juan
2010-01-01
Purpose: Homogenized blocked arcs are intuitively appealing as basis functions for multicriteria optimization of rotational radiotherapy. Such arcs avoid an organ-at-risk (OAR), spread dose out well over the rest-of-body (ROB), and deliver homogeneous doses to a planning target volume (PTV) using intensity-modulated fluence profiles, obtainable either from closed-form solutions or iterative numerical calculations. Here, the analytic and iterative arcs are compared. Methods: Dose distributions have been calculated for nondivergent beams, both including and excluding scatter, beam penumbra, and attenuation effects, which are left out of the derivation of the analytic arcs. The most straightforward analytic arc is created by truncating the well-known Brahme, Roos, and Lax (BRL) solution, cutting its uniform dose region down from an annulus to a smaller nonconcave region lying beyond the OAR. However, the truncation leaves behind high-dose hot-spots immediately on either side of the OAR, generated by very high BRL fluence levels just beyond the OAR. These hot-spots can be eliminated using alternative analytical solutions "C" and "L", which, respectively, deliver constant and linearly rising fluences in the gap region between the OAR and PTV (before truncation). Results: Measured in terms of PTV dose homogeneity, ROB dose-spread, and OAR avoidance, C solutions generate better arc dose distributions than L when scatter, penumbra, and attenuation are left out of the dose modeling. Including these factors, L becomes the best analytical solution. However, the iterative approach generates better dose distributions than any of the analytical solutions because it can account for and compensate for penumbra and scatter effects. Using the analytical solutions as starting points for the iterative methodology, dose distributions almost as good as those obtained using the conventional iterative approach can be calculated very rapidly. Conclusions: The iterative methodology is
Propagation of truncated modified Laguerre-Gaussian beams
Deng, D.; Li, J.; Guo, Q.
2010-01-01
By expanding the circ function into a finite sum of complex Gaussian functions and applying the Collins formula, the propagation of hard-edge diffracted modified Laguerre-Gaussian beams (MLGBs) through a paraxial ABCD system is studied, and the approximate closed-form propagation expression of hard-edge diffracted MLGBs is obtained. The transverse intensity distribution of the MLGB carrying finite power can be characterized by a single bright and symmetric ring during propagation when the aperture radius is very large. Starting from the definition of the generalized truncated second-order moments, the beam quality factor of MLGBs through a hard-edged circular aperture is investigated in a cylindrical coordinate system, which turns out to be dependent on the truncated radius and the beam orders.
Rotating D0-branes and consistent truncations of supergravity
Anabalón, Andrés; Ortiz, Thomas; Samtleben, Henning
2013-01-01
The fluctuations around the D0-brane near-horizon geometry are described by two-dimensional SO(9) gauged maximal supergravity. We work out the U(1)^4 truncation of this theory whose scalar sector consists of five dilaton and four axion fields. We construct the full non-linear Kaluza–Klein ansatz for the embedding of the dilaton sector into type IIA supergravity. This yields a consistent truncation around a geometry which is the warped product of a two-dimensional domain wall and the sphere S^8. As an application, we consider the solutions corresponding to rotating D0-branes which in the near-horizon limit approach AdS_2 × M_8 geometries, and discuss their thermodynamical properties. More generally, we study the appearance of such solutions in the presence of non-vanishing axion fields.
Intersection spaces, spatial homology truncation, and string theory
Banagl, Markus
2010-01-01
Intersection cohomology assigns groups which satisfy a generalized form of Poincaré duality over the rationals to a stratified singular space. The present monograph introduces a method that assigns to certain classes of stratified spaces cell complexes, called intersection spaces, whose ordinary rational homology satisfies generalized Poincaré duality. The cornerstone of the method is a process of spatial homology truncation, whose functoriality properties are analyzed in detail. The material on truncation is autonomous and may be of independent interest to homotopy theorists. The cohomology of intersection spaces is not isomorphic to intersection cohomology and possesses algebraic features such as perversity-internal cup-products and cohomology operations that are not generally available for intersection cohomology. A mirror-symmetric interpretation, as well as applications to string theory concerning massless D-branes arising in type IIB theory during a Calabi-Yau conifold transition, are discussed.
Generation of truncated recombinant form of tumor necrosis factor ...
Purpose: To produce truncated recombinant form of tumor necrosis factor receptor 1 (TNFR1), cysteine-rich domain 2 (CRD2) and CRD3 regions of the receptor were generated using pET28a and E. coli/BL21. Methods: DNA coding sequence of CRD2 and CRD3 was cloned into pET28a vector and the corresponding ...
A SUZAKU OBSERVATION OF NGC 4593: ILLUMINATING THE TRUNCATED DISK
Markowitz, A. G.; Reeves, J. N.
2009-01-01
We report results from a 2007 Suzaku observation of the Seyfert 1 AGN NGC 4593. The narrow Fe Kα emission line has a FWHM width of ∼4000 km s^-1, indicating emission from ≳5000 R_g. There is no evidence for a relativistically broadened Fe K line, consistent with the presence of a radiatively efficient outer disk which is truncated or transitions to an interior radiatively inefficient flow. The Suzaku observation caught the source in a low-flux state; comparison to a 2002 XMM-Newton observation indicates that the hard X-ray flux decreased by a factor of 3.6, while the Fe Kα line intensity and width σ each roughly halved. Two model-dependent explanations for the changes in the Fe Kα line profile are explored. In one, the Fe Kα line width has decreased from ∼10,000 to ∼4000 km s^-1 from 2002 to 2007, suggesting that the thin-disk truncation/transition radius has increased from 1000-2000 to ≳5000 R_g. However, there are indications from other compact accreting systems that such truncation radii tend to be associated only with accretion rates relative to Eddington much lower than that of NGC 4593. In the second model, the line profile in the XMM-Newton observation consists of a time-invariant narrow component plus a broad component originating from the inner part of the truncated disk (∼300 R_g) which has responded to the drop in continuum flux. The Compton reflection component strength R is ∼1.1, consistent with the measured Fe Kα line total equivalent width with an Fe abundance 1.7 times the solar value. The modest soft excess, modeled well by either thermal bremsstrahlung emission or by Comptonization of soft seed photons in an optically thin plasma, has fallen by a factor of ∼20 from 2002 to 2007, ruling out emission from a region 5 light-years in size.
Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series
Zhang, Zhihua
2014-01-01
Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula for the approximation errors of hyperbolic cross truncations of bivariate stochastic Fourier cosine series. Moreover, we propose a kind of Fourier cosine expansion with polynomial factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is set in the context of stochastic processes, our results are also new for deterministic functions.
Filter Factors of Truncated TLS Regularization with Multiple Observations
Hnětynková, I.; Plešinger, Martin; Žáková, J.
2017-01-01
Roč. 62, č. 2 (2017), s. 105-120 ISSN 0862-7940 R&D Projects: GA ČR GA13-06684S Institutional support: RVO:67985807 Keywords : truncated total least squares * multiple right-hand sides * eigenvalues of rank-d update * ill-posed problem * regularization * filter factors Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 0.618, year: 2016 http://hdl.handle.net/10338.dmlcz/146698
Patra, M.; Karttunen, M.E.J.; Hyvönen, M.T.; Falck, E.; Lindqvist, P.; Vattulainen, I.
2003-01-01
We study the influence of truncating the electrostatic interactions in a fully hydrated pure dipalmitoylphosphatidylcholine (DPPC) bilayer through 20 ns molecular dynamics simulations. The computations in which the electrostatic interactions were truncated are compared to similar simulations using
A protein-truncating R179X variant in RNF186 confers protection against ulcerative colitis
Rivas, Manuel A.; Graham, Daniel; Sulem, Patrick; Stevens, Christine; Desch, A. Nicole; Goyette, Philippe; Gudbjartsson, Daniel; Jonsdottir, Ingileif; Thorsteinsdottir, Unnur; Degenhardt, Frauke; Mucha, Soeren; Kurki, Mitja I.; Li, Dalin; D'Amato, Mauro; Annese, Vito; Vermeire, Severine; Weersma, Rinse K.; Halfvarson, Jonas; Paavola-Sakki, Paulina; Lappalainen, Maarit; Lek, Monkol; Cummings, Beryl; Tukiainen, Taru; Haritunians, Talin; Halme, Leena; Koskinen, Lotta L. E.; Ananthakrishnan, Ashwin N.; Luo, Yang; Heap, Graham A.; Visschedijk, Marijn C.; MacArthur, Daniel G.; Neale, Benjamin M.; Ahmad, Tariq; Anderson, Carl A.; Brant, Steven R.; Duerr, Richard H.; Silverberg, Mark S.; Cho, Judy H.; Palotie, Aarno; Saavalainen, Paivi; Kontula, Kimmo; Farkkila, Martti; McGovern, Dermot P. B.; Franke, Andre; Stefansson, Kari; Rioux, John D.; Xavier, Ramnik J.; Daly, Mark J.
Protein-truncating variants protective against human disease provide in vivo validation of therapeutic targets. Here we used targeted sequencing to conduct a search for protein-truncating variants conferring protection against inflammatory bowel disease exploiting knowledge of common variants
Evidence for Truncated Exponential Probability Distribution of Earthquake Slip
Thingbaijam, Kiran Kumar; Mai, Paul Martin
2016-01-01
Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show that the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.
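The truncated exponential law can be illustrated by inverse-transform sampling: the exponential CDF is rescaled so that all mass lies below a maximum-slip cutoff. The rate λ and cutoff below are arbitrary illustrative values, not SRCMOD estimates.

```python
import numpy as np

def sample_truncated_exp(lam, smax, size, rng):
    """Inverse-transform sampling from an exponential law truncated at smax:
    pdf(s) ∝ lam * exp(-lam * s) on [0, smax].  The CDF is
    F(s) = (1 - exp(-lam*s)) / (1 - exp(-lam*smax)), inverted in closed form."""
    u = rng.uniform(size=size)
    return -np.log(1.0 - u * (1.0 - np.exp(-lam * smax))) / lam

rng = np.random.default_rng(0)
# illustrative parameters: mean-controlling rate 0.5 /m, maximum slip 8 m
slip = sample_truncated_exp(lam=0.5, smax=8.0, size=100_000, rng=rng)
```

The sample mean should match the analytic mean of the truncated law, (1 − (1+λc)e^{−λc}) / (λ(1−e^{−λc})) ≈ 1.851 m for these parameters, and no sample can exceed the cutoff, mirroring the physical bound on maximum slip.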
Sanghavi, Suniti; Stephens, Graeme
2015-01-01
In the presence of aerosol and/or clouds, the use of appropriate truncation methods becomes indispensable for accurate but cost-efficient radiative transfer computations. Truncation methods allow the reduction of the large number (usually several hundreds) of Fourier components associated with particulate scattering functions to a more manageable number, thereby making it possible to carry out radiative transfer computations with a modest number of streams. While several truncation methods have been discussed for scalar radiative transfer, few rigorous studies have been made of truncation methods for the vector case. Here, we formally derive the vector form of Wiscombe's delta-m truncation method. Two main sources of error associated with delta-m truncation are identified as the delta-separation error (DSE) and the phase-truncation error (PTE). The view angles most affected by truncation error occur in the vicinity of the direction of exact backscatter. This view geometry occurs commonly in satellite-based remote sensing applications, and is hence of considerable importance. In order to deal with these errors, we adapt the δ-fit approach of Hu et al. (2000) [17] to vector radiative transfer. The resulting δBGE-fit is compared with the vectorized delta-m method. For truncation at l=25 of an original phase matrix consisting of over 300 Fourier components, the use of the δBGE-fit minimizes the error due to truncation at these view angles, while practically eliminating error at other angles. We also show how truncation errors have a distorting effect on hyperspectral absorption line shapes. The choice of the δBGE-fit method over delta-m truncation minimizes errors in absorption line depths, thus affording greater accuracy for sensitive retrievals such as those of XCO2 from OCO-2 or GOSAT measurements. - Highlights: • Derives vector form for delta-m truncation method. • Adapts δ-fit truncation approach to vector RTE as δBGE-fit. • Compares truncation
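The scalar delta-m idea (Wiscombe) that the paper generalizes to the vector case can be sketched for a Henyey-Greenstein phase function, whose l-th Legendre moment is simply g^l: the forward peak is replaced by a delta function of weight f equal to the first discarded moment, and the retained moments are rescaled. The asymmetry parameter g and truncation order M below are illustrative choices.

```python
import numpy as np

def hg_moments(g, n):
    """Legendre moments of the Henyey-Greenstein phase function: chi_l = g**l."""
    return g ** np.arange(n)

def delta_m_truncate(chi, M):
    """Scalar delta-m truncation: represent the forward peak by a delta
    function of weight f = chi_M, and rescale the first M moments so the
    truncated expansion stays normalized (chi'_0 = 1)."""
    f = chi[M]
    chi_trunc = (chi[:M] - f) / (1.0 - f)
    return f, chi_trunc

chi = hg_moments(g=0.85, n=64)          # strongly forward-peaked phase function
f, chi_trunc = delta_m_truncate(chi, M=16)
```

Only M moments survive, so a multi-stream solver needs far fewer streams, while the delta-function term carries the scattered-forward fraction f of the energy exactly.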
Algorithms in combinatorial design theory
Colbourn, CJ
1985-01-01
The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.
Mahnke, Martina; Uprichard, Emma
2014-01-01
Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you've hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it's not the ocean, it's the internet we're talking about, and it's not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to 'tame the algorithmic tiger'. While this is a valuable and often inspiring approach, we...
Optimal block-tridiagonalization of matrices for coherent charge transport
Wimmer, Michael; Richter, Klaus
2009-01-01
Numerical quantum transport calculations are commonly based on a tight-binding formulation. A wide class of quantum transport algorithms require the tight-binding Hamiltonian to be in the form of a block-tridiagonal matrix. Here, we develop a matrix reordering algorithm based on graph partitioning techniques that yields the optimal block-tridiagonal form for quantum transport. The reordered Hamiltonian can lead to significant performance gains in transport calculations, and makes it possible to apply conventional two-terminal algorithms to arbitrarily complex geometries, including multi-terminal structures. The block-tridiagonalization algorithm can thus be the foundation for a generic quantum transport code, applicable to arbitrary tight-binding systems. We demonstrate the power of this approach by applying the block-tridiagonalization algorithm together with the recursive Green's function algorithm to various examples of mesoscopic transport in two-dimensional electron gases in semiconductors and graphene.
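As an illustration of how matrix reordering shrinks the width of the block-tridiagonal structure, one can apply the classical reverse Cuthill-McKee reordering available in SciPy. This is a generic bandwidth-reducing heuristic standing in for the authors' graph-partitioning algorithm, applied to a random sparse symmetric "Hamiltonian" pattern rather than a physical tight-binding model.

```python
import numpy as np
from scipy.sparse import random as sparse_random, csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Random symmetric sparse connectivity pattern standing in for a Hamiltonian.
n = 200
A = sparse_random(n, n, density=0.02, random_state=0, format="csr")
A = csr_matrix(A + A.T)

def bandwidth(M):
    """Maximum distance of a nonzero entry from the diagonal."""
    r, c = M.nonzero()
    return int(np.abs(r - c).max())

# Reorder so that connected sites get nearby indices, narrowing the band.
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_perm = A[perm][:, perm]
```

A narrower band means smaller blocks in the block-tridiagonal form, and the cost of recursive Green's function sweeps grows with the cube of the block size, which is why such reorderings pay off.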
Fitting Social Network Models Using Varying Truncation Stochastic Approximation MCMC Algorithm
Jin, Ick Hoon; Liang, Faming
2013-01-01
The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and the model degeneracy. The existing
Ultrasound guided supraclavicular block.
Hanumanthaiah, Deepak
2013-09-01
Ultrasound guided regional anaesthesia is becoming increasingly popular. The supraclavicular block has been transformed by ultrasound guidance into a potentially safe superficial block. We reviewed the techniques of performing supraclavicular block with special focus on ultrasound guidance.
LGI2 truncation causes a remitting focal epilepsy in dogs.
Eija H Seppälä
2011-07-01
One quadrillion synapses are laid down in the first two years of postnatal construction of the human brain, which are then pruned until age 10 down to the 500 trillion synapses composing the final network. Genetic epilepsies are the most common neurological diseases with onset during pruning, affecting 0.5% of 2-10-year-old children, and these epilepsies are often characterized by spontaneous remission. We previously described a remitting epilepsy in the Lagotto Romagnolo canine breed. Here, we identify the gene defect and the affected neurochemical pathway. We reconstructed a large Lagotto pedigree of around 34 affected animals. Using genome-wide association in 11 discordant sib-pairs from this pedigree, we mapped the disease locus to a 1.7 Mb region of homozygosity on chromosome 3, where we identified a protein-truncating mutation in the Lgi2 gene, a homologue of the human epilepsy gene LGI1. We show that LGI2, like LGI1, is neuronally secreted and acts on metalloproteinase-lacking members of the ADAM family of neuronal receptors, which function in synapse remodeling, and that LGI2 truncation, like LGI1 truncations, prevents secretion and ADAM interaction. The resulting epilepsy onsets at around seven weeks (equivalent to human age two) and remits by four months (human age eight), versus onset after age eight in the majority of human patients with LGI1 mutations. Finally, we show that Lgi2 is expressed highly in the immediate post-natal period until halfway through pruning, unlike Lgi1, which is expressed in the latter part of pruning and beyond. LGI2 acts at least in part through the same ADAM receptors as LGI1, but earlier, ensuring electrical stability (absence of epilepsy) during the pruning years, preceding this same function performed by LGI1 in later years. LGI2 should be considered a candidate gene for common remitting childhood epilepsies, and the LGI2-to-LGI1 transition a model for mechanisms of childhood epilepsy remission.
On the propagation of truncated localized waves in dispersive silica
Salem, Mohamed
2010-01-01
Propagation characteristics of truncated Localized Waves propagating in dispersive silica and free space are numerically analyzed. It is shown that those characteristics are affected by the changes in the relation between the transverse spatial spectral components and the wave vector. Numerical experiments demonstrate that as the non-linearity of this relation gets stronger, the pulses propagating in silica become more immune to decay and distortion whereas the pulses propagating in free-space suffer from early decay and distortion. © 2010 Optical Society of America.
Truncated conformal space approach to scaling Lee-Yang model
Yurov, V.P.; Zamolodchikov, Al.B.
1989-01-01
A numerical approach to 2D relativistic field theories is suggested. Considering a field theory model as an ultraviolet conformal field theory perturbed by a suitable relevant scalar operator, one studies it in finite volume (on a circle). The perturbed Hamiltonian acts in the conformal field theory space of states, and its matrix elements can be extracted from the conformal field theory. Truncation of the space at a reasonable level results in a finite-dimensional problem for numerical analysis. The nonunitary field theory with the ultraviolet region controlled by the minimal conformal theory M(2/5) is studied in detail. 9 refs.; 17 figs.
Chaos and noise in a truncated Toda potential
Habib, S.; Kandrup, H.E.; Mahon, M.E.
1996-01-01
Results are reported from a numerical investigation of orbits in a truncated Toda potential that is perturbed by weak friction and noise. Aside from the perturbations displaying a simple scaling in the amplitude of the friction and noise, it is found that even very weak friction and noise can induce an extrinsic diffusion through cantori on a time scale that is much shorter than that associated with intrinsic diffusion in the unperturbed system. The results have applications in galactic dynamics and in the formation of a beam halo in charged particle beams. copyright 1996 The American Physical Society
Bootstrapped efficiency measures of oil blocks in Angola
Barros, C.P.; Assaf, A.
2009-01-01
This paper investigates the technical efficiency of Angolan oil blocks over the period 2002-2007. A double-bootstrap data envelopment analysis (DEA) model is adopted, composed in the first stage of a DEA variable-returns-to-scale (VRS) model, followed in the second stage by a bootstrapped truncated regression. Results showed that on average the technical efficiency fluctuated over the period of study, but deep and ultradeep oil blocks generally maintained a consistent efficiency level. Policy implications are derived.
Homogeneous bilateral block shifts
Douglas class were classified in [3]; they are unilateral block shifts of arbitrary block size (i.e. dim H(n) can be anything). However, no examples of irreducible homogeneous bilateral block shifts of block size larger than 1 were known until now.
De Götzen , Amalia; Mion , Luca; Tache , Olivier
2007-01-01
We call sound algorithms the categories of algorithms that deal with the digital sound signal. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Symmetric truncations of the shallow-water equations
Rouhi, A.; Abarbanel, H.D.I.
1993-01-01
Conservation of potential vorticity in Eulerian fluids reflects particle interchange symmetry in the Lagrangian fluid version of the same theory. The algebra associated with this symmetry in the shallow-water equations is studied here, and we give a method for truncating the degrees of freedom of the theory which preserves a maximal number of invariants associated with this algebra. The finite-dimensional symmetry associated with keeping only N modes of the shallow-water flow is SU(N). In the limit where the number of modes goes to infinity (N→∞) all the conservation laws connected with potential vorticity conservation are recovered. We also present a Hamiltonian which is invariant under this truncated symmetry and which reduces to the familiar shallow-water Hamiltonian when N→∞. All this provides a finite-dimensional framework for numerical work with the shallow-water equations which preserves not only energy and enstrophy but all other known conserved quantities consistent with the finite number of degrees of freedom. The extension of these ideas to other nearly two-dimensional flows is discussed
Learning Mixtures of Truncated Basis Functions from Data
Langseth, Helge; Nielsen, Thomas Dyhre; Pérez-Bernabé, Inmaculada
2014-01-01
In this paper we investigate methods for learning hybrid Bayesian networks from data. First we utilize a kernel density estimate of the data in order to translate the data into a mixture of truncated basis functions (MoTBF) representation using a convex optimization technique. When utilizing a kernel density estimate ... we propose an alternative learning method that relies on the cumulative distribution function of the data. Empirical results demonstrate the usefulness of the approaches: even though the methods produce estimators that are slightly poorer than the state of the art (in terms of log-likelihood), they are significantly faster, and therefore indicate that the MoTBF framework can be used for inference and learning in reasonably sized domains. Furthermore, we show how a particular sub-class of MoTBF potentials (learnable by the proposed methods) can be exploited to significantly reduce complexity during inference.
Theoretical analysis of balanced truncation for linear switched systems
Petreczky, Mihaly; Wisniewski, Rafal; Leth, John-Josef
2012-01-01
In this paper we present a theoretical analysis of model reduction of linear switched systems based on balanced truncation, presented in [1,2]. More precisely, (1) we provide a bound on the estimation error using the L2 gain, and (2) we provide a system-theoretic interpretation of the Gramians and their singular values. ... The main tool for showing this independence is realization theory of linear switched systems. [1] H. R. Shaker and R. Wisniewski, "Generalized gramian framework for model/controller order reduction of switched systems", International Journal of Systems Science, Vol. 42, Issue 8, 2011, 1277-1291. [2] H. R. Shaker and R. Wisniewski, "Switched Systems Reduction Framework Based on Convex Combination of Generalized Gramians", Journal of Control Science and Engineering, 2009.
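For the classical (non-switched) LTI case that this analysis builds on, balanced truncation can be sketched with the square-root method: solve the two Lyapunov equations for the controllability and observability Gramians, find the transformation that makes both equal and diagonal, and keep the states with the largest Hankel singular values. The test system below is an arbitrary stable example, not from the cited papers.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Classical balanced truncation of a stable LTI system x' = Ax + Bu,
    y = Cx: balance the two Gramians and keep the r dominant states."""
    # Gramians:  A P + P A^T + B B^T = 0,   A^T Q + Q A + C^T C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    R = cholesky(P, lower=True)              # P = R R^T (square-root factor)
    U, s, _ = svd(R.T @ Q @ R)               # s = squared Hankel singular values
    hsv = np.sqrt(s)
    T = R @ U @ np.diag(s ** -0.25)          # balancing transformation
    Tinv = np.diag(s ** 0.25) @ U.T @ np.linalg.inv(R)
    Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
    return Ab[:r, :r], Bb[:r, :], Cb[:, :r], hsv

# arbitrary stable 3-state SISO example, reduced to 2 states
A = np.array([[-1.0, 0.2, 0.0], [0.0, -2.0, 0.3], [0.0, 0.0, -5.0]])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 0.5, 0.2]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
```

In the balanced coordinates both Gramians equal diag(hsv), so discarding the states with small Hankel singular values incurs an H-infinity error of at most twice the sum of the discarded values; the switched-system generalization in [1,2] replaces the Gramians with generalized ones shared across subsystems.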
Firewalls as artefacts of inconsistent truncations of quantum geometries
Germani, Cristiano [Max-Planck-Institut fuer Physik, Muenchen (Germany); Arnold Sommerfeld Center, Ludwig-Maximilians-University, Muenchen (Germany); Institut de Ciencies del Cosmos, Universitat de Barcelona (Spain); Sarkar, Debajyoti [Max-Planck-Institut fuer Physik, Muenchen (Germany); Arnold Sommerfeld Center, Ludwig-Maximilians-University, Muenchen (Germany)
2016-01-15
In this paper we argue that a firewall is simply a manifestation of an inconsistent truncation of non-perturbative effects that unitarize the semiclassical black hole. Namely, we show that a naive truncation of quantum corrections to the Hawking spectrum at order O(e^(-S)) inexorably leads to a ''localised'' divergent energy density near the black hole horizon. Nevertheless, in the same approximation, a distant observer only sees a discretised spectrum and concludes that unitarity is achieved by O(e^(-S)) effects. This is due to the fact that instead, the correct quantum corrections to the Hawking spectrum go like O(g^(tt) e^(-S)). Therefore, while at a distance far away from the horizon, where g^(tt) ∼ 1, quantum corrections are perturbative, they do diverge close to the horizon, where g^(tt) → ∞. Nevertheless, these ''corrections'' nicely re-sum so that correlation functions are smooth at the would-be black hole horizon. Thus, we conclude that the appearance of firewalls is just a signal of the breaking of the semiclassical approximation at the Page time, even for large black holes. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Hamiltonian truncation approach to quenches in the Ising field theory
T. Rakovszky
2016-10-01
Full Text Available In contrast to lattice systems where powerful numerical techniques such as matrix product state based methods are available to study the non-equilibrium dynamics, the non-equilibrium behaviour of continuum systems is much harder to simulate. We demonstrate here that Hamiltonian truncation methods can be efficiently applied to this problem, by studying the quantum quench dynamics of the 1+1 dimensional Ising field theory using a truncated free fermionic space approach. After benchmarking the method with integrable quenches corresponding to changing the mass in a free Majorana fermion field theory, we study the effect of an integrability breaking perturbation by the longitudinal magnetic field. In both the ferromagnetic and paramagnetic phases of the model we find persistent oscillations with frequencies set by the low-lying particle excitations not only for small, but even for moderate size quenches. In the ferromagnetic phase these particles are the various non-perturbative confined bound states of the domain wall excitations, while in the paramagnetic phase the single magnon excitation governs the dynamics, allowing us to capture the time evolution of the magnetisation using a combination of known results from perturbation theory and form factor based methods. We point out that the dominance of low lying excitations allows for the numerical or experimental determination of the mass spectra through the study of the quench dynamics.
A fast BDD algorithm for large coherent fault trees analysis
Jung, Woo Sik; Han, Sang Hoon; Ha, Jaejoo
2004-01-01
Although binary decision diagram (BDD) algorithms have been applied to large fault trees until quite recently, such trees are not solved efficiently in a short time, since the size of a BDD structure increases exponentially with the number of variables. Furthermore, the truncation of If-Then-Else (ITE) connectives by a probability or size limit, and the subsuming to delete subsets, could not be directly applied to the intermediate BDD structure under construction. This is the motivation for this work. This paper presents an efficient BDD algorithm for large coherent systems (coherent BDD algorithm) by which the truncation and subsuming can be performed during the construction of the BDD structure. A set of new formulae developed in this study for AND or OR operations between two ITE connectives of a coherent system makes it possible to delete subsets and truncate ITE connectives with a probability or size limit in the intermediate BDD structure under construction. By means of the truncation and subsuming in every step of the calculation, large fault trees for coherent systems (coherent fault trees) are efficiently solved in a short time using less memory. Furthermore, the coherent BDD algorithm is, with respect to the size of the BDD structure, much less sensitive to variable ordering than the conventional BDD algorithm.
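The two pruning operations named above, truncation by a probability limit and subsuming, can be illustrated as a hedged sketch on plain minimal-cut-set lists rather than on the paper's ITE formulae. The event names and probabilities below are hypothetical:

```python
# Sketch of cut-set truncation and subsuming, the two pruning ideas used in
# BDD-based fault-tree analysis. Hypothetical events and probabilities; this
# is not the paper's coherent BDD algorithm.

def subsume(cutsets):
    """Remove any cut set that is a superset of another (it is redundant)."""
    kept = []
    for cs in sorted(cutsets, key=len):
        if not any(k <= cs for k in kept):  # k <= cs: k is a subset of cs
            kept.append(cs)
    return kept

def truncate(cutsets, probs, cutoff):
    """Drop cut sets whose probability (product of event probs) < cutoff."""
    def p(cs):
        out = 1.0
        for e in cs:
            out *= probs[e]
        return out
    return [cs for cs in cutsets if p(cs) >= cutoff]

probs = {"A": 1e-2, "B": 1e-3, "C": 1e-4}
cutsets = [{"A"}, {"A", "B"}, {"B", "C"}]
cutsets = subsume(cutsets)                 # {"A","B"} absorbed by {"A"}
cutsets = truncate(cutsets, probs, 1e-6)   # {"B","C"} has prob 1e-7, dropped
print(cutsets)  # [{'A'}]
```

The paper's contribution is performing both operations on the intermediate BDD structure during construction, rather than on an explicit cut-set list as here.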
Poly(ferrocenylsilane)-block-Polylactide Block Copolymers
Roerdink, M.; van Zanten, Thomas S.; Hempenius, Mark A.; Zhong, Zhiyuan; Feijen, Jan; Vancso, Gyula J.
2007-01-01
A PFS/PLA block copolymer was studied to probe the effect of strong surface interactions on pattern formation in PFS block copolymer thin films. Successful synthesis of PFS-b-PLA was demonstrated. Thin films of these polymers show phase separation to form PFS microdomains in a PLA matrix, and ...
Algorithmic Cryptanalysis
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applications.
Astrocyte truncated-TrkB mediates BDNF antiapoptotic effect leading to neuroprotection.
Saba, Julieta; Turati, Juan; Ramírez, Delia; Carniglia, Lila; Durand, Daniela; Lasaga, Mercedes; Caruso, Carla
2018-05-31
Astrocytes are glial cells that help maintain brain homeostasis and become reactive in neurodegenerative processes releasing both harmful and beneficial factors. We have demonstrated that brain-derived neurotrophic factor (BDNF) expression is induced by melanocortins in astrocytes but BDNF actions in astrocytes are largely unknown. We hypothesize that BDNF may prevent astrocyte death resulting in neuroprotection. We found that BDNF increased astrocyte viability, preventing apoptosis induced by serum deprivation by decreasing active caspase-3 and p53 expression. The antiapoptotic action of BDNF was abolished by ANA-12 (a specific TrkB antagonist) and by K252a (a general Trk antagonist). Astrocytes only express the BDNF receptor TrkB truncated isoform 1, TrkB-T1. BDNF induced ERK, Akt and Src (a non-receptor tyrosine kinase) activation in astrocytes. Blocking ERK and Akt pathways abolished BDNF protection in serum deprivation-induced cell death. Moreover, BDNF protected astrocytes from death by 3-nitropropionic acid (3-NP), an effect also blocked by ANA-12, K252a, and inhibitors of ERK, calcium and Src. BDNF reduced reactive oxygen species (ROS) levels induced in astrocytes by 3-NP and increased xCT expression and glutathione levels. Astrocyte conditioned media (ACM) from untreated astrocytes partially protected PC12 neurons whereas ACM from BDNF-treated astrocytes completely protected PC12 neurons from 3-NP-induced apoptosis. Both ACM from control and BDNF-treated astrocytes markedly reduced ROS levels induced by 3-NP in PC12 cells. Our results demonstrate that BDNF protects astrocytes from cell death through TrkB-T1 signaling, exerts an antioxidant action, and induces release of neuroprotective factors from astrocytes. This article is protected by copyright. All rights reserved.
Yang, Yuli; Ma, Hao; Aïssa, Sonia
2012-01-01
In addressing the issue of taking full advantage of the shared spectrum under imposed limitations in a cognitive radio (CR) network, we exploit a cross-layer design for the communications of secondary users (SUs), which combines adaptive modulation and coding (AMC) at the physical layer with truncated automatic repeat request (ARQ) protocol at the data link layer. To achieve high spectral efficiency (SE) while maintaining a target packet loss probability (PLP), switching among different transmission modes is performed to match the time-varying propagation conditions pertaining to the secondary link. Herein, by minimizing the SU's packet error rate (PER) with each transmission mode subject to the spectrum-sharing constraints, we obtain the optimal power allocation at the secondary transmitter (ST) and then derive the probability density function (pdf) of the received SNR at the secondary receiver (SR). Based on these statistics, the SU's packet loss rate and average SE are obtained in closed form, considering transmissions over block-fading channels with different distributions. Our results quantify the relation between the performance of a secondary link exploiting the cross-layer-designed adaptive transmission and the interference inflicted on the primary user (PU) in CR networks. © 1967-2012 IEEE.
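As an editorial sketch of the truncated-ARQ component described above (not of the paper's AMC modes or spectrum-sharing constraints): with a per-attempt packet error rate and a maximum of N transmissions, the residual packet loss probability is the probability that all N attempts fail, which a short Monte Carlo run can confirm. The parameters below are illustrative:

```python
import random

# Truncated ARQ sketch: a packet is retransmitted until success or until
# n_max attempts are used; the residual loss probability is per ** n_max.
# Illustrative parameters, not those of the paper's transmission modes.

def simulate_loss(per, n_max, trials, rng):
    lost = 0
    for _ in range(trials):
        # The packet is lost only if every one of the n_max attempts fails
        if all(rng.random() < per for _ in range(n_max)):
            lost += 1
    return lost / trials

rng = random.Random(0)
per, n_max = 0.2, 3
analytic = per ** n_max                              # 0.008
empirical = simulate_loss(per, n_max, 200_000, rng)
print(analytic, round(empirical, 4))
```

In the paper this per-attempt error rate depends on the AMC mode and the spectrum-sharing power constraints; here it is simply a fixed number.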
Design and Synthesis of a Series of Truncated Neplanocin Fleximers
Sarah C. Zimmermann
2014-12-01
Full Text Available In an effort to study the effects of flexibility on enzyme recognition and activity, we have developed several different series of flexible nucleoside analogues in which the purine base is split into its respective imidazole and pyrimidine components. The focus of this particular study was to synthesize the truncated neplanocin A fleximers to investigate their potential anti-protozoan activities by inhibition of S-adenosylhomocysteine hydrolase (SAHase). The three fleximers tested displayed poor trypanocidal activities, with EC50 values around 200 μM. Further studies of the corresponding ribose fleximers, most closely related to the natural nucleoside substrates, revealed low affinity for the known T. brucei nucleoside transporters P1 and P2, which may be the reason for the lack of trypanocidal activity observed.
Administering truncated receive functions in a parallel messaging interface
Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E
2014-12-09
Administering truncated receive functions in a parallel messaging interface (`PMI`) of a parallel computer comprising a plurality of compute nodes coupled for data communications through the PMI and through a data communications network, including: sending, through the PMI on a source compute node, a quantity of data from the source compute node to a destination compute node; specifying, by an application on the destination compute node, a portion of the quantity of data to be received by the application on the destination compute node and a portion of the quantity of data to be discarded; receiving, by the PMI on the destination compute node, all of the quantity of data; providing, by the PMI on the destination compute node to the application on the destination compute node, only the portion of the quantity of data to be received by the application; and discarding, by the PMI on the destination compute node, the portion of the quantity of data to be discarded.
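The claimed semantics can be sketched in a few lines: the messaging layer receives the full payload, delivers only the application-requested portion, and discards the remainder. This is a hedged illustration with invented function names, not the actual PMI API of the patent:

```python
# Sketch of a truncated receive: the messaging layer receives the full
# payload, hands the application only the first `keep` bytes, and discards
# the remainder. Function name and signature are illustrative, not a real
# PMI interface.

def truncated_recv(payload: bytes, keep: int) -> bytes:
    received = bytes(payload)     # the layer receives *all* of the data
    delivered = received[:keep]   # application-visible portion
    # received[keep:] is discarded by the messaging layer, not the application
    return delivered

data = b"0123456789"
print(truncated_recv(data, 4))  # b'0123'
```

The point of the scheme is that the discard happens inside the messaging interface, so the application never has to allocate or inspect the unwanted bytes.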
Effect of truncated cone roughness element density on hydrodynamic drag
Womack, Kristofer; Schultz, Michael; Meneveau, Charles
2017-11-01
An experimental study was conducted on rough-wall, turbulent boundary layer flow with roughness elements whose idealized shape model barnacles that cause hydrodynamic drag in many applications. Varying planform densities of truncated cone roughness elements were investigated. Element densities studied ranged from 10% to 79%. Detailed turbulent boundary layer velocity statistics were recorded with a two-component LDV system on a three-axis traverse. Hydrodynamic roughness length (z0) and skin-friction coefficient (Cf) were determined and compared with the estimates from existing roughness element drag prediction models including Macdonald et al. (1998) and other recent models. The roughness elements used in this work model idealized barnacles, so implications of this data set for ship powering are considered. This research was supported by the Office of Naval Research and by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.
Pair truncation for rotational nuclei: j=17/2 model
Halse, P.; Jaqua, L.; Barrett, B.R.
1989-01-01
The suitability of the pair condensate approach for rotational states is studied in a single j=17/2 shell of identical nucleons interacting through a quadrupole-quadrupole Hamiltonian. The ground band and a K=2 excited band are both studied in detail. A direct comparison of the exact states with those constituting the SD and SDG subspaces is used to identify the important degrees of freedom for these levels. The range of pairs necessary for a good description is found to be highly state dependent; S and D pairs are the major constituents of the low-spin ground-band levels, while G pairs are needed for those in the γ band. Energy spectra are obtained for each truncated subspace. SDG pairs allow accurate reproduction of the binding energy and K=2 excitation energy, but still give a moment of inertia which is about 30% too small even for the lowest levels
Generalized Truncated Methods for an Efficient Solution of Retrial Systems
Ma Jose Domenech-Benlloch
2008-01-01
Full Text Available We are concerned with the analytic solution of multiserver retrial queues including the impatience phenomenon. As there are no closed-form solutions to these systems, approximate methods are required. We propose two different generalized truncated methods to effectively solve this type of system. The methods proposed are based on the homogenization of the state space beyond a given number of users in the retrial orbit. We compare the proposed methods with the most well-known methods that have appeared in the literature in a wide range of scenarios. We conclude that the proposed methods generally outperform previous proposals in terms of accuracy for the most common performance parameters used in retrial systems, with a moderate growth in the computational cost.
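The core idea of state-space truncation can be illustrated, as a hedged sketch, on the simplest queue with a known solution rather than on the paper's multiserver retrial model: an M/M/1 queue truncated at level K, whose stationary probabilities converge to the exact geometric solution as K grows:

```python
# Minimal illustration of state-space truncation (not the paper's
# generalized truncated methods): stationary distribution of an M/M/1
# queue on a state space truncated at level K, compared with the exact
# result pi_0 = 1 - rho.

def mm1_truncated(lam, mu, K):
    rho = lam / mu
    weights = [rho ** n for n in range(K + 1)]  # detailed balance: pi_n ∝ rho^n
    total = sum(weights)
    return [w / total for w in weights]

lam, mu = 1.0, 2.0                 # rho = 0.5, so exact pi_0 = 0.5
for K in (5, 20, 50):
    print(K, mm1_truncated(lam, mu, K)[0])  # → 0.5 as K grows
```

The paper's methods refine this idea by homogenizing, rather than simply cutting off, the states beyond the truncation level.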
Developmental regulation of human truncated nerve growth factor receptor
DiStefano, P.S.; Clagett-Dame, M.; Chelsea, D.M.; Loy, R. (Abbott Laboratories, Abbott Park, IL (USA))
1991-01-01
Monoclonal antibodies (designated XIF1 and IIIG5) recognizing distinct epitopes of the human truncated nerve growth factor receptor (NGF-Rt) were used in a two-site radiometric immunosorbent assay to monitor levels of NGF-Rt in human urine as a function of age. Urine samples were collected from 70 neurologically normal subjects ranging in age from 1 month to 68 years. By using this sensitive two-site radiometric immunosorbent assay, NGF-Rt levels were found to be highest in urine from 1-month old subjects. By 2.5 months, NGF-Rt values were half of those seen at 1 month and decreased more gradually between 0.5 and 15 years. Between 15 and 68 years, urine NGF-Rt levels were relatively constant at 5% of 1-month values. No evidence for diurnal variation of adult NGF-Rt was apparent. Pregnant women in their third trimester showed significantly elevated urine NGF-Rt values compared with age-matched normals. Affinity labeling of NGF-Rt with 125I-NGF followed by immunoprecipitation with ME20.4-IgG and gel autoradiography indicated that neonatal urine contained high amounts of truncated receptor (Mr = 50 kd); decreasingly lower amounts of NGF-Rt were observed on gel autoradiograms with development, indicating that the two-site radiometric immunosorbent assay correlated well with the affinity labeling technique for measuring NGF-Rt. NGF-Rt in urines from 1-month-old and 36-year-old subjects showed no differences in affinities for NGF or for the monoclonal antibody IIIG5. These data show that NGF-Rt is developmentally regulated in human urine, and are discussed in relation to the development and maturation of the peripheral nervous system.
Algorithmic Mathematics
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Analysis of the upper-truncated Weibull distribution for wind speed
Kantar, Yeliz Mert; Usta, Ilhan
2015-01-01
Highlights: • Upper-truncated Weibull distribution is proposed to model wind speed. • Upper-truncated Weibull distribution nests the Weibull distribution as a special case. • Maximum likelihood is the best method for the upper-truncated Weibull distribution. • Fitting accuracy of the upper-truncated Weibull is analyzed on wind speed data. - Abstract: Accurately modeling wind speed is critical in estimating the wind energy potential of a certain region. In order to model wind speed data smoothly, several statistical distributions have been studied. A truncated distribution is the conditional distribution that results from restricting the domain of a statistical distribution, and it nests the base distribution as a special case. This paper proposes, for the first time, the use of the upper-truncated Weibull distribution in modeling wind speed data and in estimating wind power density. In addition, a comparison is made between the upper-truncated Weibull distribution and the well-known Weibull distribution using wind speed data measured in various regions of Turkey. The obtained results indicate that the upper-truncated Weibull distribution performs better than the Weibull distribution in estimating wind speed distribution and wind power. Therefore, the upper-truncated Weibull distribution can be an alternative for use in the assessment of wind energy potential.
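The distribution named above has a simple closed form: the density of a Weibull distribution truncated above at T is f(x)/F(T) on (0, T], where f and F are the Weibull pdf and cdf. A minimal sketch with illustrative (not fitted) parameters, including a numeric check that the truncated density integrates to one:

```python
import math

# Upper-truncated Weibull sketch: density f_T(x) = f(x) / F(T) on (0, T].
# Shape k, scale c, and truncation point T are illustrative values, not
# parameters fitted to the paper's wind-speed data.

def weibull_pdf(x, k, c):
    return (k / c) * (x / c) ** (k - 1) * math.exp(-((x / c) ** k))

def weibull_cdf(x, k, c):
    return 1.0 - math.exp(-((x / c) ** k))

def trunc_weibull_pdf(x, k, c, T):
    if not 0.0 < x <= T:
        return 0.0
    return weibull_pdf(x, k, c) / weibull_cdf(T, k, c)

k, c, T = 2.0, 8.0, 25.0   # e.g. wind speeds capped at 25 m/s
# Midpoint-rule check that the truncated density integrates to ~1 on (0, T]
n = 100_000
h = T / n
area = sum(trunc_weibull_pdf((i + 0.5) * h, k, c, T) for i in range(n)) * h
print(round(area, 6))  # ≈ 1.0
```

Dividing by F(T) is what makes the truncated density renormalize; as T → ∞ the factor tends to 1 and the base Weibull distribution is recovered, which is the nesting property listed in the highlights.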
Block Tridiagonal Matrices in Electronic Structure Calculations
Petersen, Dan Erik
... in the Landauer–Büttiker ballistic transport regime. These calculations concentrate on determining the so-called Green's function matrix, or portions thereof, which is the inverse of a block tridiagonal general complex matrix. To this end, a sequential algorithm based on Gaussian elimination named Sweeps...
DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM
TAYSEER S. ATIA
2014-08-01
Full Text Available The Blowfish algorithm is a strong, simple block cipher used to encrypt data in blocks of 64 bits. The key and S-box generation process in this algorithm requires time and memory space, which makes the algorithm inconvenient for smart cards or applications that require changing the secret key frequently. In this paper a new key and S-box generation process was developed based on the Self Synchronization Stream Cipher (SSS) algorithm, where the key generation process was modified to be used with the Blowfish algorithm. Test results show that the generation process requires relatively little time and reasonably low memory, which enhances the algorithm and gives it the possibility of different usage.
Probabilistic Decision Based Block Partitioning for Future Video Coding
Wang, Zhao; Wang, Shiqi; Zhang, Jian; Wang, Shanshe; Ma, Siwei
2017-01-01
... the mode decision problem is cast into a probabilistic framework to select the final partition based on the confidence interval decision strategy. Experimental results show that the proposed CIET algorithm can speed up the QTBT block partitioning structure ...
A Parallel Prefix Algorithm for Almost Toeplitz Tridiagonal Systems
Sun, Xian-He; Joslin, Ronald D.
1995-01-01
A compact scheme is a discretization scheme that is advantageous in obtaining highly accurate solutions. However, the resulting systems from compact schemes are tridiagonal systems that are difficult to solve efficiently on parallel computers. Considering the almost symmetric Toeplitz structure, a parallel algorithm, simple parallel prefix (SPP), is proposed. The SPP algorithm requires less memory than the conventional LU decomposition and is efficient on parallel machines. It consists of a prefix communication pattern and AXPY operations. Both the computation and the communication can be truncated without degrading the accuracy when the system is diagonally dominant. A formal accuracy study has been conducted to provide a simple truncation formula. Experimental results have been measured on a MasPar MP-1 SIMD machine and on a Cray 2 vector machine. Experimental results show that the simple parallel prefix algorithm is a good algorithm for symmetric, almost symmetric Toeplitz tridiagonal systems and for the compact scheme on high-performance computers.
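For reference, the sequential baseline that parallel prefix schemes such as SPP compete with is the O(n) Thomas algorithm; a hedged sketch for a diagonally dominant symmetric Toeplitz tridiagonal system (constant diagonals a, b, a; coefficients illustrative) follows:

```python
# Sequential baseline for the tridiagonal systems discussed above: the
# Thomas algorithm applied to a diagonally dominant symmetric Toeplitz
# system. This is the O(n) serial solver, not the parallel SPP algorithm.

def thomas_toeplitz(a, b, rhs):
    """Solve T x = rhs where T has sub/super-diagonal a and diagonal b."""
    n = len(rhs)
    c = [0.0] * n   # modified super-diagonal
    d = [0.0] * n   # modified right-hand side
    c[0] = a / b
    d[0] = rhs[0] / b
    for i in range(1, n):                     # forward elimination
        denom = b - a * c[i - 1]
        c[i] = a / denom
        d[i] = (rhs[i] - a * d[i - 1]) / denom
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Diagonally dominant example: |b| > 2|a|
a, b = 1.0, 4.0
x_true = [1.0, 2.0, 3.0, 4.0]
rhs = [b * x_true[0] + a * x_true[1]] + [
    a * x_true[i - 1] + b * x_true[i] + a * x_true[i + 1] for i in range(1, 3)
] + [a * x_true[2] + b * x_true[3]]
x = thomas_toeplitz(a, b, rhs)
print([round(v, 10) for v in x])  # [1.0, 2.0, 3.0, 4.0]
```

The forward-elimination recurrence here is exactly the data dependence that the SPP algorithm reorganizes into a prefix computation; diagonal dominance is also what justifies the truncation of that prefix without loss of accuracy.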
Wang, Fei; Liu, Junyan; Mohummad, Oliullah; Wang, Yang
2018-04-01
In this paper, truncated-correlation photothermal coherence tomography (TC-PCT) was used as a nondestructive inspection technique to evaluate glass-fiber reinforced polymer (GFRP) composite surface cracks. Chirped-pulsed signal that combines linear frequency modulation and pulse excitation was proposed as an excitation signal to detect GFRP composite surface cracks. The basic principle of TC-PCT and extraction algorithm of the thermal wave signal feature was described. The comparison experiments between lock-in thermography, thermal wave radar imaging and chirped-pulsed photothermal radar for detecting GFRP artificial surface cracks were carried out. Experimental results illustrated that chirped-pulsed photothermal radar has the merits of high signal-to-noise ratio in detecting GFRP composite surface cracks. TC-PCT as a depth-resolved photothermal imaging modality was employed to enable three-dimensional visualization of GFRP composite surface cracks. The results showed that TC-PCT can effectively evaluate the cracks depth of GFRP composite.
Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.
Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J
2018-02-15
Electrically active brain regions can be located applying MUltiple SIgnal Classification (MUSIC) on magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm, which prevents estimation of the true number of brain-signal sources accurately. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time-courses, and initial estimate for the dimension of the signal space. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications. Copyright © 2017 Elsevier Inc. All rights reserved.
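The subspace-scanning idea underlying all MUSIC variants can be sketched with a hypothetical forward matrix: data from two active "sources" span a signal subspace, and scanning each candidate column for its correlation with that subspace recovers the active indices. This is a hedged, noiseless toy, not the TRAP-MUSIC algorithm with its sequential dimension reduction:

```python
import random
import math

# Minimal MUSIC-style subspace scan (not TRAP-MUSIC itself). L is a
# hypothetical random forward matrix; columns 7 and 23 are the "active"
# sources. Pure-Python linear algebra keeps the sketch self-contained.

def dot(u, v): return sum(a * b for a, b in zip(u, v))
def norm(u): return math.sqrt(dot(u, u))

rng = random.Random(42)
m, n = 20, 50
L = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(m)]
col = lambda j: [L[i][j] for i in range(m)]

true_sources = (7, 23)
# Two snapshots with linearly independent source amplitudes
snapshots = []
for s1, s2 in ((1.0, 0.5), (0.3, 1.0)):
    snapshots.append([s1 * a + s2 * b
                      for a, b in zip(col(true_sources[0]), col(true_sources[1]))])

# Gram-Schmidt orthonormal basis of the (noiseless) signal subspace
basis = []
for v in snapshots:
    w = v[:]
    for q in basis:
        cq = dot(q, w)
        w = [wi - cq * qi for wi, qi in zip(w, q)]
    basis.append([wi / norm(w) for wi in w])

def subspace_corr(j):
    v = col(j)
    proj = math.sqrt(sum(dot(q, v) ** 2 for q in basis))
    return proj / norm(v)

scores = sorted(range(n), key=subspace_corr, reverse=True)
print(sorted(scores[:2]))  # → [7, 23]
```

The active columns correlate perfectly with the signal subspace while unrelated columns do not; RAP-MUSIC repeats this scan after projecting out each found source, and TRAP-MUSIC additionally truncates the subspace dimension at each recursion, which is the correction the paper introduces.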
Jönsson, Jeppe
2015-01-01
Block tearing is considered in several codes as a pure block tension or a pure block shear failure mechanism. However in many situations the load acts eccentrically and involves the transfer of a substantial moment in combination with the shear force and perhaps a normal force. A literature study shows that no readily available tests with a well-defined substantial eccentricity have been performed. This paper presents theoretical and experimental work leading towards generalized block failure capacity methods. Simple combination of normal force, shear force and moment stress distributions along yield lines around the block leads to simple interaction formulas similar to other interaction formulas in the codes.
Games, Dora; Valera, Elvira; Spencer, Brian; Rockenstein, Edward; Mante, Michael; Adame, Anthony; Patrick, Christina; Ubhi, Kiren; Nuber, Silke; Sacayon, Patricia; Zago, Wagner; Seubert, Peter; Barbour, Robin; Schenk, Dale; Masliah, Eliezer
2014-07-09
Parkinson's disease (PD) and dementia with Lewy bodies (DLB) are common neurodegenerative disorders of the aging population, characterized by progressive and abnormal accumulation of α-synuclein (α-syn). Recent studies have shown that C-terminus (CT) truncation and propagation of α-syn play a role in the pathogenesis of PD/DLB. Therefore, we explored the effect of passive immunization against the CT of α-syn in the mThy1-α-syn transgenic (tg) mouse model, which resembles the striato-nigral and motor deficits of PD. Mice were immunized with the new monoclonal antibodies 1H7, 5C1, or 5D12, all directed against the CT of α-syn. CT α-syn antibodies attenuated synaptic and axonal pathology, reduced the accumulation of CT-truncated α-syn (CT-α-syn) in axons, rescued the loss of tyrosine hydroxylase fibers in striatum, and improved motor and memory deficits. Among them, 1H7 and 5C1 were most effective at decreasing levels of CT-α-syn and higher-molecular-weight aggregates. Furthermore, in vitro studies showed that preincubation of recombinant α-syn with 1H7 and 5C1 prevented CT cleavage of α-syn. In a cell-based system, CT antibodies reduced cell-to-cell propagation of full-length α-syn, but not of the CT-α-syn that lacked the 118-126 aa recognition site needed for antibody binding. Furthermore, the results obtained after lentiviral expression of α-syn suggest that antibodies might be blocking the extracellular truncation of α-syn by calpain-1. Together, these results demonstrate that antibodies against the CT of α-syn reduce levels of CT-truncated fragments of the protein and its propagation, thus ameliorating PD-like pathology and improving behavioral and motor functions in a mouse model of this disease. Copyright © 2014 the authors 0270-6474/14/349441-14$15.00/0.
Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt; Fill, Thomas
2018-01-01
NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. NASA is also currently designing the next evolution of SLS, the Block-1B. The Block-1 and Block-1B vehicles will use the Powered Explicit Guidance (PEG) algorithm (of Space Shuttle heritage) for closed-loop guidance. To accommodate vehicle capabilities and design for future evolutions of SLS, modifications were made to PEG for Block-1 to handle multi-phase burns, provide PEG updated propulsion information, and react to a core stage engine out. In addition, due to the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS) and EUS carrying out Lunar Vicinity and Earth Escape missions, certain enhancements to the Block-1 PEG algorithm are needed to perform Block-1B missions to account for long burn arcs and target translunar and hyperbolic orbits. This paper describes the design and implementation of modifications to the Block-1 PEG algorithm as compared to Space Shuttle. Furthermore, this paper illustrates challenges posed by the Block-1B vehicle and the required PEG enhancements. These improvements make PEG suitable for use on the SLS Block-1B vehicle as part of the Guidance, Navigation, and Control (GN&C) System.
Dai, Chaoqing; Wang, Xiaogang; Zhou, Guoquan; Chen, Junlang
2014-01-01
An image-hiding method based on the optical interference principle and partial phase truncation in the fractional Fourier domain is proposed. The primary image is converted into three phase-only masks (POMs) using an analytical algorithm involving partial phase truncation and a fast random pixel exchange process. A procedure of a fake silhouette for a decryption key is suggested to reinforce the encryption and give a hint of the position of the key. The fractional orders of the FrFT effectively enhance the security of the system. In the decryption process, the POM with false information and the other two POMs are, respectively, placed in the input and fractional Fourier planes to recover the primary image. There are no unintended information disclosures and no iterative computations involved in the proposed method. Simulation results are presented to verify the validity of the proposed approach.
An algorithm for symplectic implicit Taylor-map tracking
Yan, Y.; Channell, P.; Syphers, M.
1992-10-01
An algorithm has been developed for converting an 'order-by-order symplectic' Taylor map that is truncated to an arbitrary order (thus not exactly symplectic) into a Courant-Snyder matrix and a symplectic implicit Taylor map for symplectic tracking. This algorithm is implemented using differential algebras, and it is numerically stable and fast. Thus, lifetime charged-particle tracking for large hadron colliders, such as the Superconducting Super Collider, is now made possible.
Dimensioning of multiservice links taking account of soft blocking
Iversen, Villy Bæk; Stepanov, S.N.; Kostrov, A.V.
2006-01-01
of a multiservice link taking into account the possibility of soft blocking. An approximate algorithm for estimation of main performance measures is constructed. The error of estimation is numerically studied for different types of soft blocking. The optimal procedure of dimensioning is suggested....
Impact of degree truncation on the spread of a contagious process on networks.
Harling, Guy; Onnela, Jukka-Pekka
2018-03-01
Understanding how person-to-person contagious processes spread through a population requires accurate information on connections between population members. However, such connectivity data, when collected via interview, is often incomplete due to partial recall, respondent fatigue or study design, e.g., fixed choice designs (FCD) truncate out-degree by limiting the number of contacts each respondent can report. Past research has shown how FCD truncation affects network properties, but its implications for predicted speed and size of spreading processes remain largely unexplored. To study the impact of degree truncation on predictions of spreading process outcomes, we generated collections of synthetic networks containing specific properties (degree distribution, degree-assortativity, clustering), and also used empirical social network data from 75 villages in Karnataka, India. We simulated FCD using various truncation thresholds and ran a susceptible-infectious-recovered (SIR) process on each network. We found that spreading processes propagated on truncated networks resulted in slower and smaller epidemics, with a sudden decrease in prediction accuracy at a level of truncation that varied by network type. Our results have implications beyond FCD to truncation due to any limited sampling from a larger network. We conclude that knowledge of network structure is important for understanding the accuracy of predictions of process spread on degree truncated networks.
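The effect described above can be illustrated with a small sketch: truncating each respondent's reported out-degree can only remove edges, so with per-contact transmission probability 1 the outbreak reduces to reachability and the truncated network can never yield a larger epidemic (function and graph names below are illustrative, not from the paper):

```python
import random

def truncate_out_degree(contacts, k, rng):
    """Simulate a fixed-choice design: each respondent reports at most k contacts."""
    return {v: rng.sample(nbrs, k) if len(nbrs) > k else list(nbrs)
            for v, nbrs in contacts.items()}

def final_size(contacts, seed, beta=1.0, rng=None):
    """Discrete-time SIR outbreak size; with beta=1 this is reachability from the seed."""
    rng = rng or random.Random(0)
    infected, recovered = {seed}, set()
    while infected:
        new = set()
        for v in infected:
            for u in contacts.get(v, []):
                if u not in infected and u not in recovered and rng.random() < beta:
                    new.add(u)
        recovered |= infected        # everyone infected this step recovers next step
        infected = new
    return len(recovered)
```

With beta=1, `final_size` on any truncated network is bounded above by its value on the full network, consistent with the slower and smaller epidemics reported above.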
Estimation of Panel Data Regression Models with Two-Sided Censoring or Truncation
Alan, Sule; Honore, Bo E.; Hu, Luojia
2014-01-01
This paper constructs estimators for panel data regression models with individual specific heterogeneity and two-sided censoring and truncation. Following Powell (1986) the estimation strategy is based on moment conditions constructed from re-censored or re-truncated residuals. While these moment...
Inference for shared-frailty survival models with left-truncated data
van den Berg, G.J.; Drepper, B.
2016-01-01
Shared-frailty survival models specify that systematic unobserved determinants of duration outcomes are identical within groups of individuals. We consider random-effects likelihood-based statistical inference if the duration data are subject to left-truncation. Such inference with left-truncated
On truncated Taylor series and the position of their spurious zeros
Christiansen, Søren; Madsen, Per A.
2006-01-01
A truncated Taylor series, or a Taylor polynomial, which may appear when treating the motion of gravity water waves, is obtained by truncating an infinite Taylor series for a complex, analytical function. For such a polynomial the position of the complex zeros is considered in case the Taylor...
A Lynden-Bell integral estimator for the tail index of right-truncated ...
By means of a Lynden-Bell integral with deterministic threshold, Worms and Worms [A Lynden-Bell integral estimator for extremes of randomly truncated data. Statist. Probab. Lett. 2016; 109: 106-117] recently introduced an asymptotically normal estimator of the tail index for randomly right-truncated Pareto-type data.
Resonant Excitation of a Truncated Metamaterial Cylindrical Shell by a Thin Wire Monopole
Kim, Oleksiy S.; Erentok, Aycan; Breinbjerg, Olav
2009-01-01
A truncated metamaterial cylindrical shell excited by a thin wire monopole is investigated using the integral equation technique as well as the finite element method. Simulations reveal a strong field singularity at the edge of the truncated cylindrical shell, which critically affects the matching...
Immature truncated O-glycophenotype of cancer directly induces oncogenic features
Radhakrishnan, Prakash; Dabelsteen, Sally; Madsen, Frey Brus
2014-01-01
Aberrant expression of immature truncated O-glycans is a characteristic feature observed on virtually all epithelial cancer cells, and a very high frequency is observed in early epithelial premalignant lesions that precede the development of adenocarcinomas. Expression of the truncated O-glycan s...
Bounded real and positive real balanced truncation using Σ-normalised coprime factors
Trentelman, H.L.
2009-01-01
In this article, we will extend the method of balanced truncation using normalised right coprime factors of the system transfer matrix to balanced truncation with preservation of half line dissipativity. Special cases are preservation of positive realness and bounded realness. We consider a half
Space Launch Systems Block 1B Preliminary Navigation System Design
Oliver, T. Emerson; Park, Thomas; Anzalone, Evan; Smith, Austin; Strickland, Dennis; Patrick, Sean
2018-01-01
NASA is currently building the Space Launch Systems (SLS) Block 1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. In parallel, NASA is also designing the Block 1B launch vehicle. The Block 1B vehicle is an evolution of the Block 1 vehicle and extends the capability of the NASA launch vehicle. This evolution replaces the Interim Cryogenic Propulsive Stage (ICPS) with the Exploration Upper Stage (EUS). As the vehicle evolves to provide greater lift capability, increased robustness for manned missions, and the capability to execute more demanding missions, so must the SLS Integrated Navigation System evolve to support those missions. This paper describes the preliminary navigation system design for the SLS Block 1B vehicle. The evolution of the navigation hardware and algorithms from an inertial-only navigation system for Block 1 ascent flight to a tightly coupled GPS-aided inertial navigation system for Block 1B is described. The Block 1 GN&C system has been designed to meet a LEO insertion target with a specified accuracy. The Block 1B vehicle navigation system is designed to support the Block 1 LEO target accuracy as well as trans-lunar or trans-planetary injection accuracy. Additionally, the Block 1B vehicle is designed to support human exploration and thus is designed to minimize the probability of Loss of Crew (LOC) through high-quality inertial instruments and robust algorithm design, including Fault Detection, Isolation, and Recovery (FDIR) logic.
Blocked Randomization with Randomly Selected Block Sizes
Jimmy Efird
2010-12-01
When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.
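As a concrete illustration of the scheme described above, here is a minimal permuted-block allocator with randomly selected block sizes (a sketch, not the paper's code; the arm labels and candidate block sizes are assumptions):

```python
import random

def blocked_randomization(n, arms=("T", "C"), block_sizes=(2, 4, 6), seed=None):
    """Allocate n participants using permuted blocks of randomly chosen size.

    Each block contains an equal number of each arm, so the allocation is
    balanced after every completed block, while the randomly varying block
    size keeps the sequence hard to predict.
    """
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n:
        size = rng.choice(block_sizes)      # block size: a multiple of len(arms)
        block = list(arms) * (size // len(arms))
        rng.shuffle(block)                  # permute assignments within the block
        schedule.extend(block)
    return schedule[:n]
```

Because only the final, possibly cut-off block can be unbalanced, the treatment/control imbalance never exceeds half the largest block size.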
31 CFR 595.301 - Blocked account; blocked property.
2010-07-01
... (Continued) OFFICE OF FOREIGN ASSETS CONTROL, DEPARTMENT OF THE TREASURY TERRORISM SANCTIONS REGULATIONS General Definitions § 595.301 Blocked account; blocked property. The terms blocked account and blocked...
Repair for scattering expansion truncation errors in transport calculations
Emmett, M.B.; Childs, R.L.; Rhoades, W.A.
1980-01-01
Legendre expansion of angular scattering distributions is usually limited to P3 in practical transport calculations. This truncation often results in non-trivial errors, especially alternating negative and positive lateral scattering peaks. The effect is especially prominent in forward-peaked situations such as the within-group component of the Compton scattering of gammas. Increasing the expansion to P7 often makes the peaks larger and narrower. Ward demonstrated an accurate repair, but his method requires special cross section sets and codes. The DOT IV code provides fully-compatible, but heuristic, repair of the erroneous scattering. An analytical Klein-Nishina estimator, newly available in the MORSE code, allows a test of this method. In the MORSE calculation, particle scattering histories are calculated in the usual way, with scoring by an estimator routine at each collision site. Results for both the conventional P3 estimator and the analytical estimator were obtained. In the DOT calculation, the source moments are expanded into the directional representation at each iteration. Optionally, a sorting procedure removes all negatives, and removes enough small positive values to restore particle conservation. The effect of this is to replace the alternating positive and negative values with positive values of plausible magnitude. The accuracy of those values is examined herein.
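The sorting/conservation step described for DOT IV can be sketched as follows (a heuristic illustration that zeroes negative lobes, drops small positives while the total stays above the original, and rescales to conserve particles; the production procedure may differ in detail):

```python
def repair_scattering(values):
    """Repair a truncated-expansion angular distribution: remove negative
    lobes, zero the smallest positive values without undershooting the
    original particle total, then rescale for exact conservation."""
    total = sum(values)
    repaired = [max(v, 0.0) for v in values]              # drop negative lobes
    for i in sorted(range(len(repaired)), key=lambda j: repaired[j]):
        if repaired[i] > 0.0 and sum(repaired) - repaired[i] >= total:
            repaired[i] = 0.0                             # remove small positives
    scale = total / sum(repaired)                         # restore conservation
    return [v * scale for v in repaired]
```

The result replaces the alternating positive and negative values with non-negative values whose sum equals the original total.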
Influence of miscut on crystal truncation rod scattering
Munkholm, A.; Brennan, S.
1999-01-01
X-rays can be used to measure the roughness of a surface by the study of crystal truncation rod scattering. It is shown that for a simple cubic lattice the presence of a miscut surface with a regular step array has no effect on the scattered intensity of a single rod, and that a distribution of terrace widths on the surface has the same effect as adding roughness to the surface. For a perfect crystal without miscut, the scattered intensity is the sum of the intensity from all the rods with the same in-plane momentum transfer. For all real crystals, the scattered intensity is better described as that from a single rod. It is shown that data-collection strategies must correctly account for the sample miscut or there is a potential for improperly measuring the rod intensity. This can result in an asymmetry in the rod intensity above and below the Bragg peak, which can be misinterpreted as being due to a relaxation of the surface. The calculations presented here are compared with data for silicon (001) wafers with 0.1° and 4° miscuts.
Weakly nonlinear sloshing in a truncated circular conical tank
Gavrilyuk, I P; Hermann, M; Lukovsky, I A; Solodun, O V; Timokha, A N
2013-01-01
Sloshing of an ideal incompressible liquid in a rigid truncated (tapered) conical tank is considered when the tank performs small-magnitude oscillatory motions with the forcing frequency close to the lowest natural sloshing frequency. The multimodal method, the non-conformal mapping technique and the Moiseev type asymptotics are employed to derive a finite-dimensional system of weakly nonlinear ordinary differential (modal) equations. This modal system is a generalization of that by Gavrilyuk et al 2005 Fluid Dyn. Res. 37 399–429. Using the derived modal equations, we classify the resonant steady-state wave regimes occurring due to horizontal harmonic tank excitations. The frequency ranges are detected where the 'planar' and/or 'swirling' steady-state sloshing is stable, and a range is established in which no steady-state wave regime is stable and irregular (chaotic) liquid motions occur. The results on the frequency ranges are qualitatively supported by experiments by Matta E 2002 PhD Thesis Politecnico di Torino, Torino.
Adaptive designs based on the truncated product method
Neuhäuser, Markus
2005-09-01
Background: Adaptive designs are becoming increasingly important in clinical research. One approach subdivides the study into several (two or more) stages and combines the p-values of the different stages using Fisher's combination test. Methods: Alternatively to Fisher's test, the recently proposed truncated product method (TPM) can be applied to combine the p-values. The TPM uses the product of only those p-values that do not exceed some fixed cut-off value. Here, these two competing analyses are compared. Results: When an early termination due to insufficient effects is not appropriate, such as in dose-response analyses, the probability to stop the trial early with the rejection of the null hypothesis is increased when the TPM is applied. Therefore, the expected total sample size is decreased. This decrease in the sample size is not connected with a loss in power. The TPM turns out to be less advantageous when an early termination of the study due to insufficient effects is possible. This is due to a decrease of the probability to stop the trial early. Conclusion: It is recommended to apply the TPM rather than Fisher's combination test whenever an early termination due to insufficient effects is not suitable within the adaptive design.
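The TPM statistic itself is simple to state: it is the product of the stage p-values that do not exceed the cut-off. A minimal sketch follows (the null distribution needed to convert this statistic into a combined p-value is omitted here for brevity):

```python
def truncated_product_stat(pvalues, tau=0.05):
    """Truncated product method statistic: the product of only those
    p-values that do not exceed the cut-off tau. Returns 1.0 when no
    p-value falls below tau (nothing to combine)."""
    w = 1.0
    for p in pvalues:
        if p <= tau:
            w *= p
    return w
```

With tau = 1 this reduces to the plain product used by Fisher's combination test, which is why the two procedures are directly comparable.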
Consistent Kaluza-Klein truncations via exceptional field theory
Hohm, Olaf [Center for Theoretical Physics, Massachusetts Institute of Technology,Cambridge, MA 02139 (United States); Samtleben, Henning [Université de Lyon, Laboratoire de Physique, UMR 5672, CNRS,École Normale Supérieure de Lyon, 46, allée d’Italie, F-69364 Lyon cedex 07 (France)
2015-01-26
We present the generalized Scherk-Schwarz reduction ansatz for the full supersymmetric exceptional field theory in terms of group-valued twist matrices subject to consistency equations. With this ansatz the field equations precisely reduce to those of lower-dimensional gauged supergravity parametrized by an embedding tensor. We explicitly construct a family of twist matrices as solutions of the consistency equations. They induce gauged supergravities with gauge groups SO(p,q) and CSO(p,q,r). Geometrically, they describe compactifications on internal spaces given by spheres and (warped) hyperboloids H^{p,q}, thus extending the applicability of generalized Scherk-Schwarz reductions beyond homogeneous spaces. Together with the dictionary that relates exceptional field theory to D=11 and IIB supergravity, respectively, the construction defines an entire new family of consistent truncations of the original theories. These include not only compactifications on spheres of different dimensions (such as AdS_5 × S^5), but also various hyperboloid compactifications giving rise to a higher-dimensional embedding of supergravities with non-compact and non-semisimple gauge groups.
Proteolysis of truncated hemolysin A yields a stable dimerization interface
Novak, Walter R.P.; Bhattacharyya, Basudeb; Grilley, Daniel P.; Weaver, Todd M. (Wabash); (UW)
2017-02-21
Wild-type and variant forms of HpmA265 (truncated hemolysin A) from
Quantum Computations: Fundamentals and Algorithms
Duplij, S.A.; Shapoval, I.I.
2007-01-01
Basic concepts of quantum information theory, the principles of quantum computation, and the possibility of creating on this basis a device unique in computational power and operating principle, named the quantum computer, are discussed. The main blocks of quantum logic and schemes for implementing quantum computations are presented, as well as some effective quantum algorithms known today that realize the advantages of quantum computation over classical computation. Among them a special place is taken by Shor's algorithm for number factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on quantum computer stability, and methods of quantum error correction are described.
Scalable inference for stochastic block models
Peng, Chengbin
2017-12-08
Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities is illustrated consequently in comparison with a popular modularity maximization algorithm.
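To make the model concrete, the following sketch scores a candidate community labelling under a Bernoulli stochastic block model, estimating each block-pair density by maximum likelihood from the graph itself (an illustration of the likelihood being maximized, not the authors' multi-stage or message-passing algorithm):

```python
from itertools import combinations
from math import log

def sbm_log_likelihood(edges, labels):
    """Log-likelihood of a node labelling under a Bernoulli stochastic
    block model, with each block-pair density estimated from the graph."""
    edge_set = {frozenset(e) for e in edges}
    pairs, hits = {}, {}
    for u, v in combinations(sorted(labels), 2):
        key = tuple(sorted((labels[u], labels[v])))
        pairs[key] = pairs.get(key, 0) + 1
        hits[key] = hits.get(key, 0) + (frozenset((u, v)) in edge_set)
    ll = 0.0
    for key, n in pairs.items():
        m = hits[key]
        p = m / n                       # MLE of the block-pair density
        if 0.0 < p < 1.0:               # p in {0, 1} contributes zero terms
            ll += m * log(p) + (n - m) * log(1 - p)
    return ll
```

A labelling that matches the true community structure scores at least as high as a scrambled one, which is the signal inference procedures exploit.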
Miolane, Charlotte Vikkelsø
ensure that no attack violates the security bounds specified by generic attacks, namely exhaustive key search and table lookup attacks. This thesis contains a general introduction to cryptography with focus on block ciphers and important block cipher designs, in particular the Advanced Encryption Standard (AES)...... on small scale variants of AES. In the final part of the thesis we present a new block cipher proposal, Present, and examine its security against algebraic and differential cryptanalysis in particular....
3D Reasoning from Blocks to Stability.
Zhaoyin Jia; Gallagher, Andrew C; Saxena, Ashutosh; Chen, Tsuhan
2015-05-01
Objects occupy physical space and obey physical laws. To truly understand a scene, we must reason about the space that objects in it occupy, and how each object is stably supported by others. In other words, we seek to understand which objects would, if moved, cause other objects to fall. This 3D volumetric reasoning is important for many scene understanding tasks, ranging from segmentation of objects to perception of a rich 3D, physically well-founded interpretation of the scene. In this paper, we propose a new algorithm to parse a single RGB-D image with 3D block units while jointly reasoning about the segments, volumes, supporting relationships, and object stability. Our algorithm is based on the intuition that a good 3D representation of the scene is one that fits the depth data well, and is a stable, self-supporting arrangement of objects (i.e., one that does not topple). We design an energy function for representing the quality of the block representation based on these properties. Our algorithm fits 3D blocks to the depth values corresponding to image segments, and iteratively optimizes the energy function. Our proposed algorithm is the first to consider stability of objects in complex arrangements for reasoning about the underlying structure of the scene. Experimental results show that our stability-reasoning framework improves RGB-D segmentation and scene volumetric representation.
Chen, Chi-Jim; Pai, Tun-Wen; Cheng, Mox
2015-01-01
A sweeping fingerprint sensor converts fingerprints on a row-by-row basis through image reconstruction techniques. However, a built fingerprint image might appear truncated and distorted when the finger is swept across the sensor at a non-linear speed. If truncated fingerprint images were enrolled as reference targets and collected by any automated fingerprint identification system (AFIS), successful prediction rates for fingerprint matching applications would decrease significantly. In this paper, a novel and effective methodology with low computational time complexity was developed for detecting truncated fingerprints in real time. Several filtering rules were implemented to validate the existence of truncated fingerprints. In addition, a machine learning method, the support vector machine (SVM), based on the principle of structural risk minimization, was applied to reject pseudo-truncated fingerprints containing characteristics similar to truncated ones. The experimental results have shown that an accuracy rate of 90.7% was achieved by successfully identifying truncated fingerprint images from testing images before AFIS enrollment procedures. The proposed effective and efficient methodology can be extensively applied to all existing fingerprint matching systems as a preliminary quality control prior to construction of fingerprint templates. PMID:25835186
Improved Motion Estimation Using Early Zero-Block Detection
Y. Lin
2008-07-01
We incorporate the early zero-block detection technique into the UMHexagonS algorithm, which has already been adopted in the H.264/AVC JM reference software, to speed up the motion estimation process. A nearly sufficient condition is derived for early zero-block detection. Although the conventional early zero-block detection method can achieve significant improvement in computation reduction, the PSNR loss, to whatever extent, is not negligible, especially for high quantization parameter (QP) or low bit-rate coding. This paper modifies the UMHexagonS algorithm with the early zero-block detection technique to improve its coding performance. The experimental results reveal that the improved UMHexagonS algorithm greatly reduces computation while maintaining very high coding efficiency.
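The idea behind an early zero-block test can be sketched as follows: a residual block whose energy is small relative to the quantization step will quantize to all zeros, so its transform and quantization can be skipped. The threshold form and `scale` constant below are illustrative assumptions, not the nearly sufficient condition derived in the paper:

```python
def is_early_zero_block(residual, qp, scale=0.75):
    """Heuristic early zero-block test: if the sum of absolute differences
    (SAD) of the residual block falls below a threshold that grows with the
    quantizer step, the block is predicted to quantize to all zeros and the
    transform/quantization stage can be skipped."""
    qstep = 0.625 * 2 ** (qp / 6.0)     # H.264-style step doubling every 6 QP
    sad = sum(abs(x) for row in residual for x in row)
    return sad < scale * qstep * len(residual) * len(residual[0])
```

The paper's observation that PSNR loss grows at high QP corresponds to the threshold growing with `qstep`: more blocks are skipped, and occasionally a block that would not have quantized to zero is discarded.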
Related Drupal Nodes Block
Van der Vegt, Wim
2010-01-01
This module exposes a block that uses Latent Semantic Analysis (LSA) internally to suggest three nodes that are relevant to the node a user is viewing. This module performs three tasks. 1) It periodically indexes a Drupal site and generates an LSA term-document matrix.
Dickson, Thomas
2002-01-01
The article opens as follows: alongside London's established design fair '100% Design', an underground scene of design exhibitions has grown up. The dominant and best-known initiative is Designers Block, which this year exhibited at two venues in the city. Designers Block is a more informal exhibition forum...
The mixing evolutionary algorithm: independent selection and allocation of trials
C.H.M. van Kemenade
1997-01-01
When using an evolutionary algorithm to solve a problem involving building blocks, we have to grow the building blocks and then mix these building blocks to obtain the (optimal) solution. Finding a good balance between the growing and the mixing process is a prerequisite to get a reliable
Truncation of CPC solar collectors and its effect on energy collection
Carvalho, M. J.; Collares-Pereira, M.; Gordon, J. M.; Rabl, A.
1985-01-01
Analytic expressions are derived for the angular acceptance function of two-dimensional compound parabolic concentrator solar collectors (CPC's) of arbitrary degree of truncation. Taking into account the effect of truncation on both optical and thermal losses in real collectors, the increase in monthly and yearly collectible energy is also evaluated. Prior analyses that have ignored the correct behavior of the angular acceptance function at large angles for truncated collectors are shown to be in error by 0-2 percent in calculations of yearly collectible energy for stationary collectors.
Yan, Y.T.
1996-11-01
A brief review of the Zlib development is given. Emphasized is the Zlib nerve system which uses the One-Step Index Pointers (OSIPs) for efficient computation and flexible use of the Truncated Power Series Algebra (TPSA). Also emphasized is the treatment of parameterized maps with an object-oriented language (e.g. C++). A parameterized map can be a Vector Power Series (Vps) or a Lie generator represented by an exponent of a Truncated Power Series (Tps) of which each coefficient is an object of truncated power series
A Multistep Extending Truncation Method towards Model Construction of Infinite-State Markov Chains
Kemin Wang
2014-01-01
The model checking of infinite-state continuous-time Markov chains (CTMCs) inevitably encounters the state explosion problem when constructing the CTMC model; our method is to obtain a truncated model of the infinite one. To obtain a truncated model sufficient for the model checking of Continuous Stochastic Logic based system properties, we propose a multistep extending advanced truncation method towards model construction of CTMCs and implement it in the INFAMY model checker. The experimental results show that our method is effective.
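The extension loop at the heart of such a method can be sketched generically: keep enlarging the truncation point until the retained probability mass meets the required tolerance. The doubling schedule and the M/M/1 example below are illustrative assumptions, not the INFAMY implementation:

```python
def extend_truncation(prob_mass, eps=1e-6, start=8, grow=2):
    """Multistep truncation extension: enlarge the truncation point until
    the mass retained by the first n states reaches 1 - eps.
    `prob_mass(n)` is a model-specific function returning that mass."""
    n = start
    while prob_mass(n) < 1.0 - eps:
        n *= grow
    return n

# Example model: the stationary distribution of an M/M/1 queue with
# utilisation rho is pi_k = (1 - rho) * rho**k, so the first n states
# carry mass 1 - rho**n.
def mm1_mass(n, rho=0.9):
    return 1.0 - rho ** n
```

Checking a property on the truncated model then gives a bound whose error is at most the discarded tail mass `eps`.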
Tosi, E.; Ruti, P.; Tibaldi, S.; D'Andrea, F.
1994-01-01
Tibaldi and Molteni (1990, hereafter referred to as TM) had previously investigated operational blocking predictability by the ECMWF model and the possible relationships between model systematic error and blocking in the winter season of the Northern Hemisphere, using seven years of ECMWF operational archives of analyses and day 1 to 10 forecasts. They showed that fewer blocking episodes than in the real atmosphere were generally simulated by the model, and that this deficiency increased with increasing forecast time. As a consequence of this, a major contribution to the systematic error in the winter season was shown to derive from the inability of the model to properly forecast blocking. In this study, the analysis performed in TM for the first seven winter seasons of the ECMWF operational model is extended to the subsequent five winters, during which model development, reflecting both resolution increases and parametrisation modifications, continued unabated. In addition the objective blocking index developed by TM has been applied to the observed data to study the natural low frequency variability of blocking. The ability to simulate blocking of some climate models has also been tested
Fox, Christopher; Romeijn, H. Edwin; Dempsey, James F.
2006-01-01
We present work on combining three algorithms to improve ray-tracing efficiency in radiation therapy dose computation. The three algorithms include: an improved point-in-polygon algorithm, an incremental voxel ray-tracing algorithm, and stereographic projection of beamlets for voxel truncation. The point-in-polygon and incremental voxel ray-tracing algorithms have been used in computer graphics and nuclear medicine applications, while the stereographic projection algorithm was developed by our group. These algorithms demonstrate significant improvements over the current standard algorithms in peer-reviewed literature, i.e., the polygon and voxel ray-tracing algorithms of Siddon for voxel classification (point-in-polygon testing) and dose computation, respectively, and radius testing for voxel truncation. The presented polygon ray-tracing technique was tested on 10 intensity-modulated radiation therapy (IMRT) treatment planning cases that required the classification of between 0.58 and 2.0 million voxels on a 2.5 mm isotropic dose grid into 1-4 targets and 5-14 structures represented as extruded polygons (a.k.a. Siddon prisms). Incremental voxel ray tracing and voxel truncation employing virtual stereographic projection were tested on the same IMRT treatment planning cases, where voxel dose was required for 230-2400 beamlets using a finite-size pencil-beam algorithm. Between a 100- and 360-fold CPU time improvement over Siddon's method was observed for the polygon ray-tracing algorithm to perform classification of voxels for target and structure membership. Between a 2.6- and 3.1-fold reduction in CPU time over current algorithms was found for the implementation of incremental ray tracing. Additionally, voxel truncation via stereographic projection was observed to be 11-25 times faster than the radial-testing beamlet-extent approach and was further improved 1.7-2.0 fold through point classification using the method of translation over the cross-product technique.
James, Andrew J A; Konik, Robert M; Lecheminant, Philippe; Robinson, Neil J; Tsvelik, Alexei M
2018-02-26
We review two important non-perturbative approaches for extracting the physics of low-dimensional strongly correlated quantum systems. Firstly, we start by providing a comprehensive review of non-Abelian bosonization. This includes an introduction to the basic elements of conformal field theory as applied to systems with a current algebra, and we orient the reader by presenting a number of applications of non-Abelian bosonization to models with large symmetries. We then tie this technique into recent advances in the ability of cold atomic systems to realize complex symmetries. Secondly, we discuss truncated spectrum methods for the numerical study of systems in one and two dimensions. For one-dimensional systems we provide the reader with considerable insight into the methodology by reviewing canonical applications of the technique to the Ising model (and its variants) and the sine-Gordon model. Following this we review recent work on the development of renormalization groups, both numerical and analytical, that alleviate the effects of truncating the spectrum. Using these technologies, we consider a number of applications to one-dimensional systems: properties of carbon nanotubes, quenches in the Lieb-Liniger model, 1 + 1D quantum chromodynamics, as well as Landau-Ginzburg theories. In the final part we move our attention to consider truncated spectrum methods applied to two-dimensional systems. This involves combining truncated spectrum methods with matrix product state algorithms. We describe applications of this method to two-dimensional systems of free fermions and the quantum Ising model, including their non-equilibrium dynamics.
A generalized right truncated bivariate Poisson regression model with applications to health data.
Islam, M Ataharul; Chowdhury, Rafiqul I
2017-01-01
A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over- or under-dispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using a marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on the number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute, and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model.
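The core of right truncation is renormalizing the probability mass over the restricted support. A univariate sketch (the paper's model is bivariate and includes regression covariates; this only illustrates the truncation step):

```python
import math

def truncated_poisson_pmf(k, lam, r):
    """PMF of a Poisson(lam) variable right-truncated at r,
    i.e. renormalized over the support {0, 1, ..., r}."""
    if not 0 <= k <= r:
        return 0.0
    # untruncated Poisson weights on the retained support
    weights = [math.exp(-lam) * lam ** j / math.factorial(j)
               for j in range(r + 1)]
    return weights[k] / sum(weights)
```

In the regression setting, lam would itself be a function of covariates (e.g. exp of a linear predictor), but the renormalization over {0, ..., r} is the same.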
Propagation of a general-type beam through a truncated fractional Fourier transform optical system.
Zhao, Chengliang; Cai, Yangjian
2010-03-01
Paraxial propagation of a general-type beam through a truncated fractional Fourier transform (FRT) optical system is investigated. Analytical formulas for the electric field and effective beam width of a general-type beam in the FRT plane are derived based on the Collins formula. Our formulas can be used to study the propagation of a variety of laser beams--such as Gaussian, cos-Gaussian, cosh-Gaussian, sine-Gaussian, sinh-Gaussian, flat-topped, Hermite-cosh-Gaussian, Hermite-sine-Gaussian, higher-order annular Gaussian, Hermite-sinh-Gaussian and Hermite-cos-Gaussian beams--through a FRT optical system with or without truncation. The propagation properties of a Hermite-cos-Gaussian beam passing through a rectangularly truncated FRT optical system are studied as a numerical example. Our results clearly show that the truncated FRT optical system provides a convenient way for laser beam shaping.
truncSP: An R Package for Estimation of Semi-Parametric Truncated Linear Regression Models
Maria Karlsson
2014-05-01
Problems with truncated data occur in many areas, complicating estimation and inference. Regarding linear regression models, the ordinary least squares estimator is inconsistent and biased for these types of data and is therefore unsuitable for use. Alternative estimators, designed for the estimation of truncated regression models, have been developed. This paper presents the R package truncSP. The package contains functions for the estimation of semi-parametric truncated linear regression models using three different estimators: the symmetrically trimmed least squares, quadratic mode, and left truncated estimators, all of which have been shown to have good asymptotic and finite sample properties. The package also provides functions for the analysis of the estimated models. Data from the environmental sciences are used to illustrate the functions in the package.
A block variant of the GMRES method on massively parallel processors
Li, Guangye [Cray Research, Inc., Eagan, MN (United States)]
1996-12-31
This paper presents a block variant of the GMRES method for solving general unsymmetric linear systems. This algorithm generates a transformed Hessenberg matrix by solely using block matrix operations and block data communications. It is shown that this algorithm with block size s, denoted by BVGMRES(s,m), is theoretically equivalent to the GMRES(s*m) method. The numerical results show that this algorithm can be more efficient than the standard GMRES method on a cache based single CPU computer with optimized BLAS kernels. Furthermore, the gain in efficiency is more significant on MPPs due to both efficient block operations and efficient block data communications. Our numerical results also show that in comparison to the standard GMRES method, the more PEs that are used on an MPP, the more efficient the BVGMRES(s,m) algorithm is.
Randomized Block Cubic Newton Method
Doikov, Nikita; Richtarik, Peter
2018-01-01
We study the problem of minimizing the sum of three convex functions: a differentiable, twice-differentiable and a non-smooth term in a high dimensional setting. To this effect we propose and analyze a randomized block cubic Newton (RBCN) method, which in each iteration builds a model of the objective function formed as the sum of the natural models of its three components: a linear model with a quadratic regularizer for the differentiable term, a quadratic model with a cubic regularizer for the twice differentiable term, and perfect (proximal) model for the nonsmooth term. Our method in each iteration minimizes the model over a random subset of blocks of the search variable. RBCN is the first algorithm with these properties, generalizing several existing methods, matching the best known bounds in all special cases. We establish ${\\cal O}(1/\\epsilon)$, ${\\cal O}(1/\\sqrt{\\epsilon})$ and ${\\cal O}(\\log (1/\\epsilon))$ rates under different assumptions on the component functions. Lastly, we show numerically that our method outperforms the state-of-the-art on a variety of machine learning problems, including cubically regularized least-squares, logistic regression with constraints, and Poisson regression.
A Novel Image Encryption Algorithm Based on DNA Subsequence Operation
Qiang Zhang
2012-01-01
We present a novel image encryption algorithm based on DNA subsequence operations. Unlike traditional DNA encryption methods, our algorithm does not use complex biological operations; it simply combines the idea of DNA subsequence operations (such as elongation, truncation, and deletion) with the logistic chaotic map to scramble the location and value of pixel points in the image. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a large secret key space and strong sensitivity to the secret key, and is able to resist exhaustive and statistical attacks.
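The logistic-map scrambling stage described in this record can be sketched as follows; the DNA subsequence operations are omitted, and the seed and control parameter shown are arbitrary stand-ins for key material:

```python
def logistic_permutation(n, x0=0.3567, r=3.9999):
    """Derive a permutation of range(n) from a logistic-map orbit
    x_{k+1} = r * x_k * (1 - x_k); sorting the orbit values yields
    a key-dependent shuffle of the pixel indices."""
    x = x0
    orbit = []
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(x)
    # argsort of the chaotic sequence gives the scrambling order
    return sorted(range(n), key=lambda i: orbit[i])

def scramble(pixels, perm):
    """Reorder pixel values according to the permutation."""
    return [pixels[p] for p in perm]

def unscramble(pixels, perm):
    """Invert the permutation to recover the original pixel order."""
    out = [0] * len(pixels)
    for dst, src in enumerate(perm):
        out[src] = pixels[dst]
    return out
```

Because the logistic map is sensitive to its seed, a tiny change in x0 produces a completely different permutation, which is the source of the key sensitivity the abstract claims.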
CHENG Shi-lun; YANG Zhen
2008-01-01
To maximize throughput and to satisfy users' requirements in cognitive radios, a cross-layer optimization problem combining adaptive modulation and power control at the physical layer and truncated automatic repeat request at the medium access control layer is proposed. Simulation results show the combination of power control, adaptive modulation, and truncated automatic repeat request can regulate transmitter powers and increase the total throughput effectively.
Truncation scheme of time-dependent density-matrix approach II
Tohyama, Mitsuru [Kyorin University School of Medicine, Mitaka, Tokyo (Japan); Schuck, Peter [Institut de Physique Nucleaire, IN2P3-CNRS, Universite Paris-Sud, Orsay (France); Laboratoire de Physique et de Modelisation des Milieux Condenses, CNRS et Universite Joseph Fourier, Grenoble (France)
2017-09-15
A truncation scheme of the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy for reduced density matrices, where a three-body density matrix is approximated by two-body density matrices, is improved to take into account a normalization effect. The truncation scheme is tested for the Lipkin model. It is shown that the obtained results are in good agreement with the exact solutions. (orig.)
Reduction of variable-truncation artifacts from beam occlusion during in situ x-ray tomography
Borg, Leise; Jørgensen, Jakob S.; Frikel, Jürgen; Sporring, Jon
2017-12-01
Many in situ x-ray tomography studies require experimental rigs which may partially occlude the beam and cause parts of the projection data to be missing. In a study of fluid flow in porous chalk using a percolation cell with four metal bars drastic streak artifacts arise in the filtered backprojection (FBP) reconstruction at certain orientations. Projections with non-trivial variable truncation caused by the metal bars are the source of these variable-truncation artifacts. To understand the artifacts a mathematical model of variable-truncation data as a function of metal bar radius and distance to sample is derived and verified numerically and with experimental data. The model accurately describes the arising variable-truncation artifacts across simulated variations of the experimental setup. Three variable-truncation artifact-reduction methods are proposed, all aimed at addressing sinogram discontinuities that are shown to be the source of the streaks. The ‘reduction to limited angle’ (RLA) method simply keeps only non-truncated projections; the ‘detector-directed smoothing’ (DDS) method smooths the discontinuities; while the ‘reflexive boundary condition’ (RBC) method enforces a zero derivative at the discontinuities. Experimental results using both simulated and real data show that the proposed methods effectively reduce variable-truncation artifacts. The RBC method is found to provide the best artifact reduction and preservation of image features using both visual and quantitative assessment. The analysis and artifact-reduction methods are designed in context of FBP reconstruction motivated by computational efficiency practical for large, real synchrotron data. While a specific variable-truncation case is considered, the proposed methods can be applied to general data cut-offs arising in different in situ x-ray tomography experiments.
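In one dimension, the reflexive boundary condition amounts to holding the last measured value constant across the truncated region, so the projection has zero derivative at the cut instead of a hard jump to zero. A toy sketch (the paper applies this per projection across full sinograms):

```python
def rbc_pad(row, cut):
    """Fill the truncated tail of a projection row by holding the last
    measured value constant, enforcing a zero derivative (reflexive
    boundary condition) at the cut instead of a hard jump to zero."""
    out = list(row)
    for i in range(cut, len(out)):
        out[i] = row[cut - 1]
    return out
```

Removing the step discontinuity is what suppresses the streaks: FBP's ramp filter amplifies jumps in the sinogram, so a flat continuation produces far milder artifacts than a truncation to zero.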
Workman, R. L.; Tiator, L.; Wunderlich, Y.; Doring, M.; Haberzettl, H.
2017-01-01
Here, we compare the methods of amplitude reconstruction, for a complete experiment and a truncated partial-wave analysis, applied to the photoproduction of pseudoscalar mesons. The approach is pedagogical, showing in detail how the amplitude reconstruction (observables measured at a single energy and angle) is related to a truncated partial-wave analysis (observables measured at a single energy and a number of angles).
Viswanathan, K. K.; Aziz, Z. A.; Javed, Saira; Yaacob, Y. [Universiti Teknologi Malaysia, Johor Bahru (Malaysia); Pullepu, Babuji [S R M University, Chennai (India)
2015-05-15
Free vibration of symmetric angle-ply laminated truncated conical shell is analyzed to determine the effects of frequency parameter and angular frequencies under different boundary condition, ply angles, different material properties and other parameters. The governing equations of motion for truncated conical shell are obtained in terms of displacement functions. The displacement functions are approximated by cubic and quintic splines resulting into a generalized eigenvalue problem. The parametric studies have been made and discussed.
Fatigue evaluation algorithms: Review
Passipoularidis, V.A.; Broendsted, P.
2009-11-15
A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck to model the degradation caused by failure events at ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio and against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)
31 CFR 594.301 - Blocked account; blocked property.
2010-07-01
... (Continued) OFFICE OF FOREIGN ASSETS CONTROL, DEPARTMENT OF THE TREASURY GLOBAL TERRORISM SANCTIONS REGULATIONS General Definitions § 594.301 Blocked account; blocked property. The terms blocked account and...
Tompkins, Gail E.; Camp, Donna J.
1988-01-01
Describes four prewriting techniques that elementary and middle grade students can use to gather and organize ideas for writing, and by so doing, cure writer's block. Techniques discussed are: (1) brainstorming; (2) clustering; (3) freewriting; and (4) cubing.
Block copolymer battery separator
Wong, David; Balsara, Nitash Pervez
2016-04-26
The invention herein described is the use of a block copolymer/homopolymer blend for creating nanoporous materials for transport applications. Specifically, this is demonstrated by using the block copolymer poly(styrene-block-ethylene-block-styrene) (SES) and blending it with homopolymer polystyrene (PS). After blending the polymers, a film is cast, and the film is submerged in tetrahydrofuran, which removes the PS. This creates a nanoporous polymer film, whereby the holes are lined with PS. Control of morphology of the system is achieved by manipulating the amount of PS added and the relative size of the PS added. The porous nature of these films was demonstrated by measuring the ionic conductivity in a traditional battery electrolyte, 1M LiPF6 in EC/DEC (1:1 v/v), using AC impedance spectroscopy and comparing these results to commercially available battery separators.
Stability Analysis of Periodic Systems by Truncated Point Mappings
Guttalu, R. S.; Flashner, H.
1996-01-01
An approach is presented for deriving analytical stability and bifurcation conditions for systems with periodically varying coefficients. The method is based on a point mapping (period-to-period mapping) representation of the system's dynamics. An algorithm is employed to obtain an analytical expression for the point mapping and its dependence on the system's parameters. The algorithm is devised to derive the coefficients of a multinomial expansion of the point mapping up to an arbitrary order in terms of the state variables and of the parameters. Analytical stability and bifurcation conditions are then formulated and expressed as functional relations between the parameters. To demonstrate the application of the method, the parametric stability of Mathieu's equation and of a two-degree-of-freedom system are investigated. The results obtained by the proposed approach are compared to those obtained by perturbation analysis and by direct integration, which we consider to be the "exact solution". It is shown that, unlike perturbation analysis, the proposed method provides a very accurate solution even for large values of the parameters. If an expansion of the point mapping in terms of a small parameter is performed, the method is equivalent to perturbation analysis. Moreover, it is demonstrated that the method can be easily applied to multiple-degree-of-freedom systems using the same framework. This feature is an important advantage since most of the existing analysis methods apply mainly to single-degree-of-freedom systems and their extension to higher dimensions is difficult and computationally cumbersome.
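For comparison, the stability of Mathieu's equation x'' + (a - 2q cos 2t) x = 0 can be checked numerically through its monodromy matrix: integrate two fundamental solutions over one period and test whether the trace has magnitude below 2. This is the standard Floquet criterion (the "direct integration" baseline), not the paper's analytical point-mapping expansion:

```python
import math

def mathieu_monodromy_trace(a, q, steps=2000):
    """Trace of the monodromy matrix of x'' + (a - 2 q cos 2t) x = 0
    over one period pi, computed with fixed-step RK4; the solution is
    linearly stable when |trace| < 2."""
    h = math.pi / steps

    def deriv(t, y):
        x, v = y
        return (v, -(a - 2 * q * math.cos(2 * t)) * x)

    def integrate(y):
        t = 0.0
        for _ in range(steps):
            k1 = deriv(t, y)
            k2 = deriv(t + h/2, (y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
            k3 = deriv(t + h/2, (y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
            k4 = deriv(t + h, (y[0] + h*k3[0], y[1] + h*k3[1]))
            y = (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
                 y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
            t += h
        return y

    col1 = integrate((1.0, 0.0))   # solution with x(0)=1, x'(0)=0
    col2 = integrate((0.0, 1.0))   # solution with x(0)=0, x'(0)=1
    return col1[0] + col2[1]       # trace = x1(pi) + x2'(pi)
```

For q = 0 the equation reduces to a harmonic oscillator with frequency sqrt(a), so the trace collapses to 2 cos(sqrt(a) pi), which gives a cheap sanity check on the integrator.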
Bott, Lewis; Hoffman, Aaron B.; Murphy, Gregory L.
2007-01-01
Many theories of category learning assume that learning is driven by a need to minimize classification error. When there is no classification error, therefore, learning of individual features should be negligible. We tested this hypothesis by conducting three category learning experiments adapted from an associative learning blocking paradigm. Contrary to an error-driven account of learning, participants learned a wide range of information when they learned about categories, and blocking effe...
Andrea Caliciotti
2018-04-01
In this paper, we report data and experiments related to the research article entitled "An adaptive truncation criterion, for linesearch-based truncated Newton methods in large scale nonconvex optimization" by Caliciotti et al. [1]. In particular, in Caliciotti et al. [1], large scale unconstrained optimization problems are considered by applying linesearch-based truncated Newton methods. In this framework, a key point is the reduction of the number of inner iterations needed, at each outer iteration, to approximately solve the Newton equation. A novel adaptive truncation criterion is introduced in Caliciotti et al. [1] to this aim. Here, we report the details concerning numerical experiences over a commonly used test set, namely CUTEst (Gould et al., 2015 [2]). Moreover, comparisons are reported in terms of performance profiles (Dolan and Moré, 2002 [3]), adopting different parameter settings. Finally, our linesearch-based scheme is compared with a renowned trust region method, namely TRON (Lin and Moré, 1999 [4]).
Designing algorithms using CAD technologies
Alin IORDACHE
2008-01-01
A representative example of a modular eLearning-platform application, 'Logical diagrams', is intended to be a useful learning and testing tool for the beginner programmer, but also for the more experienced one. The problem this application tries to solve concerns young programmers who forget the fundamentals of the domain, algorithmics. Logical diagrams are a graphic representation of an algorithm, which uses different geometrical figures (parallelograms, rectangles, rhombuses, circles) with particular meanings, called blocks, connected to one another to reveal the flow of the algorithm. The role of this application is to help the user build the diagram for the algorithm and then automatically generate the C code and test it.
Adaptive block online learning target tracking based on super pixel segmentation
Cheng, Yue; Li, Jianzeng
2018-04-01
Video target tracking technology has made considerable progress, but many problems remain unsolved. This paper proposes a new target tracking algorithm based on image segmentation. First, we divide the selected region using the simple linear iterative clustering (SLIC) algorithm; we then partition the area with an improved density-based spatial clustering of applications with noise (DBSCAN) clustering algorithm. Each sub-block independently trains a classifier and is tracked; the algorithm then discards sub-blocks whose tracking fails and reintegrates the remaining sub-blocks into the tracking box to complete target tracking. Experimental results show that, compared with current mainstream algorithms, our algorithm works effectively under occlusion, rotation change, scale change and many other difficulties in target tracking.
WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations
Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi
We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.
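The Dijkstra-based baseline mentioned in this record (a multicast tree formed as the union of shortest paths from the source to each destination) can be sketched as follows; the graph encoding and function name are ours:

```python
import heapq

def dijkstra_multicast_tree(graph, source, destinations):
    """Build a multicast tree as the union of Dijkstra shortest paths
    from `source` to each destination (the baseline the WST/CWST
    algorithms are compared against).

    `graph` maps node -> {neighbor: cost}. Returns a set of directed
    edges forming the tree."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    tree = set()
    for dest in destinations:
        node = dest
        while node != source:      # walk the path back to the source
            tree.add((prev[node], node))
            node = prev[node]
    return tree
```

A Steiner-tree-based construction instead biases the path search toward edges already carrying the tree, which is why it shares link (and wavelength) resources better than this per-destination union, at the cost of longer individual paths under delay constraints.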
Creutz, M.
1987-11-01
A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updates promise to reduce this growth to V^(4/3).
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
A one-time truncate and encode multiresolution stochastic framework
Abgrall, R.; Congedo, P.M.; Geraci, G., E-mail: gianluca.geraci@inria.fr
2014-01-15
In this work a novel adaptive strategy for stochastic problems, inspired by the classical Harten framework, is presented. The proposed algorithm allows building, in a very general manner, stochastic numerical schemes starting from any type of deterministic scheme and handling a large class of problems, from unsteady to discontinuous solutions. Its formulation recovers the same results as the interpolation theory of the classical multiresolution approach, but extends them to uncertainty quantification problems. The present strategy yields numerical schemes with higher accuracy than other classical uncertainty quantification techniques, with a strong reduction in computational cost and memory requirements. Moreover, the flexibility of the proposed approach allows employing any kind of probability density function, even discontinuous and time varying, without introducing further complications in the algorithm. The advantages of the present strategy are demonstrated on several numerical problems where different forms of uncertainty distributions are taken into account, such as discontinuous and unsteady custom-defined probability density functions. In addition to algebraic and ordinary differential equations, numerical results for the challenging 1D Kraichnan–Orszag problem are reported in terms of accuracy and convergence. Finally, a two degree-of-freedom aeroelastic model for a subsonic case is presented. Though quite simple, the model allows recovering some key physical aspects of the fluid/structure interaction, thanks to the quasi-steady aerodynamic approximation employed. The injection of an uncertainty is chosen in order to obtain a complete parameterization of the mass matrix. All the numerical results are compared with a classical Monte Carlo solution and with a non-intrusive Polynomial Chaos method.
Lampón, Natalia; Tutor-Crespo, María J; Romero, Rafael; Tutor, José C
2011-07-01
Recently, the use of the truncated area under the curve from 0 to 2 h (AUC(0-2)) of mycophenolic acid (MPA) has been proposed for therapeutic monitoring in liver transplant recipients. The aim of our study was the evaluation of the clinical usefulness of truncated AUC(0-2) in kidney transplant patients. Plasma MPA was measured in samples taken before the morning dose of mycophenolate mofetil, and one-half and 2 h post-dose, completing 63 MPA concentration-time profiles from 40 adult kidney transplant recipients. The AUC from 0 to 12 h (AUC(0-12)) was calculated using the validated algorithm of Pawinski et al. The truncated AUC(0-2) was calculated using the linear trapezoidal rule, and extrapolated to 0-12 h (trapezoidal extrapolated AUC(0-12)) as previously described. Algorithm calculated and trapezoidal extrapolated AUC(0-12) values showed high correlation (r=0.995) and acceptable dispersion (ma68=0.71 μg·h/mL), median prediction error (6.6%) and median absolute prediction error (12.6%). The truncated AUC(0-2) had acceptable diagnostic efficiency (87%) in the classification of subtherapeutic, therapeutic or supratherapeutic values with respect to AUC(0-12). However, due to the high inter-individual variation of the drug absorption-rate, the dispersion between both pharmacokinetic variables (ma68=6.9 μg·h/mL) was unacceptable. The substantial dispersion between truncated AUC(0-2) and AUC(0-12) values may be a serious objection for the routine use of MPA AUC(0-2) in clinical practice.
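The two pharmacokinetic computations in this record, the linear trapezoidal rule for the truncated AUC(0-2) and its linear extrapolation to 0-12 h, can be sketched as follows; the scale factor shown in the usage is a hypothetical placeholder, not the coefficient from the study:

```python
def trapezoid_auc(times, concs):
    """Area under the concentration-time curve by the linear
    trapezoidal rule (as used for the truncated AUC(0-2) here)."""
    auc = 0.0
    for i in range(1, len(times)):
        auc += 0.5 * (concs[i] + concs[i - 1]) * (times[i] - times[i - 1])
    return auc

def extrapolate_auc_0_12(auc_0_2, scale):
    """Linear extrapolation of a truncated AUC(0-2) to AUC(0-12);
    `scale` is a population-derived factor (hypothetical value in the
    test below, not the coefficient used in the study)."""
    return scale * auc_0_2
```

The study's objection is visible in this structure: any fixed scale assumes the shape of the 2-12 h tail is stable across patients, which fails when the absorption rate varies strongly between individuals.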
Modified Truncated Multiplicity Analysis to Improve Verification of Uranium Fuel Cycle Materials
LaFleur, A.; Miller, K.; Swinhoe, M.; Belian, A.; Croft, S.
2015-01-01
Accurate verification of 235U enrichment and mass in UF6 storage cylinders and the UO2F2 holdup contained in the process equipment is needed to improve international safeguards and nuclear material accountancy at uranium enrichment plants. Small UF6 cylinders (1.5'' and 5'' diameter) are used to store the full range of enrichments from depleted to highly-enriched UF6. For independent verification of these materials, it is essential that the 235U mass and enrichment measurements do not rely on facility operator declarations. Furthermore, in order to be deployed by IAEA inspectors to detect undeclared activities (e.g., during complementary access), it is also imperative that the measurement technique is quick, portable, and sensitive to a broad range of 235U masses. Truncated multiplicity analysis is a technique that reduces the variance in the measured count rates by only considering moments 1, 2, and 3 of the multiplicity distribution. This is especially important for reducing the uncertainty in the measured doubles and triples rates in environments with a high cosmic ray background relative to the uranium signal strength. However, we believe that the existing truncated multiplicity analysis throws away too much useful data by truncating the distribution after the third moment. This paper describes a modified truncated multiplicity analysis method that determines the optimal moment to truncate the multiplicity distribution based on the measured data. Experimental measurements of small UF6 cylinders and UO2F2 working reference materials were performed at Los Alamos National Laboratory (LANL). The data were analyzed using traditional and modified truncated multiplicity analysis to determine the optimal moment to truncate the multiplicity distribution to minimize the uncertainty in the measured count rates. The results from this analysis directly support nuclear safeguards at enrichment plants and provide a more accurate verification method for UF6
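The idea of truncating the multiplicity distribution after a chosen moment can be sketched as follows. The distribution values below and the choice of truncating at the third moment (singles, doubles, triples) are illustrative assumptions, not LANL measurement data.

```python
def factorial_moment(p, k):
    """k-th factorial moment of a multiplicity distribution p[n] = P(n counts)."""
    m = 0.0
    for n, pn in enumerate(p):
        if n >= k:
            term = pn
            for j in range(k):
                term *= (n - j)  # n * (n-1) * ... * (n-k+1)
            m += term
    return m

def truncated_moments(p, k_max):
    """Keep only moments 1..k_max, discarding higher moments to reduce variance."""
    return [factorial_moment(p, k) for k in range(1, k_max + 1)]

# Hypothetical normalized multiplicity distribution for n = 0..4 counts.
p = [0.60, 0.25, 0.10, 0.04, 0.01]
singles, doubles, triples = truncated_moments(p, 3)
print(singles, doubles, triples)
```

Truncating at a higher or lower moment trades information against variance; the modified method described above chooses that cut-off from the measured data rather than fixing it at the third moment.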
Flow equation of quantum Einstein gravity in a higher-derivative truncation
Lauscher, O.; Reuter, M.
2002-01-01
Motivated by recent evidence indicating that quantum Einstein gravity (QEG) might be nonperturbatively renormalizable, the exact renormalization group equation of QEG is evaluated in a truncation of theory space which generalizes the Einstein-Hilbert truncation by the inclusion of a higher-derivative term (R 2 ). The beta functions describing the renormalization group flow of the cosmological constant, Newton's constant, and the R 2 coupling are computed explicitly. The fixed point properties of the 3-dimensional flow are investigated, and they are confronted with those of the 2-dimensional Einstein-Hilbert flow. The non-Gaussian fixed point predicted by the latter is found to generalize to a fixed point on the enlarged theory space. In order to test the reliability of the R 2 truncation near this fixed point we analyze the residual scheme dependence of various universal quantities; it turns out to be very weak. The two truncations are compared in detail, and their numerical predictions are found to agree with a surprisingly high precision. Because of the consistency of the results it appears increasingly unlikely that the non-Gaussian fixed point is an artifact of the truncation. If it is present in the exact theory QEG is probably nonperturbatively renormalizable and ''asymptotically safe.'' We discuss how the conformal factor problem of Euclidean gravity manifests itself in the exact renormalization group approach and show that, in the R 2 truncation, the investigation of the fixed point is not afflicted with this problem. Also the Gaussian fixed point of the Einstein-Hilbert truncation is analyzed; it turns out that it does not generalize to a corresponding fixed point on the enlarged theory space
Anna Bourmistrova
2011-02-01
The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
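The kinematic condition described above, placing the center of rotation at a desired point by choosing the steering angles, can be sketched for a 4WS bicycle-model geometry: each wheel's velocity must be perpendicular to the line joining it to the turning center. The axle distances and target center below are made-up numbers, and the actual controller also uses dynamic feedback, which this sketch omits.

```python
import math

def steering_angles(a, b, cx, cy):
    """Front/rear steering angles (rad) that place the kinematic center of
    rotation of a 4WS bicycle model at (cx, cy) in the vehicle frame.
    a, b: distances from the reference point to the front/rear axle;
    cy: lateral offset of the desired turning center."""
    delta_f = math.atan2(a - cx, cy)    # front wheel normal passes through center
    delta_r = math.atan2(-b - cx, cy)   # rear wheel normal passes through center
    return delta_f, delta_r

# Example: turning center abeam the reference point (cx = 0) at 10 m laterally.
df, dr = steering_angles(a=1.2, b=1.4, cx=0.0, cy=10.0)
print(df, dr)  # front steers toward the turn, rear counter-steers
```

With both angles set this way the perpendiculars from the two axles intersect exactly at the requested point, which is the geometric content of the kinematic steering condition.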
Uniaxial backfill block compaction
Koskinen, V.
2012-05-01
The main parts of the project were: to make a literature survey of previous uniaxial compaction experiments; to do uniaxial compaction tests at laboratory scale; and to do industrial-scale production tests. The objective of the project was to sort out the different factors affecting the quality assurance chain of uniaxial backfill block production and to solve a material-sticking-to-mould problem which appeared during manufacturing of blocks from a bentonite and crushed rock mixture. The effect of mineralogical and chemical composition on the long-term functionality of the backfill was excluded from the project. However, the smectite-rich clays used have been tested for mineralogical consistency. These tests were done by B and Tech OY according to their SOPs. The objective of the laboratory-scale tests was to find the right material and compaction parameters for the industrial-scale tests. Direct comparison between the laboratory-scale tests and industrial-scale tests is not possible because the mould geometry and compaction speed have a big influence on the compaction process. For this reason the selected material parameters were also informed by the previous compaction experiments. The industrial-scale tests were done in the summer of 2010 in southern Sweden. Blocks were made with uniaxial compaction. Forty tons of the bentonite and crushed rock mixture and almost 50 tons of Friedland clay were compacted into blocks. (orig.)
Impression block with orientator
Brilin, V I; Ulyanova, O S
2015-01-01
A review of a tool, namely the impression block, applied to check the shape and size of the top of fish (objects lost downhole) as well as to determine the appropriate tool for the fishing operation, was carried out. For multiple applications and for obtaining an impress depth of 3 cm or more, standard volumetric impression blocks with fixed rods are used. However, the registered impress of the fish is not oriented in space, and the rods are in the extended position during fishing. This leads to rod deformation and sinking due to accidental impacts of the impression block against borehole irregularities, and finally results in faulty detection of the top end of the fishing object in the hole. Impression blocks with copy rods and a fixed magnetic needle allow estimating the object configuration and fixing the position of the magnetic needle, which determines the position of the top end of the object in the hole. However, the magnetic needle fixation is realized in stages, and the rods are in the extended position during fishing operations, as in the standard design. The most efficient tool is the impression block with copy rods, which orients the examined object in the borehole during reading of the magnetic needle data from the azimuth plate and averaging of the readings. This significantly increases the accuracy of fishing tool direction. During fishing, the rods are located in the body and extended only when they reach the top of the fishing object
A Weighted Block Dictionary Learning Algorithm for Classification
Shi, Zhongrong
2016-01-01
Discriminative dictionary learning, playing a critical role in sparse representation based classification, has led to state-of-the-art classification results. Among the existing discriminative dictionary learning methods, two different approaches, shared dictionary and class-specific dictionary, which associate each dictionary atom to all classes or a single class, have been studied. The shared dictionary is a compact method but with lack of discriminative information; the class-specific dict...
Truncated Gauss-Newton Implementation for Multi-Parameter Full Waveform Inversion
Liu, Y.; Yang, J.; Dong, L.; Wang, Y.
2014-12-01
Full waveform inversion (FWI) is a numerical optimization method which aims at minimizing the difference between the synthetic and recorded seismic data to obtain high-resolution subsurface images. A practical implementation of FWI is the adjoint-state method (AD), in which the data residuals at receiver locations are simultaneously back-propagated to form the gradient. The scattering-integral method (SI) is an alternative approach based on the explicit building of the sensitivity kernel (Fréchet derivative matrix). Although it is more memory-consuming, SI is more efficient than AD when the number of sources is larger than the number of receivers. To improve the convergence of FWI, the information carried by the inverse Hessian operator is crucial. Accurately taking the effect of this operator into account in FWI can correct illumination deficits, preserve the amplitudes of the subsurface parameters, and remove artifacts generated by multiple reflections. In multi-parameter FWI, the off-diagonal blocks of the Hessian operator reflect the coupling between different parameter classes. Therefore, incorporating its inverse could help to mitigate the trade-off effects. In this study, we focus on the truncated Gauss-Newton implementation for multi-parameter FWI. The model update is computed through a matrix-free conjugate gradient solution of the Newton linear system. Both the gradient and the Hessian-vector product are calculated using the SI approach instead of the first- and second-order AD. However, the gradient expressed as a kernel-vector product is calculated through the accumulation of the decomposed vector-scalar products. Thus, it is not necessary to store the huge sensitivity matrix beforehand. We call this method the matrix decomposition approach (MD). The Hessian-vector product is replaced by two kernel-vector products, which are then calculated by the above MD. In this way, we do not need to solve two additional wave propagation problems as in the
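The matrix-free truncated Newton idea, a conjugate-gradient solve of the Newton system driven only by Hessian-vector products, can be sketched as follows. The 2x2 explicit matrix is a toy stand-in for the wave-equation-based Gauss-Newton Hessian; in the actual method, `hvp` would be evaluated through kernel-vector products rather than a stored matrix.

```python
def truncated_gauss_newton_step(hvp, grad, max_iter=5, tol=1e-6):
    """Matrix-free conjugate gradient on H dm = -g, truncated at max_iter.
    hvp(v) returns the Gauss-Newton Hessian-vector product H v."""
    n = len(grad)
    dm = [0.0] * n
    r = [-gi for gi in grad]          # residual of H dm = -g at dm = 0
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        hp = hvp(p)
        alpha = rs_old / sum(pi * hi for pi, hi in zip(p, hp))
        dm = [di + alpha * pi for di, pi in zip(dm, p)]
        r = [ri - alpha * hi for ri, hi in zip(r, hp)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol ** 2:         # early (truncated) termination
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return dm

# Toy stand-in: a small SPD "Hessian" instead of wave-equation operators.
H = [[4.0, 1.0], [1.0, 3.0]]
hvp = lambda v: [sum(H[i][j] * v[j] for j in range(2)) for i in range(2)]
g = [1.0, 2.0]
dm = truncated_gauss_newton_step(hvp, g)
print(dm)
```

Truncating the inner CG loop after a few iterations is what makes the method a *truncated* Gauss-Newton scheme: each CG iteration costs one Hessian-vector product, so the truncation level directly controls the cost per model update.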
Robust and Adaptive Block Tracking Method Based on Particle Filter
Bin Sun
2015-10-01
In the field of video analysis and processing, object tracking is attracting more and more attention, especially in traffic management, digital surveillance, and so on. However, problems such as objects' abrupt motion, occlusion, and complex target structures bring difficulties to academic study and engineering application. In this paper, a fragments-based tracking method using a block relationship coefficient is proposed. In this method, we use a particle filter algorithm, and the object region is initially divided into blocks. The contribution of this method is that object features are not extracted from a single block alone; the relationships between the current block and its neighbor blocks are extracted to describe the variation of the block. Each block is weighted according to the block relationship coefficient when the block votes for the best-matched region in the next frame. This method makes full use of the relationships between blocks. The experimental results demonstrate that our method provides good performance under conditions of occlusion and abrupt posture variation.
Cunningham, C.; Simpkin, S.D.
1975-01-01
A prismatic moderator block is described which has fuel-containing channels and coolant channels disposed parallel to each other and to edge faces of the block. The coolant channels are arranged in rows on an equilateral triangular lattice pattern and the fuel-containing channels are disposed in a regular lattice pattern with one fuel-containing channel between and equidistant from each of the coolant channels in each group of three mutually adjacent coolant channels. The edge faces of the block are parallel to the rows of coolant channels and the channels nearest to each edge face are disposed in two rows parallel thereto, with one of the rows containing only coolant channels and the other row containing only fuel-containing channels. (Official Gazette)
Bussink, Barbara E; Holst, Anders Gaarsdal; Jespersen, Lasse
2013-01-01
Aims: To determine the prevalence, predictors of newly acquired, and the prognostic value of right bundle branch block (RBBB) and incomplete RBBB (IRBBB) on a resting 12-lead electrocardiogram in men and women from the general population. Methods and results: We followed 18 441 participants included in the Copenhagen City Heart Study, examined in 1976-2003, free from previous myocardial infarction (MI), chronic heart failure, and left bundle branch block, through registry linkage until 2009 for all-cause mortality and cardiovascular outcomes. The prevalence of RBBB/IRBBB was higher in men (1.4%/4.7% in men vs. 0.5%/2.3% in women, P ...) ... block was associated with significantly...
["Habitual" left branch block alternating with 2 "disguised" branch block].
Lévy, S; Jullien, G; Mathieu, P; Mostefa, S; Gérard, R
1976-10-01
Two cases of alternating left bundle branch block and "masquerading block" (with left bundle branch block morphology in the standard leads and right bundle branch block morphology in the precordial leads) were studied by serial tracings and His bundle electrocardiography. In case 1, the "masquerading" block was associated with a first-degree AV block related to a prolongation of the HV interval. This case is, to our knowledge, the first case of alternating bundle branch block in which His bundle activity was recorded in man. In case 2, the patient had atrial fibrillation, and His bundle recordings were performed while different degrees of left bundle branch block were present. The mechanism of the alternation and the concept of "masquerading" block are discussed. It is suggested that this type of block represents a right bundle branch block associated with severe lesions of the "left system".
Modeling the Effect of APC Truncation on Destruction Complex Function in Colorectal Cancer Cells
Barua, Dipak; Hlavacek, William S.
2013-01-01
In colorectal cancer cells, APC, a tumor suppressor protein, is commonly expressed in truncated form. Truncation of APC is believed to disrupt degradation of β-catenin, which is regulated by a multiprotein complex called the destruction complex. The destruction complex comprises APC, Axin, β-catenin, serine/threonine kinases, and other proteins. The kinases, which are recruited by Axin, mediate phosphorylation of β-catenin, which initiates its ubiquitination and proteasomal degradation. The mechanism of regulation of β-catenin degradation by the destruction complex and the role of truncation of APC in colorectal cancer are not entirely understood. Through formulation and analysis of a rule-based computational model, we investigated the regulation of β-catenin phosphorylation and degradation by APC and the effect of APC truncation on the function of the destruction complex. The model integrates available mechanistic knowledge about site-specific interactions and phosphorylation of destruction complex components and is consistent with an array of published data. We find that the phosphorylated truncated form of APC can outcompete Axin for binding to β-catenin, provided that Axin is limiting, and thereby sequester β-catenin away from Axin and the Axin-recruited kinases. Full-length APC also competes with Axin for binding to β-catenin; however, full-length APC is able, through its SAMP repeats, which bind Axin and which are missing in truncated oncogenic forms of APC, to bring β-catenin into indirect association with Axin and the Axin-recruited kinases. Because our model indicates that the positive effects of truncated APC on β-catenin levels depend on phosphorylation of APC at the first 20-amino acid repeat, and because phosphorylation of this site is mediated by a specific kinase, we suggest that this kinase is a potential target for therapeutic intervention in colorectal cancer. Its specific inhibition is predicted to limit binding of β-catenin to truncated
Closed-form kinetic parameter estimation solution to the truncated data problem
Zeng, Gengsheng L; Kadrmas, Dan J; Gullberg, Grant T
2010-01-01
In a dedicated cardiac single photon emission computed tomography (SPECT) system, the detectors are focused on the heart and the background is truncated in the projections. Reconstruction using truncated data results in biased images, leading to inaccurate kinetic parameter estimates. This paper has developed a closed-form kinetic parameter estimation solution to the dynamic emission imaging problem. This solution is insensitive to the bias in the reconstructed images that is caused by the projection data truncation. This paper introduces two new ideas: (1) it includes background bias as an additional parameter to estimate, and (2) it presents a closed-form solution for compartment models. The method is based on the following two assumptions: (i) the amount of the bias is directly proportional to the truncated activities in the projection data, and (ii) the background concentration is directly proportional to the concentration in the myocardium. In other words, the method assumes that the image slice contains only the heart and the background, without other organs, that the heart is not truncated, and that the background radioactivity is directly proportional to the radioactivity in the blood pool. As long as the background activity can be modeled, the proposed method is applicable regardless of the number of compartments in the model. For simplicity, the proposed method is presented and verified using a single compartment model with computer simulations using both noiseless and noisy projections.
Transiently truncated and differentially regulated expression of midkine during mouse embryogenesis
Chen Qin; Yuan Yuanyang; Lin Shuibin; Chang Youde; Zhuo Xinming; Wei Wei; Tao Ping; Ruan Lingjuan; Li Qifu; Li Zhixing
2005-01-01
Midkine (MK) is a retinoic acid-responsive cytokine, mostly expressed in embryonic tissues. Aberrant expression of MK has been found in numerous cancers. In human, a truncated MK is expressed specifically in tumor/cancer tissues. Here we report the discovery of a novel truncated form of MK transiently expressed during normal mouse embryonic development. In addition, MK is concentrated at the interface between developing epithelium and mesenchyme as well as in highly proliferating cells. Its expression, which is closely coordinated with angiogenesis and vasculogenesis, is spatiotemporally regulated, with peaks in the period of extensive organogenesis and in undifferentiated cells, tailing off in maturing cells, implying a role in nascent blood vessel (endothelial) signaling of tissue differentiation and stem cell renewal/differentiation. Cloning and sequencing analysis revealed that the embryonic truncated MK, in which the conserved domain is deleted in-frame, presumably producing a novel secreted small peptide, is different from the truncated form in human cancer tissues, whose deletion results in a frame-shift mutation. Our data suggest that MK may play a role in epithelium-mesenchyme interactions, blood vessel signaling, and the decision between proliferation and differentiation. Detection of the transiently expressed truncated MK reveals a novel function in development and sheds light on MK's role in carcinogenesis
Duflot, Nicolas [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: nicolas.duflot@areva.com; Berenguer, Christophe [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: christophe.berenguer@utt.fr; Dieulle, Laurence [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: laurence.dieulle@utt.fr; Vasseur, Dominique [EPSNA Group (Nuclear PSA and Application), EDF Research and Development, 1, avenue du Gal de Gaulle, 92141 Clamart cedex (France)], E-mail: dominique.vasseur@edf.fr
2009-11-15
A truncation process aims to determine, among the set of minimal cut-sets (MCS) produced by a probabilistic safety assessment (PSA) model, which of them are significant. Several truncation processes have been proposed for the evaluation of the probability of core damage ensuring a fixed accuracy level. However, the evaluation of new risk indicators such as importance measures requires re-examining the truncation process in order to ensure that the produced estimates will be accurate enough. In this paper a new truncation process is developed that permits estimating, from a single set of MCS, the importance measure of any basic event with the desired accuracy level. The main contribution of this new method is an MCS-wise truncation criterion involving two thresholds: an absolute threshold in addition to a new relative threshold concerning the potential probability of the MCS of interest. The method has been tested on a complete level 1 PSA model of a 900 MWe NPP developed by Electricite de France (EDF), and the results presented in this paper indicate that, to reach the same accuracy level, the proposed method produces a set of MCS whose size is significantly reduced.
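A two-threshold MCS truncation can be sketched as below. This is one plausible reading of an absolute-plus-relative criterion under the rare-event approximation, not the paper's exact formula, and the basic-event probabilities are invented for illustration.

```python
from math import prod

def truncate_mcs(cut_sets, probs, abs_thresh, rel_thresh):
    """Keep a minimal cut-set if its probability exceeds an absolute
    threshold, or exceeds a relative fraction of the total (rare-event
    approximation: MCS probability = product of basic-event probabilities).
    Illustrative two-threshold criterion, not the published one."""
    p = {cs: prod(probs[e] for e in cs) for cs in cut_sets}
    total = sum(p.values())
    return [cs for cs in cut_sets
            if p[cs] >= abs_thresh or p[cs] >= rel_thresh * total]

# Invented basic events and cut-sets.
probs = {"A": 1e-3, "B": 1e-2, "C": 1e-4}
cut_sets = [("A", "B"), ("A", "C"), ("B", "C")]
kept = truncate_mcs(cut_sets, probs, abs_thresh=5e-6, rel_thresh=0.05)
print(kept)  # -> [('A', 'B'), ('B', 'C')]
```

The relative test rescues cut-sets that fall below the absolute cut-off but still contribute a non-negligible fraction of the total, which is the kind of MCS an importance-measure estimate cannot afford to discard.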
E-Block: A Tangible Programming Tool with Graphical Blocks
Danli Wang; Yang Zhang; Shengyong Chen
2013-01-01
This paper presents a tangible programming tool, E-Block, for children aged 5 to 9 to gain a preliminary understanding of programming by building blocks. With embedded artificial intelligence, the tool defines the programming blocks with sensors as the input and enables children to write programs to complete tasks in the computer. The symbol on each programming block's surface is used to help children understand the function of that block. The sequence information is transfer...
Automated synthesis and verification of configurable DRAM blocks for ASIC's
Pakkurti, M.; Eldin, A. G.; Kwatra, S. C.; Jamali, M.
1993-01-01
A highly flexible embedded DRAM compiler is developed which can generate DRAM blocks in the range of 256 bits to 256 Kbits. The compiler is capable of automatically verifying the functionality of the generated DRAM modules. The fully automated verification capability is a key feature that ensures the reliability of the generated blocks. The compiler's architecture, algorithms, verification techniques and the implementation methodology are presented.
A novel block cryptosystem based on iterating a chaotic map
Xiang Tao; Liao Xiaofeng; Tang Guoping; Chen Yong; Wong, Kwok-wo
2006-01-01
A block cryptographic scheme based on iterating a chaotic map is proposed. With random binary sequences generated from the real-valued chaotic map, the plaintext block is permuted by a key-dependent shift approach and then encrypted by the classical chaotic masking technique. Simulation results show that performance and security of the proposed cryptographic scheme are better than those of existing algorithms. Advantages and security of our scheme are also discussed in detail
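In the spirit of the scheme above, here is a toy sketch: a binary keystream generated from the real-valued logistic map drives a key-dependent cyclic shift (the permutation step) followed by XOR masking. The map parameters `x0` and `r` play the role of the key; the real algorithm's shift approach and masking details differ, and this toy is not cryptographically secure.

```python
def logistic_bits(x0, r, n):
    """Binary keystream from iterating the real-valued logistic map x -> r*x*(1-x)."""
    x, bits = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

def _mask(bits, n):
    # Keystream bytes 1..n (byte 0 drives the permutation shift).
    return bytes(int("".join(map(str, bits[8 + 8 * i: 16 + 8 * i])), 2)
                 for i in range(n))

def encrypt_block(block, x0=0.3711, r=3.99):
    """Key-dependent cyclic shift (permutation), then XOR masking."""
    bits = logistic_bits(x0, r, 8 * len(block) + 8)
    shift = sum(bits[:8]) % len(block)
    permuted = block[shift:] + block[:shift]
    return bytes(p ^ m for p, m in zip(permuted, _mask(bits, len(block))))

def decrypt_block(cipher, x0=0.3711, r=3.99):
    """Invert the masking, then undo the key-dependent shift."""
    bits = logistic_bits(x0, r, 8 * len(cipher) + 8)
    shift = sum(bits[:8]) % len(cipher)
    permuted = bytes(c ^ m for c, m in zip(cipher, _mask(bits, len(cipher))))
    return permuted[-shift:] + permuted[:-shift] if shift else permuted

ct = encrypt_block(b"CHAOSMAP")
assert decrypt_block(ct) == b"CHAOSMAP"
```

Because both sides regenerate the same keystream from the shared key, the shift amount never needs to be transmitted; sensitivity of the logistic map to `x0` and `r` is what gives such schemes their key sensitivity.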
Closed Loop Guidance Trade Study for Space Launch System Block-1B Vehicle
Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt
2018-01-01
NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. The design of the next evolution of SLS, Block-1B, is well underway. The Block-1B vehicle is more capable overall than Block-1; however, the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS) presents a challenge to the Powered Explicit Guidance (PEG) algorithm used by Block-1. To handle the long burn durations (on the order of 1000 seconds) of EUS missions, two algorithms were examined. An alternative algorithm, OPGUID, was introduced, while modifications were made to PEG. A trade study was conducted to select the guidance algorithm for future SLS vehicles. The chosen algorithm needs to support a wide variety of mission operations: ascent burns to LEO, apogee raise burns, trans-lunar injection burns, hyperbolic Earth departure burns, and contingency disposal burns using the Reaction Control System (RCS). Additionally, the algorithm must be able to respond to a single engine failure scenario. Each algorithm was scored based on pre-selected criteria, including insertion accuracy, algorithmic complexity and robustness, extensibility for potential future missions, and flight heritage. Monte Carlo analysis was used to select the final algorithm. This paper covers the design criteria, approach, and results of this trade study, showing impacts and considerations when adapting launch vehicle guidance algorithms to a broader breadth of in-space operations.
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.
Linoleum Block Printing Revisited.
Chetelat, Frank J.
1980-01-01
The author discusses practical considerations of teaching linoleum block printing in the elementary grades (tool use, materials, motivation) and outlines a sequence of design concepts in this area for the primary, intermediate and junior high grades. A short list of books and audiovisual aids is appended. (SJL)
Science Teacher, 2005
2005-01-01
Curcumin, the pungent yellow spice found in both turmeric and curry powders, blocks a key biological pathway needed for development of melanoma and other cancers, according to a study that appears in the journal Cancer. Researchers from The University of Texas M. D. Anderson Cancer Center demonstrate how curcumin stops laboratory strains of…
Contaminated soil concrete blocks
de Korte, A.C.J.; Brouwers, Jos; Limbachiya, Mukesh C.; Kew, Hsein Y.
2009-01-01
According to Dutch law, contaminated soil needs to be remediated or immobilised. The main focus of this article is the design of concrete blocks, containing contaminated soil, that are suitable for large-scale production, financially feasible, and meet all technical and environmental requirements. In
The evidence for synthesis of truncated triangular silver nanoplates in the presence of CTAB
He Xin; Zhao Xiujian; Chen Yunxia; Feng Jinyang
2008-01-01
Truncated triangular silver nanoplates were prepared by a solution-phase approach, which involved the seed-mediated growth of silver nanoparticles in the presence of cetyltrimethylammonium bromide (CTAB) at 40 deg. C. The result of X-ray diffraction indicates that the as-prepared nanoparticles are made of pure face centered cubic silver. Transmission electron microscopy and atomic force microscopy studies show that the truncated triangular silver nanoplates, with edge lengths of 50 ± 5 nm and thicknesses of 27 ± 3 nm, are oriented differently on substrates of a copper grid and a fresh mica flake. The corners of these nanoplates are round. The selected area electron diffraction analysis reveals that the silver nanoplates are single crystals with an atomically flat surface. We determine the holistic morphology of truncated triangular silver nanoplates through the above measurements with the aid of computer-aided 3D perspective images
Autocorrelation as a source of truncated Lévy flights in foreign exchange rates
Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Da Silva, Sergio
2003-05-01
We suggest that the ultraslow speed of convergence associated with truncated Lévy flights (Phys. Rev. Lett. 73 (1994) 2946) may well be explained by autocorrelations in data. We show how a particular type of autocorrelation generates power laws consistent with a truncated Lévy flight. Stock exchanges have been suggested to be modeled by a truncated Lévy flight (Nature 376 (1995) 46; Physica A 297 (2001) 509; Econom. Bull. 7 (2002) 1). Here foreign exchange rate data are taken instead. Scaling power laws in the “probability of return to the origin” are shown to emerge for most currencies. A novel approach to measure how distant a process is from a Gaussian regime is presented.
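The "probability of return to the origin" scaling can be probed numerically: estimate the density of Δt-aggregated increments near zero at two time scales, and read off the index from the power law P_Δt(0) ~ Δt^(-1/α). For i.i.d. Gaussian increments the estimate should come out near α = 2 (the Gaussian regime), while heavy-tailed data would give a smaller value. The series length, bin half-width `eps`, and the two aggregation scales below are arbitrary choices for illustration.

```python
import math
import random

def prob_return_origin(series, dt, eps=0.05):
    """Estimate P_dt(0): the density of dt-step aggregated increments near zero."""
    agg = [sum(series[i:i + dt]) for i in range(0, len(series) - dt + 1, dt)]
    near = sum(1 for x in agg if abs(x) < eps)
    return near / (len(agg) * 2 * eps)

random.seed(7)
steps = [random.gauss(0.0, 0.1) for _ in range(200_000)]

# For i.i.d. Gaussian increments P_dt(0) ~ dt^(-1/2), i.e. index alpha = 2.
p1 = prob_return_origin(steps, 1)
p4 = prob_return_origin(steps, 4)
alpha = math.log(4) / math.log(p1 / p4)
print(round(alpha, 2))
```

Repeating the same estimate on autocorrelated increments, as the paper argues, can mimic the scaling of a truncated Lévy flight even when the marginal distribution is not Lévy-stable.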
Local and accumulated truncation errors in a class of perturbative numerical methods
Adam, G.; Adam, S.; Corciovei, A.
1980-01-01
The approach to the solution of the radial Schroedinger equation using piecewise perturbation theory with a step-function reference potential leads to a class of powerful numerical methods, conveniently abridged as SF-PNM(K), where K denotes the order at which the perturbation series is truncated. In the present paper rigorous results are given for the local truncation errors, and bounds are derived for the accumulated truncation errors associated with SF-PNM(K), K = 0, 1, 2. They allow us to establish the smoothness conditions which have to be fulfilled by the potential in order to ensure safe use of SF-PNM(K), and to understand the experimentally observed behaviour of the numerical results with the step size h. (author)
The Apparent Lack of Lorentz Invariance in Zero-Point Fields with Truncated Spectra
Daywitt W. C.
2009-01-01
The integrals that describe the expectation values of the zero-point quantum-field-theoretic vacuum state are semi-infinite, as are the integrals for the stochastic electrodynamic vacuum. The unbounded upper limit to these integrals leads in turn to infinite energy densities and renormalization masses. A number of models have been put forward to truncate the integrals so that these densities and masses are finite. Unfortunately the truncation apparently destroys the Lorentz invariance of the integrals. This note argues that the integrals are naturally truncated by the graininess of the negative-energy Planck vacuum state from which the zero-point vacuum arises, and are thus automatically Lorentz invariant.
Lin, Chao; Shen, Xueju; Wang, Zhisong; Zhao, Cheng
2014-06-20
We demonstrate a novel optical asymmetric cryptosystem based on the principle of elliptical polarized light linear truncation and a numerical reconstruction technique. The device of an array of linear polarizers is introduced to achieve linear truncation on the spatially resolved elliptical polarization distribution during image encryption. This encoding process can be characterized as confusion-based optical cryptography that involves no Fourier lens and diffusion operation. Based on the Jones matrix formalism, the intensity transmittance for this truncation is deduced to perform elliptical polarized light reconstruction based on two intensity measurements. Use of a quick response code makes the proposed cryptosystem practical, with versatile key sensitivity and fault tolerance. Both simulation and preliminary experimental results that support the theoretical analysis are presented. An analysis of the resistance of the proposed method to a known public-key attack is also provided.
On the viability of the truncated Israel–Stewart theory in cosmology
Shogin, Dmitry; Amundsen, Per Amund; Hervik, Sigbjørn
2015-01-01
We apply the causal Israel–Stewart theory of irreversible thermodynamics to model the matter content of the Universe as a dissipative fluid with bulk and shear viscosity. Along with the full transport equations we consider their widely used truncated version. By implementing a dynamical systems approach to Bianchi type IV and V cosmological models with and without cosmological constant, we determine the future asymptotic states of such Universes and show that the truncated Israel–Stewart theory leads to solutions essentially different from the full theory. The solutions of the truncated theory may also manifest unphysical properties. Finally, we find that in the full theory shear viscosity can give rise to substantial dissipative fluxes, driving the fluid extremely far from equilibrium, where the linear Israel–Stewart theory ceases to be valid. (paper)
Lee, Eve J.; Chiang, Eugene
2017-01-01
Sub-Neptunes around FGKM dwarfs are evenly distributed in log orbital period down to ∼10 days, but dwindle in number at shorter periods. Both the break at ∼10 days and the slope of the occurrence rate down to ∼1 day can be attributed to the truncation of protoplanetary disks by their host star magnetospheres at corotation. We demonstrate this by deriving planet occurrence rate profiles from empirical distributions of pre-main-sequence stellar rotation periods. Observed profiles are better reproduced when planets are distributed randomly in disks—as might be expected if planets formed in situ—rather than piled up near disk edges, as would be the case if they migrated in by disk torques. Planets can be brought from disk edges to ultra-short (<1 day) periods by asynchronous equilibrium tides raised on their stars. Tidal migration can account for how ultra-short-period planets are more widely spaced than their longer-period counterparts. Our picture provides a starting point for understanding why the sub-Neptune population drops at ∼10 days regardless of whether the host star is of type FGK or early M. We predict planet occurrence rates around A stars to also break at short periods, but at ∼1 day instead of ∼10 days because A stars rotate faster than stars with lower masses (this prediction presumes that the planetesimal building blocks of planets can drift inside the dust sublimation radius).
Zhang Shunchuan
2010-09-01
Duck virus enteritis (DVE) is an acute, contagious herpesvirus infection of ducks, geese, and swans that has produced significant economic losses in domestic and wild waterfowl. With the aim of decreasing economic losses in the commercial duck industry, study of the uncharacterized glycoprotein K (gK) of DEV may offer a new route to preventing and curing this disease. This is the first report of the production and purification of a rabbit anti-tgK polyclonal antibody. Western blot and ELISA assays show that the truncated glycoprotein K (tgK) has good antigenicity and that the antibody possesses high specificity and affinity. Given its neutralization titer, the rabbit anti-tgK polyclonal antibody also has potential for the production of subunit vaccines and for neutralizing DEV and combating DEV infection. Indirect immunofluorescence microscopy using the purified rabbit anti-tgK polyclonal antibody as the diagnostic antibody was sensitive enough to detect small quantities of antigen in tissues or cells. This approach also provides an effective experimental technique for epidemiological investigation and retrospective diagnosis from preserved paraffin blocks.
The Dynamics of Truncated Black Hole Accretion Disks. I. Viscous Hydrodynamic Case
Hogg, J. Drew; Reynolds, Christopher S. [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States)
2017-07-10
Truncated accretion disks are commonly invoked to explain the spectro-temporal variability in accreting black holes in both small systems, i.e., state transitions in galactic black hole binaries (GBHBs), and large systems, i.e., low-luminosity active galactic nuclei (LLAGNs). In the canonical truncated disk model of moderately low accretion rate systems, gas in the inner region of the accretion disk occupies a hot, radiatively inefficient phase, which leads to a geometrically thick disk, while the gas in the outer region occupies a cooler, radiatively efficient phase that resides in the standard geometrically thin disk. Observationally, there is strong empirical evidence to support this phenomenological model, but a detailed understanding of the dynamics of truncated disks is lacking. We present a well-resolved viscous, hydrodynamic simulation that uses an ad hoc cooling prescription to drive a thermal instability and, hence, produce the first sustained truncated accretion disk. With this simulation, we perform a study of the dynamics, angular momentum transport, and energetics of a truncated disk. We find that the time variability introduced by the quasi-periodic transition of gas from efficient cooling to inefficient cooling impacts the evolution of the simulated disk. A consequence of the thermal instability is that an outflow is launched from the hot/cold gas interface, which drives large, sub-Keplerian convective cells into the disk atmosphere. The convective cells introduce a viscous θ − ϕ stress that is less than the generic r − ϕ viscous stress component, but greatly influences the evolution of the disk. In the truncated disk, we find that the bulk of the accreted gas is in the hot phase.
Fluorometric graphene oxide-based detection of Salmonella enteritis using a truncated DNA aptamer.
Chinnappan, Raja; AlAmer, Saleh; Eissa, Shimaa; Rahamn, Anas Abdel; Abu Salah, Khalid M; Zourob, Mohammed
2017-12-18
The work describes a fluorescence-based study for mapping the highest-affinity truncated aptamer from the full-length sequence and its integration into a graphene oxide platform for the detection of Salmonella Enteritidis. To identify the best truncated sequence, molecular beacons and a displacement assay design are applied. In the fluorescence displacement assay, the truncated aptamer was hybridized with fluorescein- and quencher-labeled complementary sequences to form a fluorescence/quencher pair. In the presence of S. Enteritidis, the aptamer dissociates from the complementary labeled oligonucleotides and thus the fluorescein/quencher pair becomes physically separated. This leads to an increase in fluorescence intensity. One of the truncated aptamers identified has a 2-fold lower dissociation constant (3.2 nM) compared to its full-length aptamer (6.3 nM). The truncated aptamer selected in this process was used to develop a fluorometric graphene oxide (GO) based assay. If the fluorescein-labeled aptamer is adsorbed on GO via π-stacking interaction, fluorescence is quenched. However, in the presence of the target (S. Enteritidis), the labeled aptamer is released from the surface to form a stable complex with the bacteria and fluorescence is restored, depending on the quantity of bacteria present. The resulting assay has an unsurpassed detection limit of 25 cfu·mL⁻¹ in the best case. The cross-reactivity to Salmonella Typhimurium, Staphylococcus aureus and Escherichia coli is negligible. The assay was applied to analyze spiked milk samples and gave good recovery. Thus, we believe that the truncated aptamer/graphene oxide platform is a potential tool for the detection of S. Enteritidis. Graphical abstract: A fluorescently labelled aptamer against Salmonella Enteritidis was adsorbed on the surface of graphene oxide by π-stacking interaction. This results in quenching of the fluorescence of the label. Addition of Salmonella Enteritidis restores fluorescence, and this
Pressure-sensitive paint on a truncated cone in hypersonic flow at incidences
Yang, L.; Erdem, E.; Zare-Behtash, H.; Kontis, K.; Saravanan, S.
2012-01-01
Highlights: ► Global pressure map over the truncated cone is obtained at various incidence angles in Mach 5 flow. ► Successful application of AA-PSP in hypersonic flow expands the operational envelope of this technique. ► AA-PSP reveals a complex three-dimensional pattern which is difficult for transducers to obtain. ► Quantitative data provide strong correlation with colour Schlieren and oil flow results. ► High spatial resolution pressure mappings identify small-scale vortices and flow separation. - Abstract: The flow over a truncated cone is a classical and fundamental problem for aerodynamic research due to its three-dimensional and complicated characteristics. The flow is made more complex when examining high angles of incidence. Recently these types of flows have drawn more attention for the purposes of drag reduction in supersonic/hypersonic flows. In the present study the flow over a truncated cone at various incidences was experimentally investigated in a Mach 5 flow with a unit Reynolds number of 13.5 × 10⁶ m⁻¹. The cone semi-apex angle is 15° and the truncation ratio (truncated length/cone length) is 0.5. The incidence of the model varied from −12° to 12° in 3° intervals relative to the freestream direction. The external flow around the truncated cone was visualised by colour Schlieren photography, while the surface flow pattern was revealed using the oil flow method. The surface pressure distribution was measured using the anodized aluminium pressure-sensitive paint (AA-PSP) technique. Both top and side views of the pressure distribution on the model surface were acquired at various incidences. AA-PSP showed high pressure sensitivity and captured the complicated flow structures, which correlated well with the colour Schlieren and oil flow visualisation results.
Diagonal Limit for Conformal Blocks in d Dimensions
Hogervorst, Matthijs; Rychkov, Slava
2013-01-01
Conformal blocks in any number of dimensions depend on two variables z, zbar. Here we study their restrictions to the special "diagonal" kinematics z = zbar, previously found useful as a starting point for the conformal bootstrap analysis. We show that conformal blocks on the diagonal satisfy ordinary differential equations, third-order for spin zero and fourth-order for the general case. These ODEs determine the blocks uniquely and lead to an efficient numerical evaluation algorithm. For equal external operator dimensions, we find closed-form solutions in terms of finite sums of 3F2 functions.
The Truncated Lognormal Distribution as a Luminosity Function for SWIFT-BAT Gamma-Ray Bursts
Lorenzo Zaninetti
2016-11-01
The determination of the luminosity function (LF) of gamma-ray bursts (GRBs) depends on the adopted cosmology, each one characterized by its corresponding luminosity distance. Here, we analyze three cosmologies: the standard cosmology, the plasma cosmology and the pseudo-Euclidean universe. The LF of the GRBs is modeled first by the lognormal distribution and the four broken power law and, second, by a truncated lognormal distribution. The truncated lognormal distribution acceptably fits the range in luminosity of GRBs as a function of the redshift.
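A truncated lognormal density of the kind used above can be sketched as follows; the shape parameters and truncation bounds here are illustrative, not the paper's fitted values.

```python
import numpy as np
from scipy import stats

# Toy sketch (not the paper's fit): a lognormal luminosity function
# truncated to [L_min, L_max] and renormalized on that range.
sigma, scale = 1.0, 1.0          # illustrative shape parameters
L_min, L_max = 0.1, 10.0         # illustrative truncation bounds

base = stats.lognorm(s=sigma, scale=scale)
norm = base.cdf(L_max) - base.cdf(L_min)

def truncated_lognorm_pdf(L):
    """PDF of the lognormal restricted to [L_min, L_max]."""
    L = np.asarray(L, dtype=float)
    inside = (L >= L_min) & (L <= L_max)
    return np.where(inside, base.pdf(L) / norm, 0.0)

# The truncated density integrates to one over the allowed range.
grid = np.linspace(L_min, L_max, 20_001)
vals = truncated_lognorm_pdf(grid)
dx = grid[1] - grid[0]
total = float(np.sum((vals[1:] + vals[:-1]) / 2) * dx)   # trapezoid rule
print(round(total, 3))
```

The only change relative to the plain lognormal is the division by the normalization `norm`, which is what makes a maximum-likelihood fit on a truncated sample differ from the untruncated case.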
Causal analysis of ordinal treatments and binary outcomes under truncation by death.
Wang, Linbo; Richardson, Thomas S; Zhou, Xiao-Hua
2017-06-01
It is common that in multi-arm randomized trials, the outcome of interest is "truncated by death," meaning that it is only observed or well-defined conditioning on an intermediate outcome. In this case, in addition to pairwise contrasts, the joint inference for all treatment arms is also of interest. Under a monotonicity assumption we present methods for both pairwise and joint causal analyses of ordinal treatments and binary outcomes in the presence of truncation by death. We illustrate via examples the appropriateness of our assumptions in different scientific contexts.
Determination of αS from scaling violations of truncated moments of structure functions
Forte, Stefano; Latorre, J.I.; Magnea, Lorenzo; Piccione, Andrea
2002-01-01
We determine the strong coupling α_S(M_Z) from scaling violations of truncated moments of the nonsinglet deep inelastic structure function F_2. Truncated moments are determined from BCDMS and NMC data using a neural network parametrization which retains the full experimental information on errors and correlations. Our method minimizes all sources of theoretical uncertainty and bias which characterize extractions of α_S from scaling violations. We obtain α_S(M_Z) = 0.124 +0.004/−0.007 (exp.) +0.003/−0.004 (th.)
Application of the AMPLE cluster-and-truncate approach to NMR structures for molecular replacement
Bibby, Jaclyn [University of Liverpool, Liverpool L69 7ZB (United Kingdom); Keegan, Ronan M. [Research Complex at Harwell, STFC Rutherford Appleton Laboratory, Didcot OX11 0FA (United Kingdom); Mayans, Olga [University of Liverpool, Liverpool L69 7ZB (United Kingdom); Winn, Martyn D. [Science and Technology Facilities Council Daresbury Laboratory, Warrington WA4 4AD (United Kingdom); Rigden, Daniel J., E-mail: drigden@liv.ac.uk [University of Liverpool, Liverpool L69 7ZB (United Kingdom)
2013-11-01
Processing of NMR structures for molecular replacement by AMPLE works well. AMPLE is a program developed for clustering and truncating ab initio protein structure predictions into search models for molecular replacement. Here, it is shown that its core cluster-and-truncate methods also work well for processing NMR ensembles into search models. Rosetta remodelling helps to extend success to NMR structures bearing low sequence identity or high structural divergence from the target protein. Potential future routes to improved performance are considered and practical, general guidelines on using AMPLE are provided.
Fusion events lead to truncation of FOS in epithelioid hemangioma of bone
van IJzendoorn, David G P; de Jong, Danielle; Romagosa, Cleofe
2015-01-01
in exon 4 of the FOS gene and the fusion event led to the introduction of a stop codon. In all instances, the truncation of the FOS gene would result in the loss of the transactivation domain (TAD). Using FISH probes we found a break in the FOS gene in two additional cases, in none of these cases...... differential diagnosis of vascular tumors of bone. Our data suggest that the translocation causes truncation of the FOS protein, with loss of the TAD, which is thereby a novel mechanism involved in tumorigenesis....
Solving Schwinger-Dyson equations by truncation in zero-dimensional scalar quantum field theory
Okopinska, A.
1991-01-01
Three sets of Schwinger-Dyson equations, for all Green's functions, for connected Green's functions, and for proper vertices, are considered in scalar quantum field theory. A truncation scheme applied to the three sets gives three different approximation series for Green's functions. For the theory in zero-dimensional space-time the results for the respective two-point Green's functions are compared with the exact value calculated numerically. The best convergence of the truncation scheme is obtained for the case of proper vertices.
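Zero-dimensional scalar theory is convenient precisely because the "path integral" reduces to an ordinary integral, so any truncation can be checked against an exact numerical value. The sketch below compares the exact two-point function with a plain first-order perturbative truncation for an illustrative quartic action (this is not the paper's Schwinger-Dyson scheme, just the same benchmarking idea).

```python
import numpy as np
from scipy.integrate import quad

# Zero-dimensional "field theory": the path integral collapses to an
# ordinary integral. Illustrative action: S = phi^2/2 + g*phi^4/4.
g = 0.01

Z, _ = quad(lambda p: np.exp(-p**2 / 2 - g * p**4 / 4), -np.inf, np.inf)
num, _ = quad(lambda p: p**2 * np.exp(-p**2 / 2 - g * p**4 / 4),
              -np.inf, np.inf)
exact = num / Z                      # exact two-point function <phi^2>

# Expanding exp(-g*phi^4/4) in Gaussian moments gives <phi^2> = 1 - 3g + O(g^2).
first_order = 1 - 3 * g              # series truncated at O(g)
print(round(exact, 4), first_order)
```

At small coupling the truncated series tracks the exact value closely; increasing `g` makes the discrepancy grow, which is exactly the kind of comparison the abstract describes.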
Exploring atmospheric blocking with GPS radio occultation observations
L. Brunner
2016-04-01
Atmospheric blocking has been closely investigated in recent years due to its impact on weather and climate, such as heat waves, droughts, and flooding. We use, for the first time, satellite-based observations from Global Positioning System (GPS) radio occultation (RO) and explore their ability to resolve blocking in order to potentially open up new avenues complementing models and reanalyses. RO delivers globally available and vertically highly resolved profiles of atmospheric variables such as temperature and geopotential height (GPH). Applying a standard blocking detection algorithm, we find that RO data robustly capture blocking as demonstrated for two well-known blocking events over Russia in summer 2010 and over Greenland in late winter 2013. During blocking episodes, vertically resolved GPH gradients show a distinct anomalous behavior compared to climatological conditions up to 300 hPa and sometimes even further up into the tropopause. The accompanying increase in GPH of up to 300 m in the upper troposphere yields a pronounced tropopause height increase. Corresponding temperatures rise up to 10 K in the middle and lower troposphere. These results demonstrate the feasibility and potential of RO to detect and resolve blocking and in particular to explore the vertical structure of the atmosphere during blocking episodes. This new observation-based view is available globally at the same quality so that blocking in the Southern Hemisphere can also be studied with the same reliability as in the Northern Hemisphere.
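A standard blocking detection algorithm of the kind referred to above is the Tibaldi-Molteni meridional-gradient test. Below is a minimal sketch on synthetic 500 hPa geopotential height data; the field, latitudes, and thresholds are illustrative, not the paper's RO data.

```python
import numpy as np

def is_blocked(z500, lats, lon_idx, phi0=60.0, phi_s=40.0, phi_n=80.0):
    """Tibaldi-Molteni-style test at one longitude.

    GHGS = (Z(phi0) - Z(phiS)) / (phi0 - phiS)   must be > 0
    GHGN = (Z(phiN) - Z(phi0)) / (phiN - phi0)   must be < -10 m/deg
    """
    def z_at(phi):
        return z500[np.argmin(np.abs(lats - phi)), lon_idx]
    ghgs = (z_at(phi0) - z_at(phi_s)) / (phi0 - phi_s)
    ghgn = (z_at(phi_n) - z_at(phi0)) / (phi_n - phi0)
    return ghgs > 0 and ghgn < -10

# Synthetic 500 hPa geopotential height field (metres): a zonal profile
# with an artificial "block" (reversed meridional gradient) at longitude 0.
lats = np.arange(30, 85, 2.5)
z500 = 5600 - 8 * (lats - 30)[:, None] * np.ones((1, 2))  # lons 0 and 1
z500[:, 0] = 5500 + 6 * (lats - 30)          # reversed gradient: block-like
z500[np.argmin(np.abs(lats - 80)), 0] = (
    z500[np.argmin(np.abs(lats - 60)), 0] - 250)

print(is_blocked(z500, lats, 0), is_blocked(z500, lats, 1))
```

The reversed south-to-north gradient at longitude 0 satisfies both criteria, while the climatological profile at longitude 1 does not.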
Error Concealment using Neural Networks for Block-Based Image Coding
M. Mokos
2006-06-01
In this paper, a novel adaptive error concealment (EC) algorithm, which lowers the requirements for channel coding, is proposed. It conceals errors in block-based image coding systems by using neural networks. In the proposed algorithm, only the intra-frame information is used for reconstruction of an image with isolated damaged blocks. The information of the pixels surrounding a damaged block is used to recover the errors using the neural network models. Computer simulation results show that the visual quality and the MSE evaluation of a reconstructed image are significantly improved using the proposed EC algorithm. We also propose a simple non-neural approach for comparison.
Cache-Oblivious Algorithms and Data Structures
Brodal, Gerth Stølting
2004-01-01
Frigo, Leiserson, Prokop and Ramachandran in 1999 introduced the ideal-cache model as a formal model of computation for developing algorithms in environments with multiple levels of caching, and coined the terminology of cache-oblivious algorithms. Cache-oblivious algorithms are described...... as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The results are algorithms that automatically...... apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al....
Casanova, Henri; Robert, Yves
2008-01-01
""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
Abdominal wall blocks in adults
Børglum, Jens; Gögenür, Ismail; Bendtsen, Thomas F
2016-01-01
Purpose of review: Abdominal wall blocks in adults have evolved much during the last decade, particularly with the introduction of ultrasound-guided (USG) blocks. This review highlights recent advances of block techniques within this field and proposes directions for future research. Recent findings: Ultrasound guidance is now considered the gold standard for abdominal wall blocks in adults, even though some landmark-based blocks are still being investigated. The efficiency of USG transversus abdominis plane blocks in relation to many surgical procedures involving the abdominal wall...... been introduced with success. Future research should also investigate the effect of specific abdominal wall blocks on the neuroendocrine and inflammatory stress response after surgery. Summary: USG abdominal wall blocks in adults are commonplace techniques today. Most abdominal wall blocks are assigned......
Improved 3-D turbomachinery CFD algorithm
Janus, J. Mark; Whitfield, David L.
1988-01-01
The building blocks of a computer algorithm developed for the time-accurate flow analysis of rotating machines are described. The flow model is a finite volume method utilizing a high resolution approximate Riemann solver for interface flux definitions. This block LU implicit numerical scheme possesses apparent unconditional stability. Multi-block composite gridding is used to partition the field in an orderly manner into a specified arrangement. Block interfaces, including dynamic interfaces, are treated so as to mimic interior block communication. Special attention is given to the reduction of in-core memory requirements by placing the burden on secondary storage media. Broad applicability is implied, although the results presented are restricted to that of an even blade count configuration. Several other configurations are presently under investigation, the results of which will appear in subsequent publications.
SNUPPS power block engineering
Thompson, C A [Bechtel Power Corp., San Francisco, Calif. (USA)
1975-11-01
The Standard Power Block is based on a modular concept and consists of the following: turbine building, auxiliary building, fuel building, control building, radwaste building, diesel generators building, and outside storage tanks and transformers. Each power block unit includes a Westinghouse pressurized water reactor and has a thermal power rating of 3425 MW(t). The corresponding General Electric turbine generator net electrical output is 1188 MW(e). This standardization approach results in not only a reduction in the costs of engineering, licensing, procurement, and project planning, but should also result in additional savings by the application of experience gained in the construction of the first unit to the following units and early input of construction data to design.
Berlin, Joey
2017-04-01
Proponents of a block grant or per-capita cap trumpet them as vehicles for the federal government to give the states a capped amount of funding for Medicaid that legislatures would effectively distribute how they see fit. Questions abound as to what capped Medicaid funding would look like, and what effect it would have on the current Medicaid-eligible population, covered services, and physician payments.
SUPERFICIAL CERVICAL PLEXUS BLOCK
Komang Mega Puspadisari
2014-01-01
Superficial cervical plexus block is a regional anesthesia technique of the neck limited to the superficial fascia. The anesthesia is used to relieve pain arising either during or after surgery is completed. The technique can be performed by landmarks or with ultrasound guidance. The midpoint of the posterior border of the sternocleidomastoid is identified and the procedure is done at that point or at the level of the cricoid cartilage.
Algorithm for Compressing Time-Series Data
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
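The block-by-block Chebyshev fitting described above can be sketched as follows; the toy signal, block length, and polynomial degree are illustrative choices, not the flight parameters.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sketch of the idea: fit a short Chebyshev series to each block
# ("fitting interval") of a 1-D data stream, keep only the coefficients,
# and reconstruct on the decompression side.
t = np.linspace(0, 8 * np.pi, 1024)
signal = np.sin(t) + 0.3 * np.sin(3.1 * t)       # smooth toy telemetry

block, degree = 128, 9                           # 128 samples -> 10 coeffs
x = np.linspace(-1, 1, block)                    # Chebyshev domain per block

coeffs = [C.chebfit(x, signal[i:i + block], degree)
          for i in range(0, len(signal), block)]
recon = np.concatenate([C.chebval(x, c) for c in coeffs])

ratio = signal.size / (len(coeffs) * (degree + 1))
max_err = float(np.max(np.abs(recon - signal)))
print(ratio, round(max_err, 4))
```

For a smooth signal the residual is tiny and nearly uniform across each fitting interval, while the stored coefficients are more than an order of magnitude fewer than the raw samples, which is the trade-off the algorithm exploits.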
ProxImaL: efficient image optimization using proximal algorithms
Heide, Felix; Diamond, Steven; Nieß ner, Matthias; Ragan-Kelley, Jonathan; Heidrich, Wolfgang; Wetzstein, Gordon
2016-01-01
domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety
E-Block: A Tangible Programming Tool with Graphical Blocks
Danli Wang
2013-01-01
This paper presents a tangible programming tool, E-Block, for children aged 5 to 9 to gain a preliminary understanding of programming by building blocks. With embedded artificial intelligence, the tool defines programming blocks with sensors as the input and enables children to write programs that complete tasks on the computer. The symbol on each programming block's surface is used to help children understand the function of the block. The sequence information is transferred to the computer by microcomputers and then translated into semantic information. The system applies wireless and infrared technologies and provides users with feedback on both the screen and the programming blocks. Preliminary user studies using observation and user interview methods are reported for E-Block's prototype. The test results show that E-Block is attractive to children and easy to learn and use. The project also highlights potential advantages of using single chip microcomputer (SCM) technology to develop tangible programming tools for children.
Neto, A.; Cavallo, D.; Gerini, G.
2011-01-01
This paper presents a Green's function based procedure to assess edge effects in finite wideband connected arrays. Truncation effects are more severe in broadband arrays, since the inter-element mutual coupling facilitates the propagation of edge-born waves that can become dominant over large
Jolani, Shahab
2014-01-01
For a multivariate normal vector in which some elements, but not necessarily all, are truncated, we derive the moment generating function and obtain expressions for the first two moments involving the multivariate hazard gradient. To show one of many applications of these moments, we then extend the
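The univariate special case makes the hazard-gradient connection concrete: for a normal variable truncated below, the mean shifts by σ times the hazard function. The sketch below uses this standard result (not the paper's multivariate derivation) and cross-checks it numerically.

```python
import numpy as np
from scipy import stats

# One-dimensional special case of the truncated-normal moments: for a
# normal variable truncated below at a, the mean involves the hazard
# function lambda(alpha) = phi(alpha) / (1 - Phi(alpha)).
mu, sigma, a = 2.0, 1.5, 3.0
alpha = (a - mu) / sigma

hazard = stats.norm.pdf(alpha) / stats.norm.sf(alpha)
mean_formula = mu + sigma * hazard                  # E[X | X > a]

# Cross-check against scipy's truncated normal (bounds are standardized).
mean_scipy = stats.truncnorm(alpha, np.inf, loc=mu, scale=sigma).mean()
print(round(mean_formula, 6), round(mean_scipy, 6))
```

In the multivariate setting discussed above, the scalar hazard is replaced by the hazard gradient, but the structure of the first-moment expression is analogous.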
Buch-Kromann, Tine; Nielsen, Jens
2012-01-01
This paper introduces a multivariate density estimator for truncated and censored data with special emphasis on extreme values based on survival analysis. A local constant density estimator is considered. We extend this estimator by means of tail flattening transformation, dimension reducing prior...
On the Computation of Optimal Monotone Mean-Variance Portfolios via Truncated Quadratic Utility
Ales Cerný; Fabio Maccheroni; Massimo Marinacci; Aldo Rustichini
2008-01-01
We report a surprising link between optimal portfolios generated by a special type of variational preferences called divergence preferences (cf. [8]) and optimal portfolios generated by classical expected utility. As a special case we connect optimization of truncated quadratic utility (cf. [2]) to the optimal monotone mean-variance portfolios (cf. [9]), thus simplifying the computation of the latter.
Wolf, D.; Keblinski, P.; Phillpot, S.R.; Eggebrecht, J.
1999-01-01
Based on a recent result showing that the net Coulomb potential in condensed ionic systems is rather short ranged, an exact and physically transparent method permitting the evaluation of the Coulomb potential by direct summation over the r⁻¹ Coulomb pair potential is presented. The key observation is that the problems encountered in determining the Coulomb energy by pairwise, spherically truncated r⁻¹ summation are a direct consequence of the fact that the system summed over is practically never neutral. A simple method is developed that achieves charge neutralization wherever the r⁻¹ pair potential is truncated. This enables the extraction of the Coulomb energy, forces, and stresses from a spherically truncated, usually charged environment in a manner that is independent of the grouping of the pair terms. The close connection of our approach with the Ewald method is demonstrated and exploited, providing an efficient method for the simulation of even highly disordered ionic systems by direct, pairwise r⁻¹ summation with spherical truncation at rather short range, i.e., a method which fully exploits the short-ranged nature of the interactions in ionic systems. The method is validated by simulations of crystals, liquids, and interfacial systems, such as free surfaces and grain boundaries. copyright 1999 American Institute of Physics
Truncated SALL1 Impedes Primary Cilia Function in Townes-Brocks Syndrome
Bozal-Basterra, Laura; Martín-Ruíz, Itziar; Pirone, Lucia
2018-01-01
Townes-Brocks syndrome (TBS) is caused by mutations in the gene encoding the transcriptional repressor SALL1 and is associated with the presence of a truncated protein that localizes to the cytoplasm. Here, we provide evidence that SALL1 mutations might cause TBS by means beyond its transcriptional capacity. By using proximity proteomics, we show...
Electron transfer reactions, cyanide and O2 binding of truncated hemoglobin from Bacillus subtilis
Fernandez, Esther; Larsson, Jonas T.; McLean, Kirsty J.
2013-01-01
The truncated hemoglobin from Bacillus subtilis (trHb-Bs) possesses a surprisingly high affinity for oxygen and resistance to (auto)oxidation; its physiological role in the bacterium is not understood and may be connected with its very special redox and ligand binding reactions. Electron transfer...
Five-dimensional truncation of the plane incompressible Navier-Stokes equations
Boldrighini, C. [Camerino Univ. (Italy), Istituto di Matematica]; Franceschini, V. [Modena Univ. (Italy), Istituto Matematico]
1979-01-01
A five-mode truncation of the Navier-Stokes equations for a two-dimensional incompressible fluid on a torus is considered. A computer analysis shows that for a certain range of the Reynolds number the system exhibits stochastic behaviour, approached through an involved sequence of bifurcations.
Carrier, P.; Remp, H.J.; Chaborel, J.P.; Lallement, M.; Bussiere, F.; Darcourt, J.; Lallement, M.; Leblanc-Talent, P.; Machiavello, J.C.; Ettore, F.
2004-01-01
The sentinel lymph node (SNL) detection in breast cancer has recently been validated. It allows a reduction in the number of axillary dissections and their corresponding side effects. We tested a simple method of image truncation in order to improve the sensitivity of lymphoscintigraphy. This approach is justified by the magnitude of the uptake difference between the injection site and the SNL. We prospectively investigated SNL detection using a triple method (lymphoscintigraphy, blue dye and surgical radio detection) in 130 patients. The SNL was identified in 104 of the 130 patients (80%) using the standard images and in 126 of them (96.9%) using the truncated images. Blue dye detection and surgical radio detection had sensitivities of 76.9% and 98.5%, respectively. The false negative rate was 10.3%. 288 SNL were dissected, of which 31 were metastatic. Among the 19 patients with a metastatic SNL and more than one SNL detected, the metastatic SNL was not the hottest in 9 of them. 28 metastatic SNL were detected on truncated images versus only 19 on standard images. Truncation, which dramatically increases the sensitivity of lymphoscintigraphy, increases the number of dissected SNL and probably reduces the false negative rate. (author)
A computational approach for fluid queues driven by truncated birth-death processes.
Lenin, R.B.; Parthasarathy, P.R.
2000-01-01
In this paper, we analyze fluid queues driven by truncated birth-death processes with general birth and death rates. We compute the equilibrium distribution of the content of the fluid buffer by providing efficient numerical procedures to compute the eigenvalues and the eigenvectors of the
Organisation and melting of solution grown truncated lozenge polyethylene single crystals
Loos, J.; Tian, M.
2003-01-01
Morphological features and the melting behaviour of truncated lozenge crystals have been studied. For the crystals investigated, the heights of the (110) and the (200) sectors were measured to be 14.5 and 12.7 nm, respectively, using atomic force microscopy (AFM) in contact and non-contact mode.
Low-mode truncation methods in the sine-Gordon equation
Xiong Chuyu.
1991-01-01
In this dissertation, the author studies the chaotic and coherent motions (i.e., low-dimensional chaotic attractors) in some near-integrable partial differential equations, particularly the sine-Gordon equation and the nonlinear Schroedinger equation. In order to study these motions, he uses low-mode truncation methods to reduce the partial differential equations to truncated models (low-dimensional ordinary differential equations). By applying the many methods available for low-dimensional ordinary differential equations, he can understand the low-dimensional chaotic attractors of PDEs much better. However, there are two important questions one needs to answer: (1) How many modes are enough for the low-mode truncated models to capture the dynamics uniformly? (2) Is the chaotic attractor in a low-mode truncated model close to the chaotic attractor in the original PDE, and how close is it? He has developed two groups of powerful methods to help answer these two questions: computational methods for continuation and local bifurcation, and local Lyapunov exponents and Lyapunov exponents. Using these methods, he concludes that the 2N-nls ODE is a good model for the sine-Gordon equation and the nonlinear Schroedinger equation provided one chooses a 'good' basis and uses 'enough' modes (where 'enough' depends on the parameters of the system but is small for the parameters studied here). Therefore, one can use the 2N-nls ODE to study the chaos of PDEs in more depth.
The Most Developmentally Truncated Fishes Show Extensive Hox Gene Loss and Miniaturized Genomes
Malmstrøm, Martin; Britz, Ralf; Matschiner, Michael; Tørresen, Ole K; Hadiaty, Renny Kurnia; Yaakob, Norsham; Tan, Heok Hui; Jakobsen, Kjetill Sigurd; Salzburger, Walter; Rüber, Lukas
2018-01-01
The world’s smallest fishes belong to the genus Paedocypris. These miniature fishes are endemic to an extreme habitat: the peat swamp forests in Southeast Asia, characterized by highly acidic blackwater. This threatened habitat is home to a large array of fishes, including a number of miniaturized but also developmentally truncated species. Especially the genus Paedocypris is characterized by profound, organism-wide developmental truncation, resulting in sexually mature individuals of <8 mm in length with a larval phenotype. Here, we report on evolutionary simplification in the genomes of two species of the dwarf minnow genus Paedocypris using whole-genome sequencing. The two species feature unprecedented Hox gene loss and genome reduction in association with their massive developmental truncation. We also show how other genes involved in the development of musculature, nervous system, and skeleton have been lost in Paedocypris, mirroring its highly progenetic phenotype. Further, our analyses suggest two mechanisms responsible for the genome streamlining in Paedocypris in relation to other Cypriniformes: severe intron shortening and reduced repeat content. As the first report on the genomic sequence of a vertebrate species with organism-wide developmental truncation, the results of our work enhance our understanding of genome evolution and how genotypes are translated to phenotypes. In addition, as a naturally simplified system closely related to zebrafish, Paedocypris provides novel insights into vertebrate development. PMID:29684203
Importance-truncated shell model for multi-shell valence spaces
Stumpf, Christina; Vobig, Klaus; Roth, Robert [Institut fuer Kernphysik, TU Darmstadt (Germany)
2016-07-01
The valence-space shell model is one of the workhorses in nuclear structure theory. In traditional applications, shell-model calculations are carried out using effective interactions constructed in a phenomenological framework for rather small valence spaces, typically spanned by one major shell. We improve on this traditional approach addressing two main aspects. First, we use new effective interactions derived in an ab initio approach and, thus, establish a connection to the underlying nuclear interaction providing access to single- and multi-shell valence spaces. Second, we extend the shell model to larger valence spaces by applying an importance-truncation scheme based on a perturbative importance measure. In this way, we reduce the model space to the relevant basis states for the description of a few target eigenstates and solve the eigenvalue problem in this physics-driven truncated model space. In particular, multi-shell valence spaces are not tractable otherwise. We combine the importance-truncated shell model with refined extrapolation schemes to approximately recover the exact result. We present first results obtained in the importance-truncated shell model with the newly derived ab initio effective interactions for multi-shell valence spaces, e.g., the sdpf shell.
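As a toy illustration of the perturbative importance measure described above (the real shell-model machinery is far richer; the function, threshold, and test matrix here are illustrative assumptions), one can keep only basis states whose first-order coupling to a reference state exceeds a threshold and diagonalize in the reduced space:

```python
import numpy as np

def importance_truncated_eigenvalue(H, ref_index=0, kappa=1e-5):
    """Keep the reference state plus every basis state whose first-order
    amplitude |H[i,ref] / (H[i,i] - H[ref,ref])| exceeds kappa, then
    diagonalize H in that reduced (importance-truncated) space."""
    keep = [ref_index]
    for i in range(H.shape[0]):
        if i == ref_index:
            continue
        amplitude = H[i, ref_index] / (H[i, i] - H[ref_index, ref_index])
        if abs(amplitude) > kappa:
            keep.append(i)
    H_red = H[np.ix_(keep, keep)]
    return np.linalg.eigvalsh(H_red)[0], len(keep)

# toy symmetric "Hamiltonian": rising diagonal, weak coupling to state 0
n = 200
H = np.diag(np.arange(n, dtype=float))
H[0, 1:] = H[1:, 0] = 1e-3 / np.arange(1, n)
e_trunc, dim = importance_truncated_eigenvalue(H)
e_full = np.linalg.eigvalsh(H)[0]
```

With the weak couplings above, only a handful of states pass the threshold, yet the truncated ground-state energy agrees with the full diagonalization to well below the truncation scale, which is the point of the physics-driven model-space reduction.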
Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction
Cohen, A. C.
1971-01-01
A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
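The truncated-Poisson ingredient of such a model can be sketched as the zero-truncated form, i.e. conditioning on at least one event (the negative-binomial layer for the larger area is omitted; names and the rate value are illustrative):

```python
import math

def zero_truncated_poisson_pmf(k, lam):
    """P(K = k | K >= 1) for K ~ Poisson(lam): the Poisson pmf
    renormalized by 1 - P(K = 0) = 1 - exp(-lam)."""
    if k < 1:
        return 0.0
    return math.exp(-lam) * lam ** k / (math.factorial(k) * (1.0 - math.exp(-lam)))

lam = 2.3  # illustrative thunderstorm-event rate
# probabilities over k = 1, 2, ... must renormalize to 1
total = sum(zero_truncated_poisson_pmf(k, lam) for k in range(1, 80))
```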
Gökhan Gökdere
2014-05-01
In this paper, closed-form expressions for the moments of truncated Pareto order statistics are obtained by using the conditional distribution. We also derive some results for the moments which will be useful for moment computations based on ordered data.
Integral equation solution for truncated slab structures by using a fringe current formulation
Jørgensen, Erik; Toccafondi, A.; Maci, S.
1999-01-01
Full-wave solutions of truncated dielectric slab problems are interesting for a variety of engineering applications, in particular patch antennas on finite ground planes. For this application a canonical reference solution is that of a semi-infinite slab illuminated by a line source. Standard int...
Family losses following truncation selection in populations of half-sib families
J. H. Roberds; G. Namkoong; H. Kang
1980-01-01
Family losses during truncation selection may be sizable in populations of half-sib families. Substantial losses may occur even in populations containing little or no variation among families. Heavier losses will occur, however, under conditions of high heritability where there is considerable family variation. Standard deviations and therefore variances of family loss...
No evidence that protein truncating variants in BRIP1 are associated with breast cancer risk
Easton, Douglas F; Lesueur, Fabienne; Decker, Brennan
2016-01-01
BACKGROUND: BRCA1 interacting protein C-terminal helicase 1 (BRIP1) is one of the Fanconi Anaemia Complementation (FANC) group family of DNA repair proteins. Biallelic mutations in BRIP1 are responsible for FANC group J, and previous studies have also suggested that rare protein truncating variants...
Tao, Rumao; Si, Lei; Ma, Yanxing; Zhou, Pu; Liu, Zejin
2012-08-10
The propagation properties of coherently combined truncated laser beam arrays with beam distortions through non-Kolmogorov turbulence are studied in detail both analytically and numerically. The analytical expressions for the average intensity and the beam width of coherently combined truncated laser beam arrays with beam distortions propagating through turbulence are derived based on the combination of statistical optics methods and the extended Huygens-Fresnel principle. The effect of beam distortions, such as amplitude modulation and phase fluctuation, is studied by numerical examples. The numerical results reveal that phase fluctuations have significant influence on the spreading of coherently combined truncated laser beam arrays in non-Kolmogorov turbulence, and the effects of the phase fluctuations can be negligible as long as the phase fluctuations are controlled under a certain level, i.e., a>0.05 for the situation considered in the paper. Furthermore, large phase fluctuations can convert the beam distribution rapidly to a Gaussian form, vary the spreading, weaken the optimum truncation effects, and suppress the dependence of spreading on the parameters of the non-Kolmogorov turbulence.
A computational approach for a fluid queue driven by a truncated birth-death process
Lenin, R.B.; Parthasarathy, P.R.
1999-01-01
In this paper, we consider a fluid queue driven by a truncated birth-death process with general birth and death rates. We find the equilibrium distribution of the content of the fluid buffer by computing the eigenvalues and eigenvectors of an associated real tridiagonal matrix. We provide efficient
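The tridiagonal structure underlying this computation is easy to sketch: the generator of a truncated birth-death process is tridiagonal, its rows sum to zero, and its eigenvalues are real and non-positive. A minimal numerical illustration (rates and sizes are invented; the paper's actual procedure for the buffer-content distribution is more elaborate):

```python
import numpy as np

def birth_death_generator(birth, death):
    """Generator matrix Q of a truncated birth-death process.
    birth[i] is the rate from state i up to i+1;
    death[i] is the rate from state i+1 down to i."""
    n = len(birth) + 1
    Q = np.zeros((n, n))
    for i in range(n - 1):
        Q[i, i + 1] = birth[i]
        Q[i + 1, i] = death[i]
    np.fill_diagonal(Q, -Q.sum(axis=1))  # rows of a generator sum to zero
    return Q

# illustrative constant rates, 6 states
Q = birth_death_generator([1.0] * 5, [1.5] * 5)
eigvals = np.linalg.eigvals(Q)
```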
Bogdan Gheorghe Munteanu
2013-01-01
Using stochastic approximations, this paper studies the convergence in distribution of the fractional parts of the sum of random variables to the truncated exponential distribution with parameter lambda. This is made feasible by means of the Fourier-Stieltjes sequence (FSS) of the random variable.
Parkkila, P.; Štefl, Martin; Olžyńska, Agnieszka; Hof, Martin; Kinnunen, P. K. J.
2015-01-01
Vol. 1848, No. 1 (2015), pp. 167-173. ISSN 0005-2736. R&D Projects: GA ČR GBP208/12/G016. Institutional support: RVO:61388955. Keywords: oxidatively truncated phosphatidylcholines; lateral diffusion; fluorescence correlation spectroscopy. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 3.687, year: 2015
Block and sub-block boundary strengthening in lath martensite
Du, C.; Hoefnagels, J.P.M.; Vaes, R.; Geers, M.G.D.
2016-01-01
Well-defined uniaxial micro-tensile tests were performed on lath martensite single block specimens and multi-block specimens with different number of block boundaries parallel to the loading direction. Detailed slip trace analyses consistently revealed that in the {110}<111> slip system with the
Motion of isolated open vortex filaments evolving under the truncated local induction approximation
Van Gorder, Robert A.
2017-11-01
The study of nonlinear waves along open vortex filaments continues to be an area of active research. While the local induction approximation (LIA) is attractive due to locality compared with the non-local Biot-Savart formulation, it has been argued that LIA appears too simple to model some relevant features of Kelvin wave dynamics, such as Kelvin wave energy transfer. Such transfer of energy is not feasible under the LIA due to integrability, so in order to obtain a non-integrable model, a truncated LIA, which breaks the integrability of the classical LIA, has been proposed as a candidate model with which to study such dynamics. Recently Laurie et al. ["Interaction of Kelvin waves and nonlocality of energy transfer in superfluids," Phys. Rev. B 81, 104526 (2010)] derived truncated LIA systematically from Biot-Savart dynamics. The focus of the present paper is to study the dynamics of a section of common open vortex filaments under the truncated LIA dynamics. We obtain the analog of helical, planar, and more general filaments which rotate without a change in form in the classical LIA, demonstrating that while quantitative differences do exist, qualitatively such solutions still exist under the truncated LIA. Conversely, solitons and breather solutions found under the LIA should not be expected under the truncated LIA, as the existence of such solutions relies on the existence of an infinite number of conservation laws which is violated due to loss of integrability. On the other hand, similarity solutions under the truncated LIA can be quite different to their counterparts found for the classical LIA, as they must obey a t^(1/3)-type scaling rather than the t^(1/2)-type scaling commonly found in the LIA and Biot-Savart dynamics. This change in similarity scaling means that Kelvin waves are radiated at a slower rate from vortex kinks formed after reconnection events. The loss of soliton solutions and the difference in similarity scaling indicate that dynamics emergent under
Fast algorithms for transport models. Final report
Manteuffel, T.A.
1994-01-01
This project has developed a multigrid in space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid in space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state-of-the-art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel in angle algorithm was developed. A parallel version of the multilevel in angle algorithm has also been developed. Upon first glance, the shifted transport sweep has limited parallelism. Once the right-hand-side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel in angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M))
Habitat Blocks and Wildlife Corridors
Vermont Center for Geographic Information — Habitat blocks are areas of contiguous forest and other natural habitats that are unfragmented by roads, development, or agriculture. Vermont's habitat blocks are...
Atrioventricular block, ECG tracing (image)
... an abnormal rhythm (arrhythmia) called an atrioventricular (AV) block. P waves show that the top of the ... wave (and heart contraction), there is an atrioventricular block, and a very slow pulse (bradycardia).
Research of Block-Based Motion Estimation Methods for Video Compression
Tropchenko Andrey
2016-08-01
This work is a review of the block-based algorithms used for motion estimation in video compression. It examines different types of block-based algorithms, ranging from the simplest, named Full Search, to fast adaptive algorithms like Hierarchical Search. The algorithms evaluated in this paper are widely accepted by the video compression community and have been used in implementing various standards, such as MPEG-4 Visual and H.264. The work also presents a very brief introduction to the entire flow of video compression.
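The simplest of the surveyed methods, Full Search, can be sketched as exhaustive minimization of the sum of absolute differences (SAD) over a search window (the function and the synthetic frames are illustrative, not code from the review):

```python
import numpy as np

def full_search(ref, cur, by, bx, bsize, radius):
    """Exhaustive block matching: try every displacement (dy, dx) within
    +/-radius and keep the one minimizing the SAD between the current
    block and the displaced block in the reference frame."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(int)
    best_mv, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(block - ref[y:y + bsize, x:x + bsize].astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

# current frame is the reference shifted down 2 pixels and right 1 pixel,
# so the search should recover the motion vector (-2, -1) with zero SAD
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(np.roll(ref, 2, axis=0), 1, axis=1)
mv, sad = full_search(ref, cur, by=16, bx=16, bsize=8, radius=7)
```

Full Search is optimal within the window but costs (2*radius+1)^2 SAD evaluations per block, which is exactly what the fast adaptive algorithms in the review try to avoid.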
Trilateral market coupling. Algorithm appendix
2006-03-01
Market Coupling is both a mechanism for matching orders on the exchange and an implicit cross-border capacity allocation mechanism. Market Coupling improves the economic surplus of the coupled markets: the highest purchase orders and the lowest sale orders of the coupled power exchanges are matched, regardless of the area where they have been submitted; matching results depend however on the Available Transfer Capacity (ATC) between the coupled hubs. Market prices and schedules of the day-ahead power exchanges of the several connected markets are simultaneously determined with the use of the Available Transfer Capacity defined by the relevant Transmission System Operators. The transmission capacity is thereby implicitly auctioned and the implicit cost of the transmission capacity from one market to the other is the price difference between the two markets. In particular, if the transmission capacity between two markets is not fully used, there is no price difference between the markets and the implicit cost of the transmission capacity is null. Market coupling relies on the principle that the market with the lowest price exports electricity to the market with the highest price. Two situations may appear: either the Available Transfer Capacity (ATC) is large enough and the prices of both markets are equalized (price convergence), or the ATC is too small and the prices cannot be equalized. The Market Coupling algorithm takes as an input: 1 - The Available Transfer Capacity (ATC) between each area for each flow direction and each Settlement Period of the following day (i.e. for each hour of following day); 2 - The (Block Free) Net Export Curves (NEC) of each market for each hour of the following day, i.e., the difference between the total quantity of Divisible Hourly Bids and the total quantity of Divisible Hourly Offers for each price level. The NEC reflects a market's import or export volume sensitivity to price. 3 - The Block Orders submitted by the participants in
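The two situations described above (price convergence versus a binding ATC) can be sketched for two markets with linear Net Export Curves; the linear NECs, names, and numbers are illustrative assumptions, and real market coupling also handles block orders and more than two hubs:

```python
def couple_two_markets(p_a0, slope_a, p_b0, slope_b, atc):
    """Couple a low-price market A with a high-price market B (p_b0 > p_a0).
    Assumed linear net-export curves: A exports slope_a*(p - p_a0),
    B imports slope_b*(p_b0 - p). Flow is capped at the ATC.
    Returns (price_a, price_b, flow)."""
    # common price and flow if the interconnector were unconstrained
    p_star = (slope_a * p_a0 + slope_b * p_b0) / (slope_a + slope_b)
    flow = slope_a * (p_star - p_a0)
    if flow <= atc:                      # ATC large enough: prices converge
        return p_star, p_star, flow
    # ATC binding: each market settles on its own side of the constraint
    return p_a0 + atc / slope_a, p_b0 - atc / slope_b, atc

# convergence case: ample transfer capacity
pa, pb, f = couple_two_markets(30.0, 100.0, 50.0, 100.0, atc=2000.0)
# congestion case: same markets, but the ATC caps the flow at 500
pa2, pb2, f2 = couple_two_markets(30.0, 100.0, 50.0, 100.0, atc=500.0)
```

In the congested case the remaining price difference (pb2 - pa2) is precisely the implicit cost of the transmission capacity mentioned above; when capacity is not fully used, that spread is null.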
Cipher block based authentication module: A hardware design perspective
Michail, H.E.; Schinianakis, D.; Goutis, C.E.; Kakarountas, A.P.; Selimis, G.
2011-01-01
Message Authentication Codes (MACs) are widely used in order to authenticate data packets, which are transmitted thought networks. Typically MACs are implemented using modules like hash functions and in conjunction with encryption algorithms (like Block Ciphers), which are used to encrypt the
Survey and Benchmark of Block Ciphers for Wireless Sensor Networks
Law, Y.W.; Doumen, J.M.; Hartel, Pieter H.
Cryptographic algorithms play an important role in the security architecture of wireless sensor networks (WSNs). Choosing the most storage- and energy-efficient block cipher is essential, due to the fact that these networks are meant to operate without human intervention for a long period of time.
Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip
2017-10-01
In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists in truncating the conjugate gradient algorithm at a fixed predetermined order leading to a fixed computational cost and can thus be considered "non-iterative." This gives the possibility to derive analytical forces avoiding the usual energy conservation (i.e., drifts) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than that with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings using usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making it a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.
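The fixed-order truncation itself is easy to sketch: standard conjugate gradient stopped after a preset number of iterations, so the cost is fixed and the result is a deterministic function of the inputs (the matrix and sizes are illustrative; the analytical-gradient machinery discussed in the paper is not shown):

```python
import numpy as np

def truncated_cg(A, b, order):
    """Conjugate gradient truncated at a fixed, predetermined number of
    iterations ('order'): a non-iterative, fixed-cost approximation to
    the solution of A x = b for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(order):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(2)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20.0 * np.eye(20)     # well-conditioned SPD stand-in matrix
b = rng.standard_normal(20)
x2 = truncated_cg(A, b, order=2)    # analogue of a low-order TCG solve
x_exact = np.linalg.solve(A, b)
```

Because the iteration count is fixed rather than residual-driven, the output is a smooth closed-form function of A and b, which is what makes analytical forces (and hence good energy conservation) tractable.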
Fermion-scalar conformal blocks
Iliesiu, Luca [Joseph Henry Laboratories, Princeton University,Washington Road, Princeton, NJ 08544 (United States); Kos, Filip [Department of Physics, Yale University,217 Prospect Street, New Haven, CT 06520 (United States); Poland, David [Department of Physics, Yale University,217 Prospect Street, New Haven, CT 06520 (United States); School of Natural Sciences, Institute for Advanced Study,1 Einstein Dr, Princeton, New Jersey 08540 (United States); Pufu, Silviu S. [Joseph Henry Laboratories, Princeton University,Washington Road, Princeton, NJ 08544 (United States); Simmons-Duffin, David [School of Natural Sciences, Institute for Advanced Study,1 Einstein Dr, Princeton, New Jersey 08540 (United States); Yacoby, Ran [Joseph Henry Laboratories, Princeton University,Washington Road, Princeton, NJ 08544 (United States)
2016-04-13
We compute the conformal blocks associated with scalar-scalar-fermion-fermion 4-point functions in 3D CFTs. Together with the known scalar conformal blocks, our result completes the task of determining the so-called ‘seed blocks’ in three dimensions. Conformal blocks associated with 4-point functions of operators with arbitrary spins can now be determined from these seed blocks by using known differential operators.
Powder wastes confinement block and manufacturing process of this block
Dagot, L.; Brunel, G.
1996-01-01
This invention concerns a containment block for powder wastes and a process for manufacturing this block. In this block, the waste powder is encapsulated in a thermosetting polymer, for example an epoxy resin, and the encapsulated resin is then dispersed in cement. This block can contain between 45 and 55% by mass of wastes, between 18 and 36% by mass of polymer and between 14 and 32% by mass of cement. Such a containment block can be used for radioactive waste storage. (O.M.). 4 refs
Konstantin A. Shapovalov
2013-01-01
The article considers a general approach to form factor calculation for structured particles and particle systems in the Rayleigh-Gans-Debye (RGD) approximation. Using this approach, formulas for the amplitude of light scattering by a truncated pyramid and a truncated cone are obtained in the RGD approximation. The light scattering indicatrices of a truncated pyramid and a truncated cone in the RGD approximation are calculated.
Martini, Enrica; Breinbjerg, Olav; Maci, Stefano
2006-01-01
A simple and effective procedure for the reduction of truncation error in planar near-field to far-field transformations is presented. The starting point is the consideration that the actual scan plane truncation implies a reliability of the reconstructed plane wave spectrum of the field radiated...
Building Curriculum during Block Play
Andrews, Nicole
2015-01-01
Blocks are not just for play! In this article, Nicole Andrews describes observing the interactions of three young boys enthusiastically engaged in the kindergarten block center of their classroom, using blocks in a building project that displayed their ability to use critical thinking skills, physics exploration, and the development of language…
Wenk, E.
1976-01-01
A suggestion is made not to send the separated nuclear 'waste' from spent nuclear fuel elements directly to final storage, but to make use of the heat produced by the residual radiation, e.g. for seawater desalination. According to the invention, the activated fission products are to be processed, e.g. by calcination or vitrification, so that they can be handled. They should then be arranged in layers, alternating with plate-shaped heat-conducting pipes, to form a homogeneous block; the heat absorbed by the thermal plates should then be passed on to evaporators or heat exchangers. (UWI)
Blocking the Hawking radiation
Autzen, M.; Kouvaris, C.
2014-01-01
grows after its formation (and eventually destroys the star) instead of evaporating. The fate of the black hole is dictated by two opposing mechanisms, i.e., accretion of nuclear matter from the center of the star and Hawking radiation that tends to decrease the mass of the black hole. We study how the assumptions for the accretion rate can in fact affect the critical mass beyond which a black hole always grows. We also study to what extent degenerate nuclear matter can impede Hawking radiation due to the fact that emitted particles can be Pauli blocked at the core of the star.
Using an Augmented Lagrangian Method and block fracturing in the DDA method
Lin, C.T.; Amadei, B.; Sture, S.
1994-01-01
This paper presents two extensions to the Discontinuous Deformation Analysis (DDA) method originally proposed by Shi for modeling the response of blocky rock masses to mechanical loading. The first extension consists of improving the block contact algorithm. An Augmented Lagrangian Method is used to replace the Penalty Method originally proposed. It allows Lagrange multipliers to be introduced without increasing the number of equations that need to be solved and thus, block contact forces can be calculated more accurately. A block fracturing capability based on a three-parameter Mohr-Coulomb criterion represents the second extension. It allows for shear or tensile fracturing of intact blocks and the formation of smaller blocks.
Ivkovic, M; Zdravkovic, Z; Sotic, O [Department of Reactor Physics and Dynamics, Boris Kidric Institute of nuclear sciences Vinca, Belgrade (Yugoslavia)
1966-04-15
A graphite block was calibrated for the thermal neutron flux of the Ra-Be source using indium foils as detectors. Experimental values of the thermal neutron flux along the central vertical axis of the system were corrected for the self-shielding effect and depression of flux in the detector. The experimental values obtained were compared with the values calculated on the basis of solving the conservation neutron equation by the continuous slowing-down theory. In this theoretical calculation of the flux the Ra-Be source was divided into three resonance energy regions. The measurement of the thermal neutron diffusion length in the standard graphite block is described. The measurements were performed in the thermal neutron region of the system. The experimental results were interpreted by the diffusion theory for a point thermal neutron source in a finite system. The thermal neutron diffusion length was calculated to be L = 50.9 ± 3.1 cm for the following graphite characteristics: density = 1.7 g/cm³; boron content = 0.1 ppm; absorption cross section = 3.7 mb.
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Seismic noise attenuation using an online subspace tracking algorithm
Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang
2018-02-01
We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations and is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be directly applied to the input low-rank matrix to estimate the useful signals. Since it is an online algorithm, it is more robust to random noise than traditional truncated singular value decomposition (TSVD) based subspace tracking algorithms. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance; more specifically, it outperforms the TSVD-based singular spectrum analysis method, leaving less residual noise while saving half of the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
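As a toy illustration of the kind of online subspace tracking described above (a GROUSE-style incremental gradient step with a QR retraction, not the authors' exact algorithm; all names and parameters here are illustrative):

```python
import numpy as np

def track_subspace(samples, rank, eta=0.5):
    """Incrementally track a low-rank subspace from streaming column vectors.

    Each step projects the new sample onto the current basis, takes a
    gradient step along the residual, and re-orthonormalizes via QR
    (a simplification of the geodesic GROUSE update).
    """
    n = samples.shape[0]
    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((n, rank)))  # random orthonormal start
    residuals = []
    for v in samples.T:
        w = U.T @ v                  # coefficients of v in the current subspace
        r = v - U @ w                # residual orthogonal to the subspace
        residuals.append(np.linalg.norm(r))
        U, _ = np.linalg.qr(U + eta * np.outer(r, w))  # gradient step + retraction
    return U, residuals

# Synthetic "signal" living in a 2-D subspace of R^10:
rng = np.random.default_rng(1)
B, _ = np.linalg.qr(rng.standard_normal((10, 2)))
data = B @ rng.standard_normal((2, 300))

U, res = track_subspace(data, rank=2)
# Later residuals should be far smaller than early ones once the
# subspace has been captured.
```

Because the update touches only one sample at a time, no SVD of the full data matrix is ever formed, which is the source of the cost saving the abstract mentions.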
The wild tapered block bootstrap
Hounyo, Ulrich
In this paper, a new resampling procedure, called the wild tapered block bootstrap, is introduced as a means of calculating standard errors of estimators and constructing confidence regions for parameters based on dependent heterogeneous data. The method consists in first tapering each overlapping block… of the series, then applying the standard wild bootstrap for independent and heteroscedastically distributed observations to the overlapping tapered blocks in an appropriate way. It preserves the favorable bias and mean squared error properties of the tapered block bootstrap, which is the state-of-the-art block…-order asymptotic validity of the tapered block bootstrap as well as the wild tapered block bootstrap approximation to the actual distribution of the sample mean is also established when data are assumed to satisfy a near epoch dependent condition. The consistency of the bootstrap variance estimator for the sample…
The truncated Wigner method for Bose-condensed gases: limits of validity and applications
Sinatra, Alice; Lobo, Carlos; Castin, Yvan
2002-01-01
We study the truncated Wigner method applied to a weakly interacting spinless Bose-condensed gas which is perturbed away from thermal equilibrium by a time-dependent external potential. The principle of the method is to generate an ensemble of classical fields ψ(r) which samples the Wigner quasi-distribution function of the initial thermal equilibrium density operator of the gas, and then to evolve each classical field with the Gross-Pitaevskii equation. In the first part of the paper we improve the sampling technique over our previous work (Sinatra et al 2000 J. Mod. Opt. 47 2629-44) and we test its accuracy against the exactly solvable model of the ideal Bose gas. In the second part of the paper we investigate the conditions of validity of the truncated Wigner method. For short evolution times it is known that the time-dependent Bogoliubov approximation is valid for almost pure condensates. The requirement that the truncated Wigner method reproduces the Bogoliubov prediction leads to the constraint that the number of field modes in the Wigner simulation must be smaller than the number of particles in the gas. For longer evolution times the nonlinear dynamics of the noncondensed modes of the field plays an important role. To demonstrate this we analyse the case of a three-dimensional spatially homogeneous Bose-condensed gas and we test the ability of the truncated Wigner method to correctly reproduce the Beliaev-Landau damping of an excitation of the condensate. We have identified the mechanism which limits the validity of the truncated Wigner method: the initial ensemble of classical fields, driven by the time-dependent Gross-Pitaevskii equation, thermalizes to a classical field distribution at a temperature T class which is larger than the initial temperature T of the quantum gas. When T class significantly exceeds T a spurious damping is observed in the Wigner simulation. This leads to the second validity condition for the truncated Wigner method, T class - T
Functional analysis of Rift Valley fever virus NSs encoding a partial truncation.
Head, Jennifer A; Kalveram, Birte; Ikegami, Tetsuro
2012-01-01
Rift Valley fever virus (RVFV), which belongs to the genus Phlebovirus of the family Bunyaviridae, causes high rates of abortion and fetal malformation in infected ruminants as well as neurological disorders, blindness, or lethal hemorrhagic fever in humans. RVFV is classified as a category A priority pathogen and a select agent in the U.S., and currently there are no therapeutics available for RVF patients. NSs protein, a major virulence factor of RVFV, inhibits host transcription including interferon (IFN)-β mRNA synthesis and promotes degradation of dsRNA-dependent protein kinase (PKR). NSs self-associates at the C-terminal 17 aa., while NSs at aa. 210-230 binds to Sin3A-associated protein (SAP30) to inhibit the activation of the IFN-β promoter. Thus, we hypothesized that NSs function(s) can be abolished by truncation of specific domains, and that co-expression of nonfunctional NSs with intact NSs would attenuate NSs function by a dominant-negative effect. Unexpectedly, we found that RVFV NSs truncated at aa. 6-30, 31-55, 56-80, 81-105, 106-130, 131-155, 156-180, 181-205, 206-230, 231-248 or 249-265 lacks the ability to inhibit IFN-β mRNA synthesis and to degrade PKR. Truncated NSs were less stable in infected cells, while nuclear localization was inhibited in NSs lacking any of aa. 81-105, 106-130, 131-155, 156-180, 181-205, 206-230 or 231-248. Furthermore, none of the truncated NSs exhibited significant dominant-negative effects on NSs-mediated IFN-β suppression or PKR degradation upon co-expression in cells infected with RVFV. We also found that none of the truncated NSs, unlike intact NSs, interacts with RVFV NSs even in the presence of the intact C-terminal self-association domain. Our results suggest that conformational integrity of NSs is important for the stability, cellular localization and biological functions of RVFV NSs, and that co-expression of truncated NSs does not produce a dominant-negative phenotype.
Novel prediction- and subblock-based algorithm for fractal image compression
Chung, K.-L.; Hsu, C.-H.
2006-01-01
Fractal encoding is the most time-consuming part of fractal image compression. In this paper, a novel two-phase prediction- and subblock-based fractal encoding algorithm is presented. Initially the original gray image is partitioned into a set of variable-size blocks according to the S-tree- and interpolation-based decomposition principle. In the first phase, each variable-size range block tries to find the best matched domain block using the proposed prediction-based search strategy, which utilizes the relevant neighboring variable-size domain blocks. This first phase leads to a significant computation-saving effect. If the domain block found within the predicted search space is unacceptable, in the second phase a subblock strategy is employed to partition the current variable-size range block into smaller blocks to improve the image quality. Experimental results show that our proposed prediction- and subblock-based fractal encoding algorithm outperforms the conventional full search algorithm and the recently published spatial-correlation-based algorithm by Truong et al. in terms of encoding time and image quality. In addition, the performance comparison among our proposed algorithm and two other algorithms, the no-search-based algorithm and the quadtree-based algorithm, is also investigated.
Heliostat blocking and shadowing efficiency in the video-game era
Ramos, Alberto [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Ramos, Francisco [Nevada Software Informatica S.L., Madrid (Spain)
2014-02-15
Blocking and shadowing is one of the key effects in designing and evaluating a thermal central receiver solar tower plant. Therefore it is convenient to develop efficient algorithms to compute the area of a heliostat blocked or shadowed by the rest of the field. In this paper we explore the possibility of using very efficient clipping algorithms developed for the video game and imaging industry to compute the blocking and shadowing efficiency of a solar thermal plant layout. We propose an algorithm valid for arbitrary position, orientation and size of the heliostats. This algorithm turns out to be very accurate, free of assumptions and fast. We show the feasibility of applying this algorithm to the optimization of a solar plant by studying a couple of examples in detail.
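The clipping primitives referred to above are standard in computer graphics; a minimal sketch in the Sutherland–Hodgman style (illustrative only, not the authors' implementation) shows how the blocked or shadowed area of a heliostat facet could be computed:

```python
def clip_polygon(subject, clip):
    """Clip a subject polygon against a convex clip polygon
    (Sutherland-Hodgman). Vertices are (x, y) tuples, counter-clockwise."""
    def inside(p, a, b):
        # p lies on the left of the directed edge a->b (CCW clip polygon).
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def intersect(p, q, a, b):
        # Intersection of segment p-q with the infinite line through a-b.
        x1, y1, x2, y2 = p[0], p[1], q[0], q[1]
        x3, y3, x4, y4 = a[0], a[1], b[0], b[1]
        den = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
        t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / den
        return (x1 + t*(x2-x1), y1 + t*(y2-y1))
    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i+1) % len(clip)]
        input_list, output = output, []
        for j in range(len(input_list)):
            p, q = input_list[j], input_list[(j+1) % len(input_list)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersect(p, q, a, b))
    return output

def area(poly):
    """Polygon area via the shoelace formula."""
    return 0.5 * abs(sum(poly[i][0]*poly[(i+1) % len(poly)][1]
                         - poly[(i+1) % len(poly)][0]*poly[i][1]
                         for i in range(len(poly))))

# Shadow polygon covering the left half of a unit-square heliostat facet:
facet  = [(0, 0), (1, 0), (1, 1), (0, 1)]
shadow = [(-1, -1), (0.5, -1), (0.5, 2), (-1, 2)]
overlap = clip_polygon(facet, shadow)
# area(overlap) -> 0.5: half the facet is shadowed
```

The shadowing efficiency of a facet would then be one minus the clipped area divided by the facet area, accumulated over all occluding heliostats.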
Synthesis algorithm of VLSI multipliers for ASIC
Chua, O. H.; Eldin, A. G.
1993-01-01
Multipliers are critical sub-blocks in ASIC design, especially for digital signal processing and communications applications. A flexible multiplier synthesis tool is developed which is capable of generating multiplier blocks for word size in the range of 4 to 256 bits. A comparison of existing multiplier algorithms is made in terms of speed, silicon area, and suitability for automated synthesis and verification of its VLSI implementation. The algorithm divides the range of supported word sizes into sub-ranges and provides each sub-range with a specific multiplier architecture for optimal speed and area. The algorithm of the synthesis tool and the multiplier architectures are presented. Circuit implementation and the automated synthesis methodology are discussed.
Nearest Neighbour Corner Points Matching Detection Algorithm
Zhang Changlong
2015-01-01
Accurate corner detection plays an important part in camera calibration. To deal with the instability and inaccuracy of existing corner detection algorithms, a nearest-neighbour corner matching detection algorithm is proposed. First, it dilates the binary image of the photographed pictures, then searches for and retains the quadrilateral outlines in the image. Second, blocks that match chessboard corners are grouped into a class; if a class contains too many blocks it is deleted, otherwise it is added, and the midpoint of the two vertex coordinates is taken as the rough position of the corner. Finally, the corner positions are located precisely. Experimental results show that the algorithm has obvious advantages in accuracy and validity for corner detection, and it can support camera calibration in traffic accident measurement.
Modeling of genetic algorithms with a finite population
C.H.M. van Kemenade
1997-01-01
Cross-competition between non-overlapping building blocks can strongly influence the performance of evolutionary algorithms. The choice of the selection scheme can have a strong influence on the performance of a genetic algorithm. This paper describes a number of different genetic…
Hanberg, Peter Jesper; Jørgensen, Anders Michael
2014-01-01
Photovoltaics (PV), better known as solar cells, are now a common sight on many rooftops in Denmark. The installed capacity of PV systems worldwide is growing exponentially, and PV is the third most important renewable energy source today. The cost of PV is decreasing fast, by ~10%/year, but to make it directly competitive with fossil energy sources a further reduction is needed. By increasing the efficiency of the solar cells one gains an advantage through the whole chain of cost, so that per produced Watt of power less material is spent, installation costs are lower, less area is used, etc. With an average efficiency of about 15% for commercial silicon solar cells there is still much to gain. DTU Danchip provides research facilities, equipment and expertise for the building blocks that comprise fabricating the efficient solar cell. In order to get more of the sunlight into the device we provide thin film…
Akinci, Devrim; Akhan, Okan
2005-01-01
Pain occurs frequently in patients with advanced cancers. Tumors originating from upper abdominal viscera such as the pancreas, stomach, duodenum, proximal small bowel, liver and biliary tract, and from compressing enlarged lymph nodes, can cause severe abdominal pain, which does not respond satisfactorily to medical treatment or radiotherapy. Percutaneous celiac ganglia block (CGB) can be performed with high success and low complication rates under imaging guidance to obtain pain relief in patients with upper abdominal malignancies. A significant relationship between pain relief and the degree of tumoral celiac ganglia invasion according to CT features has been described in the literature. Performing the procedure in the early grades of celiac ganglia invasion on CT can increase the effectiveness of the CGB, which is contrary to World Health Organization criteria stating that CGB must be performed in patients with advanced stage cancer. CGB may also be effectively performed in patients with chronic pancreatitis for pain palliation.
Scheler, Fabian; Mitzlaff, Martin; Schröder-Preikschat, Wolfgang
The decision to use a time-triggered or an event-triggered approach for a real-time system is difficult and far-reaching. Far-reaching above all because these two approaches are tied to very different control-flow abstractions, which make a later migration to the other paradigm very hard or even impossible. We therefore propose the use of an intermediate representation that is independent of the particular control-flow abstraction used. For this purpose we use Atomic Basic Blocks (ABB), based on basic blocks, and build on them a tool, the Real-Time Systems Compiler (RTSC), that supports migration between time-triggered and event-triggered systems.
dr.Nageh Omar
2005-01-01
This group of architectural fragments was discovered during excavations at the Souq el-Khamees site, at the end of Mostorod Street in the el-Matarya area, by the Supreme Council of Antiquities mission in the 2003 season, and has not been published before. The excavation site is situated about 500 metres to the west of the obelisk of King Senusert I. The inscriptions on the block (pl.1.a, fig.1) represent the coronation name of King Senusret III, the fifth king of the twelfth dynasty, within the cartouche. Through this recent discovery and his sphinx statue we suggest that King Senusret III built a shrine or temple at Heliopolis, possibly part of the great temple of the universal god of Heliopolis. A block dating to King Akhenaten, and the many monuments of the same period discovered in Heliopolis, confirm that King Akhenaten built a temple for the god Aten in Heliopolis; through studies of King Akhenaten we suggest that he took his new principles from Heliopolis. King Ramesses II mentioned, in the second horizontal line of a stela discovered at Manshyt el-Sader, that he erected an obelisk and some statues at the great temple in Heliopolis; this recent discovery of a statue of King Ramesses II suggests that the excavation site was perhaps a shrine or open court of a temple of King Ramesses II within the great temple in Heliopolis. As for nbt-htpt, we show that the goddess Hathor held a prominent position in Heliopolis and had become the Lady of Hetepet in Heliopolis since at least the Eighteenth Dynasty.
A Motion Estimation Algorithm Using DTCWT and ARPS
Unan Y. Oktiawati
2013-09-01
In this paper, a hybrid motion estimation algorithm utilizing the Dual Tree Complex Wavelet Transform (DTCWT) and the Adaptive Rood Pattern Search (ARPS) block search is presented. The proposed algorithm first transforms each video sequence with the DTCWT. Frame n of the video sequence is used as a reference input and frame n+2 is used to find the motion vector. Next, the ARPS block search algorithm is carried out, followed by an inverse DTCWT. Motion compensation is then carried out on each inversed frame n and motion vector. The results show that PSNR can be improved for mobile devices without depriving quality. The proposed algorithm also uses less memory compared to the DCT-based algorithm. The main contribution of this work is a hybrid wavelet-based motion estimation algorithm for mobile devices; another contribution is the visual quality scoring system as used in section 6.
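A stripped-down sketch of the rood-pattern block search step (illustrative only; real ARPS also predicts the initial arm length from neighbouring blocks, and the DTCWT stage is omitted here):

```python
import numpy as np

def sad(ref_block, frame, y, x, bs):
    """Sum of absolute differences between ref_block and the frame block at (y, x)."""
    return np.abs(ref_block - frame[y:y+bs, x:x+bs]).sum()

def rood_search(ref_block, frame, y0, x0, bs, arm=2, max_iter=20):
    """Rood-pattern block search: a large rood around the predicted vector
    first, then unit-rood refinement until the best candidate is the centre."""
    best = (y0, x0)
    best_cost = sad(ref_block, frame, y0, x0, bs)
    for dy, dx in [(-arm, 0), (arm, 0), (0, -arm), (0, arm)]:  # large rood
        c = sad(ref_block, frame, y0+dy, x0+dx, bs)
        if c < best_cost:
            best, best_cost = (y0+dy, x0+dx), c
    for _ in range(max_iter):                                  # unit-rood refinement
        y, x = best
        moved = False
        for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            c = sad(ref_block, frame, y+dy, x+dx, bs)
            if c < best_cost:
                best, best_cost, moved = (y+dy, x+dx), c, True
        if not moved:
            break
    return best[0]-y0, best[1]-x0   # motion vector (dy, dx)

# A smooth Gaussian "blob" frame, shifted by (dy, dx) = (2, 3) in the next frame:
yy, xx = np.mgrid[0:64, 0:64]
frame1 = np.exp(-((yy-32)**2 + (xx-32)**2) / 50.0)
frame2 = np.exp(-((yy-34)**2 + (xx-35)**2) / 50.0)

block = frame1[24:40, 24:40]            # 16x16 reference block at (24, 24)
mv = rood_search(block, frame2, 24, 24, bs=16)
# mv -> (2, 3): the block moved down 2 and right 3 pixels
```

The rood pattern inspects far fewer candidates than a full search, which is why ARPS-style searches suit memory- and power-constrained mobile devices.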
A truncated accretion disk in the galactic black hole candidate source H1743-322
Sriram, Kandulapati; Agrawal, Vivek Kumar; Rao, Arikkala Raghurama
2009-01-01
To investigate the geometry of the accretion disk in the source H1743-322, we have carried out a detailed X-ray temporal and spectral study using RXTE pointed observations. We have selected all data pertaining to the Steep Power Law (SPL) state during the 2003 outburst of this source. We find anti-correlated hard X-ray lags in three of the observations and the changes in the spectral and timing parameters (like the QPO frequency) confirm the idea of a truncated accretion disk in this source. Compiling data from similar observations of other sources, we find a correlation between the fractional change in the QPO frequency and the observed delay. We suggest that these observations indicate a definite size scale in the inner accretion disk (the radius of the truncated disk) and we explain the observed correlation using various disk parameters like Compton cooling time scale, viscous time scale etc.
Truncation effects in the functional renormalization group study of spontaneous symmetry breaking
Defenu, N.; Mati, P.; Márián, I.G.; Nándori, I.; Trombettoni, A.
2015-01-01
We study the occurrence of spontaneous symmetry breaking (SSB) for O(N) models using functional renormalization group techniques. We show that even the local potential approximation (LPA) when treated exactly is sufficient to give qualitatively correct results for systems with continuous symmetry, in agreement with the Mermin-Wagner theorem and its extension to systems with fractional dimensions. For general N (including the Ising model N=1) we study the solutions of the LPA equations for various truncations around the zero field using a finite number of terms (and different regulators), showing that SSB always occurs even where it should not. The SSB is signalled by Wilson-Fisher fixed points which for any truncation are shown to stay on the line defined by vanishing mass beta functions.
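For reference, the LPA flow equation underlying such truncations (shown here in the standard Wetterich-equation form for O(N) models; normalization conventions vary between papers) reads:

```latex
% LPA flow of the effective potential U_k(\rho), \rho = \phi^a \phi^a / 2,
% with regulator R_k: one radial mode and N-1 Goldstone modes.
\partial_k U_k(\rho) = \frac{1}{2} \int \!\frac{d^d q}{(2\pi)^d}\,
  \partial_k R_k(q^2) \left[
    \frac{N-1}{q^2 + R_k(q^2) + U_k'(\rho)}
  + \frac{1}{q^2 + R_k(q^2) + U_k'(\rho) + 2\rho\, U_k''(\rho)}
  \right]
```

The field-expanded truncations discussed in the abstract correspond to replacing U_k(ρ) by a finite polynomial around zero field, which is what introduces the spurious symmetry breaking the authors analyse.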
Baudhuin, Linnea M; Kotzer, Katrina E; Lagerstedt, Susan A
2015-03-01
Marfan syndrome is a systemic disorder that typically involves FBN1 mutations and cardiovascular manifestations. We investigated FBN1 genotype-phenotype correlations with aortic events (aortic dissection and prophylactic aortic surgery) in patients with Marfan syndrome. Genotype and phenotype information from probands (n = 179) with an FBN1 pathogenic or likely pathogenic variant was assessed. A higher frequency of truncating or splicing FBN1 variants was observed in Ghent criteria-positive patients with an aortic event (n = 34) as compared with all other probands (n = 145) without a reported aortic event (79 vs. 39%; P …) … Marfan syndrome patients with FBN1 truncating and splicing variants. Genet Med 17(3), 177-187.
Virtues and limitations of the truncated Holstein–Primakoff description of quantum rotors
Hirsch, Jorge G; Castaños, Octavio; López-Peña, Ramón; Nahmad-Achar, Eduardo
2013-01-01
A Hamiltonian describing the collective behaviour of N interacting spins can be mapped to a bosonic one employing the Holstein–Primakoff realization, at the expense of having an infinite series in powers of the boson creation and annihilation operators. Truncating this series up to quadratic terms allows obtaining analytic solutions through a Bogoliubov transformation, which becomes exact in the limit N → ∞. The Hamiltonian exhibits a phase transition from single-spin excitations to a collective mode. In the vicinity of this phase transition, the truncated solutions predict the existence of singularities for a finite number of spins, which have no counterpart in the exact diagonalization. Renormalization allows to extract from these divergences the exact behaviour of relevant observables with the number of spins around the phase transition, and to relate it with the class of universality to which the model belongs. In this work a detailed analysis of these aspects is presented for the Lipkin model.
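The Holstein–Primakoff realization and its quadratic truncation mentioned above take the standard form (shown for a single collective spin S, in the usual conventions):

```latex
\hat S_z = S - \hat a^\dagger \hat a, \qquad
\hat S_+ = \sqrt{2S}\,\sqrt{1 - \frac{\hat a^\dagger \hat a}{2S}}\;\hat a
         \;\approx\; \sqrt{2S}\,\hat a, \qquad
\hat S_- = \hat S_+^\dagger
```

Expanding the square root in powers of 1/S generates the infinite boson series; keeping only the leading term yields the quadratic Hamiltonian that a Bogoliubov transformation diagonalizes, exact as N → ∞.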
The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data
I.E. Okorie
2017-06-01
The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution is illustrated with an uncensored data set and its fit is compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood (−ℓ̂), the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the generalized Cramér–von Mises W⋆ statistic show that the EETE distribution provides a more reasonable fit than the other competing distributions.
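Under the common parameterization of the ETE baseline, f(x; β, λ) = β(1 − e^(−λ)) exp{−β(1 − e^(−λ)) x}, the product θ = β(1 − e^(−λ)) acts as a single exponential rate, so the maximum-likelihood estimate of θ is simply 1/x̄. A quick numerical sanity check of that reduction (illustrative only; the EETE extension itself is not implemented here, and the parameterization is an assumption rather than taken from this abstract):

```python
import numpy as np

def ete_rate(beta, lam):
    """Effective exponential rate of the Erlang-Truncated Exponential pdf
    f(x) = beta*(1-exp(-lam)) * exp(-beta*(1-exp(-lam))*x)."""
    return beta * (1.0 - np.exp(-lam))

def ete_loglik(x, beta, lam):
    """Log-likelihood of an ETE sample, written via the effective rate."""
    theta = ete_rate(beta, lam)
    return len(x) * np.log(theta) - theta * x.sum()

rng = np.random.default_rng(0)
beta, lam = 2.0, 1.5
theta = ete_rate(beta, lam)                    # true effective rate
x = rng.exponential(1.0 / theta, size=20000)   # ETE sample = Exp(theta) sample

theta_hat = 1.0 / x.mean()   # MLE of the effective rate
# theta_hat should be close to theta; note that beta and lam are not
# separately identifiable from data alone -- only their product theta is.
```

This identifiability point is presumably part of why extensions such as the EETE add extra shape parameters.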
Houghton, J.C.
1988-01-01
The truncated shifted Pareto (TSP) distribution, a variant of the two-parameter Pareto distribution, in which one parameter is added to shift the distribution right and left and the right-hand side is truncated, is used to model size distributions of oil and gas fields for resource assessment. Assumptions about limits to the left-hand and right-hand side reduce the number of parameters to two. The TSP distribution has advantages over the more customary lognormal distribution because it has a simple analytic expression, allowing exact computation of several statistics of interest, has a "J-shape," and has more flexibility in the thickness of the right-hand tail. Oil field sizes from the Minnelusa play in the Powder River Basin, Wyoming and Montana, are used as a case study. Probability plotting procedures allow easy visualization of the fit and help the assessment. © 1988 International Association for Mathematical Geology.
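A minimal numerical illustration of right-truncating a Pareto tail via inverse-CDF sampling (a generic truncated Pareto, not necessarily the paper's exact TSP parameterization with its shift parameter):

```python
import numpy as np

def sample_truncated_pareto(alpha, xm, upper, size, rng):
    """Inverse-CDF sampling from a Pareto(alpha, xm) right-truncated at `upper`.

    Pareto CDF: F(x) = 1 - (xm/x)**alpha.  Conditioning on x <= upper
    rescales U(0,1) draws into [0, F(upper)] before inverting F.
    """
    f_upper = 1.0 - (xm / upper) ** alpha
    u = rng.uniform(0.0, f_upper, size=size)
    return xm * (1.0 - u) ** (-1.0 / alpha)

rng = np.random.default_rng(0)
fields = sample_truncated_pareto(alpha=0.8, xm=1.0, upper=500.0,
                                 size=10000, rng=rng)
# Every draw lies in [xm, upper]; the truncation removes the heavy right
# tail that an untruncated Pareto with alpha < 1 would otherwise produce.
```

The "J-shape" mentioned in the abstract corresponds to the monotonically decreasing density over this finite support.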
Narukawa, Masaki; Nohara, Katsuhito
2018-04-01
This study proposes an estimation approach to panel count data, truncated at zero, in order to apply a contingent behavior travel cost method to revealed and stated preference data collected via a web-based survey. We develop zero-truncated panel Poisson mixture models by focusing on respondents who visited a site. In addition, we introduce an inverse Gaussian distribution to unobserved individual heterogeneity as an alternative to a popular gamma distribution, making it possible to capture effectively the long tail typically observed in trip data. We apply the proposed method to estimate the impact on tourism benefits in Fukushima Prefecture as a result of the Fukushima Nuclear Power Plant No. 1 accident. Copyright © 2018 Elsevier Ltd. All rights reserved.
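A small sketch of the zero-truncated Poisson building block used above (the base count model only; the panel structure and the inverse-Gaussian heterogeneity are beyond this sketch):

```python
import math
import numpy as np

def ztp_logpmf(k, lam):
    """log pmf of the zero-truncated Poisson:
    P(K = k) = exp(-lam) lam^k / (k! (1 - exp(-lam))), for k >= 1."""
    return (-lam + k * math.log(lam) - math.lgamma(k + 1)
            - math.log1p(-math.exp(-lam)))

def ztp_mle(counts, lo=1e-6, hi=50.0, iters=80):
    """MLE of lam via bisection on the first-order condition
    mean(counts) = lam / (1 - exp(-lam)), whose left side is increasing in lam."""
    m = np.mean(counts)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid / (1.0 - math.exp(-mid)) < m:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Simulate ZTP(lam = 2.5) by rejecting zero counts, mimicking a sample of
# trip counts from respondents who visited the site at least once:
rng = np.random.default_rng(0)
draws = rng.poisson(2.5, size=40000)
counts = draws[draws > 0]

lam_hat = ztp_mle(counts)
# lam_hat should be close to 2.5
```

Restricting attention to positive counts is exactly the truncation-at-zero the study describes: respondents with zero visits never enter the sample, so the likelihood must condition on K ≥ 1.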
Ab Initio Study of 40Ca with an Importance Truncated No-Core Shell Model
Roth, R; Navratil, P
2007-05-22
We propose an importance truncation scheme for the no-core shell model, which enables converged calculations for nuclei well beyond the p-shell. It is based on an a priori measure for the importance of individual basis states constructed by means of many-body perturbation theory. Only the physically relevant states of the no-core model space are considered, which leads to a dramatic reduction of the basis dimension. We analyze the validity and efficiency of this truncation scheme using different realistic nucleon-nucleon interactions and compare to conventional no-core shell model calculations for ⁴He and ¹⁶O. Then, we present the first converged calculations for the ground state of ⁴⁰Ca within no-core model spaces including up to 16ℏΩ excitations using realistic low-momentum interactions. The scheme is universal and can be easily applied to other quantum many-body problems.
Versatility of the CFR algorithm for limited angle reconstruction
Fujieda, I.; Heiskanen, K.; Perez-Mendez, V.
1990-01-01
The constrained Fourier reconstruction (CFR) algorithm and the iterative reconstruction-reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT, and for radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant.
Winter, A. J.; Clarke, C. J.; Rosotti, G.; Ih, J.; Facchini, S.; Haworth, T. J.
2018-04-01
Most stars form and spend their early life in regions of enhanced stellar density. Therefore the evolution of protoplanetary discs (PPDs) hosted by such stars is subject to the influence of other members of the cluster. Physically, PPDs might be truncated either by photoevaporation due to ultraviolet flux from massive stars, or tidal truncation due to close stellar encounters. Here we aim to compare the two effects in real cluster environments. In this vein we first review the properties of well-studied stellar clusters with a focus on stellar number density, which largely dictates the degree of tidal truncation, and far ultraviolet (FUV) flux, which is indicative of the rate of external photoevaporation. We then review the theoretical PPD truncation radius due to an arbitrary encounter, additionally taking into account the role of eccentric encounters that play a role in hot clusters with a 1D velocity dispersion σ_v ≳ 2 km/s. Our treatment is then applied statistically to varying local environments to establish a canonical threshold for the local stellar density (n_c ≳ 10⁴ pc⁻³) for which encounters can play a significant role in shaping the distribution of PPD radii over a timescale ~3 Myr. By combining theoretical mass loss rates due to FUV flux with viscous spreading in a PPD we establish a similar threshold for which a massive disc is completely destroyed by external photoevaporation. Comparing these thresholds in local clusters we find that if either mechanism has a significant impact on the PPD population then photoevaporation is always the dominating influence.
A truncating mutation of HDAC2 in human cancers confers resistance to histone deacetylase inhibition
Ropero, S; Fraga, MF; Ballestar, E
2006-01-01
Disruption of histone acetylation patterns is a common feature of cancer cells, but very little is known about its genetic basis. We have identified truncating mutations in one of the primary human histone deacetylases, HDAC2, in sporadic carcinomas with microsatellite instability and in tumors a...... deacetylase inhibitors. As such drugs may serve as therapeutic agents for cancer, our findings support the use of HDAC2 mutational status in future pharmacogenetic treatment of these individuals....
Truncation of the many body hierarchy and relaxation times in the McKean model
Schmitt, K.J.
1987-01-01
In the McKean model the BBGKY-hierarchy is equivalent to a simple hierarchy of coupled equations for the p-particle correlation functions. Truncation effects and the convergence of the one-particle distribution towards its exact shape have been studied. In the long time limit the equations can be solved in a closed form. It turns out that the p-particle correlation decays p-times faster than the non-equilibrium one-particle distribution.
Race, Brent; Meade-White, Kimberly; Race, Richard; Baumann, Frank; Aguzzi, Adriano; Chesebro, Bruce
2009-01-01
Prion protein (PrP) is a host-encoded membrane-anchored glycoprotein which is required for susceptibility to prion disease. PrP may also be important for normal brain functions such as hippocampal spatial memory. Previously transgenic mice expressing amino terminally truncated mouse PrP (Δ32–134) spontaneously developed a fatal disease associated with degeneration of cerebellar granular neurons as well as vacuolar degeneration of deep cerebellar and brain stem white matter. This disease could...
Loads experiments study on two-story RC box and truncated conical walls
Asega, H.; Iizuka, S.; Kurihara, I.; Kubo, T.
1987-01-01
The failure mode of both specimens was sliding shear failure. The two specimens showed almost equal deformation at the maximum shear strength. The share of flexural deformation in the total deformation was larger for the truncated conical wall than for the box wall. For the two-story RC box wall, the share of shear deformation exceeded that of flexural deformation. (orig./HP)
Barros, R.C. de; Larsen, E.W.
1991-01-01
A generalization of the one-group Spectral Green's Function (SGF) method is developed for multigroup, slab-geometry discrete ordinates (S_N) problems. The multigroup SGF method is free from spatial truncation errors; it generates numerical values for the cell-edge and cell-average angular fluxes that agree with the analytic solution of the multigroup S_N equations. Numerical results are given to illustrate the method's accuracy.
Zamani, J.; Soltani, B.; Aghaei, M.
2014-01-01
An elastic solution of cylinder-truncated cone shell intersection under internal pressure is presented. The edge solution theory that has been used in this study takes bending moments and shearing forces into account in the thin-walled shell of revolution element. The general solution of the cone equations is based on power series method. The effect of cone apex angle on the stress distribution in conical and cylindrical parts of structure is investigated. In addition, the effect of the inter...
Seniority truncation in an equations-of-motion approach to the shell model
Covello, A.; Andreozzi, F.; Gargano, A.; Porrino, A.
1989-01-01
This paper presents an equations-of-motion method for treating shell-model problems within the framework of the seniority scheme. This method can be applied at many levels of approximation and represents therefore a valuable tool to further reduce seniority truncated shell-model spaces. To show its practical value the authors report some results of an extensive study of the N = 82 isotones which is currently under way
Scavenger receptor AI/II truncation, lung function and COPD: a large population-based study
Thomsen, M; Nordestgaard, B G; Tybjærg-Hansen, Anne
2011-01-01
The scavenger receptor A-I/II (SRA-I/II) on alveolar macrophages is involved in recognition and clearance of modified lipids and inhaled particulates. A rare variant of the SRA-I/II gene, Arg293X, truncates the distal collagen-like domain, which is essential for ligand recognition. We tested...... whether the Arg293X variant is associated with reduced lung function and risk of chronic obstructive pulmonary disease (COPD) in the general population....
Selective apoptosis induction in MCF-7 cell line by truncated minimal functional region of Apoptin
Shen Ni, Lim; Allaudin, Zeenathul Nazariah bt; Mohd Lila, Mohd Azmi b; Othman, Abas Mazni b; Othman, Fauziah bt
2013-01-01
Chicken Anemia Virus (CAV) VP3 protein (also known as Apoptin), a basic and proline-rich protein, has a unique capability of inducing apoptosis in cancer cells but not in normal cells. Five truncated Apoptin proteins were analyzed to determine their selective ability to migrate into the nucleus of human breast adenocarcinoma MCF-7 cells for inducing apoptosis. For identification of the minimal selective domain for apoptosis, the wild-type Apoptin gene had been reconstructed by PCR to generate segmental deletions at the N-terminus and linked with nuclear localization sites (NLS1 and NLS2). All the constructs were fused with the maltose-binding protein gene and individually expressed by an in vitro Rapid Translation System. Standardized doses of proteins were delivered into human breast adenocarcinoma MCF-7 cells and control human liver Chang cells by cytoplasmic microinjection, and subsequently observed for selective apoptosis effect. Three of the truncated Apoptin proteins with N-terminal deletions spanning amino acids 32–83 retained the cancer-selective nature of wild-type Apoptin. The proteins were successfully translocated to the nucleus of MCF-7 cells initiating apoptosis, whereas non-toxic cytoplasmic retention was observed in normal Chang cells. Whilst these truncated proteins retained the tumour-specific death effector ability, the specificity for MCF-7 cells was lost in two other truncated proteins that harbor deletions at amino acids 1–31. The detection of apoptosing normal Chang cells and MCF-7 cells upon cytoplasmic microinjection of these proteins implicated a loss in Apoptin's signature targeting activity. Therefore, the critical stretch spanning amino acids 1–31 upstream of a known hydrophobic leucine-rich stretch (LRS) was strongly suggested as one of the prerequisite regions in Apoptin for cancer targeting. Identification of this selective domain provides a platform for developing small targets to facilitate carrier-mediated transport across
Characterization of mTOR-Responsive Truncated mRNAs in Cell Proliferation
2017-07-01
These findings identify a previously uncharacterized role for mTOR in modulating 3′-UTR length of mRNAs by alternative polyadenylation (APA). Another ... outcome of APA in the mTOR-activated transcriptome is an early termination of mRNA transcription to produce truncated mRNAs with polyadenylation in ... for exhaustive analysis of alternative cleavage and polyadenylation (APA) events (Figure 1). In IntMAP, first the position of multiple
Analysis and design of optimized truncated scarfed nozzles subject to external flow effects
Shyne, Rickey J.; Keith, Theo G., Jr.
1990-01-01
Rao's method for computing optimum thrust nozzles is modified to study the effects of external flow on the performance of a class of exhaust nozzles. Members of this class are termed scarfed nozzles. These are two-dimensional, nonsymmetric nozzles with a flat lower wall. The lower wall (the cowl) is truncated in order to save weight. Results from a parametric investigation are presented to show the effects of the external flowfield on performance.
On the Minimax Value in the Scale Model with Truncated Data
Gajek, Leslaw
1988-01-01
Let $X$ be a positive random variable with Lebesgue density $f_\theta(x)$, where $\theta$ is the scale parameter, and let $Y$ be a positive random variable independent of $X$. We consider two models of truncation: the LHS model, where the data consist only of those observations of $X$ for which $X > Y$; and the RHS model, where the data consist of those observations of $X$ for which $X \leq Y$. Consider the problem of estimating $\theta^s, s \
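The two truncation models described above can be sketched by rejection sampling. The samplers `draw_x` and `draw_y` below are illustrative placeholders, not distributions from the paper.

```python
import random

def sample_truncated(n, draw_x, draw_y, model="LHS", seed=0):
    """Simulate truncated observations of X by rejection.

    LHS model: an (X, Y) pair is observed only when X > Y.
    RHS model: an (X, Y) pair is observed only when X <= Y.
    draw_x and draw_y are callables taking a random.Random instance.
    """
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x, y = draw_x(rng), draw_y(rng)
        keep = x > y if model == "LHS" else x <= y
        if keep:
            out.append(x)
    return out
```

For example, with X exponential and Y a fixed threshold, the LHS sample contains only values above the threshold and the RHS sample only values at or below it, which is the asymmetry the estimation problem must correct for.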
Truncated exponential-rigid-rotor model for strong electron and ion rings
Larrabee, D.A.; Lovelace, R.V.; Fleischmann, H.H.
1979-01-01
A comprehensive study of exponential-rigid-rotor equilibria for strong electron and ion rings indicates the presence of a sizeable percentage of untrapped particles in all equilibria with aspect ratios R/a ≲ 4. Such aspect ratios are required in fusion-relevant rings. Significant changes in the equilibria are observed when untrapped particles are excluded by the use of a truncated exponential-rigid-rotor distribution function. (author)
Electro-optical study of nanoscale Al-Si-truncated conical photodetector with subwavelength aperture
Karelits, Matityahu; Mandelbaum, Yaakov; Chelly, Avraham; Karsenty, Avi
2017-10-01
A type of silicon photodiode has been designed and simulated to probe the optical near field and detect evanescent waves. These waves convey subwavelength resolution. This photodiode consists of a truncated conical shaped, silicon Schottky diode having a subwavelength aperture of 150 nm. Electrical and electro-optical simulations have been conducted. These results are promising toward the fabrication of a new generation of photodetector devices.
Puig-Saus, C; Laborda, E; Rodríguez-García, A; Cascalló, M; Moreno, R; Alemany, R
2014-02-01
Adenovirus (Ad) i-leader protein is a small protein of unknown function. The C-terminus truncation of the i-leader protein increases Ad release from infected cells and cytotoxicity. In the current study, we use the i-leader truncation to enhance the potency of an oncolytic Ad. In vitro, an i-leader truncated oncolytic Ad is released faster to the supernatant of infected cells, generates larger plaques, and is more cytotoxic in both human and Syrian hamster cell lines. In mice bearing human tumor xenografts, the i-leader truncation enhances oncolytic efficacy. However, in a Syrian hamster pancreatic tumor model, which is immunocompetent and less permissive to human Ad, antitumor efficacy is only observed when the i-leader truncated oncolytic Ad, but not the non-truncated version, is combined with gemcitabine. This synergistic effect observed in the Syrian hamster model was not seen in vitro or in immunodeficient mice bearing the same pancreatic hamster tumors, suggesting a role of the immune system in this synergism. These results highlight the interest of the i-leader C-terminus truncation because it enhances the antitumor potency of an oncolytic Ad and provides synergistic effects with gemcitabine in the presence of an immune competent system.
Hrnkova, Miroslava; Zilka, Norbert; Minichova, Zuzana; Koson, Peter; Novak, Michal
2007-01-26
Human truncated tau protein is an active constituent of the neurofibrillary degeneration in sporadic Alzheimer's disease. We have shown that modified tau protein, when expressed as a transgene in rats, induced the AD-characteristic tau cascade consisting of tau hyperphosphorylation, formation of argyrophilic tangles and sarcosyl-insoluble tau complexes. These pathological changes led to functional impairment characterized by a variety of neurobehavioural symptoms. In the present study we have focused on the behavioural alterations induced by transgenic expression of human truncated tau. Transgenic rats underwent a battery of behavioural tests involving cognitive- and sensorimotor-dependent tasks, accompanied by neurological assessment, at the ages of 4.5, 6 and 9 months. Behavioural examination of these rats showed altered spatial navigation in the Morris water maze, resulting in less time spent in the target quadrant. Behaviour in the open field was not influenced by transgene expression. However, the beam walking test revealed that transgenic rats developed progressive sensorimotor disturbances related to the age of the tested animals. The disturbances were most pronounced at the age of 9 months (p<0.01). Neurological alterations indicating impaired reflex responses were further features of the behavioural phenotype of this novel transgenic rat. These results allow us to suggest that neurodegeneration, caused by the non-mutated human truncated tau derived from sporadic human AD, results in neuronal dysfunction, consequently leading to progressive neurobehavioural impairment.
Kovacech, B; Novak, M
2010-12-01
Deposits of the misfolded neuronal protein tau are major hallmarks of neurodegeneration in Alzheimer's disease (AD) and other tauopathies. The etiology of the transformation process of the intrinsically disordered soluble protein tau into the insoluble misordered aggregate has attracted much attention. Tau undergoes multiple modifications in AD, most notably hyperphosphorylation and truncation. Hyperphosphorylation is widely regarded as the hottest candidate for the inducer of the neurofibrillary pathology. However, the true nature of the impetus that initiates the whole process in the human brains remains unknown. In AD, several site-specific tau cleavages were identified and became connected to the progression of the disease. In addition, western blot analyses of tau species in AD brains reveal multitudes of various truncated forms. In this review we summarize evidence showing that tau truncation alone is sufficient to induce the complete cascade of neurofibrillary pathology, including hyperphosphorylation and accumulation of misfolded insoluble forms of tau. Therefore, proteolytical abnormalities in the stressed neurons and production of aberrant tau cleavage products deserve closer attention and should be considered as early therapeutic targets for Alzheimer's disease.
High-yield water-based synthesis of truncated silver nanocubes
Chang, Yun-Min; Lu, I-Te; Chen, Chih-Yuan; Hsieh, Yu-Chi; Wu, Pu-Wei
2014-01-01
Highlights: • Development of a water-based formula to fabricate truncated Ag nanocubes. • The sample exhibits (1 0 0), (1 1 0), and (1 1 1) on the facets, edges, and corners. • The sample shows three characteristic absorption peaks due to plasma resonance. -- Abstract: A high-yield water-based hydrothermal synthesis was developed using silver nitrate, ammonia, glucose, and cetyltrimethylammonium bromide (CTAB) as precursors to synthesize truncated silver nanocubes with uniform sizes and in large quantities. With a fixed CTAB concentration, truncated silver nanocubes with sizes of 49.3 ± 4.1 nm were produced when the molar ratio of glucose/silver cation was maintained at 0.1. The sample exhibited (1 0 0), (1 1 0), and (1 1 1) planes on the facets, edges, and corners, respectively. In contrast, with a slightly larger glucose/silver cation ratio of 0.35, well-defined nanocubes with sizes of 70.9 ± 3.8 nm were observed with the (1 0 0) plane on six facets. When the ratio was further increased to 1.5, excess reduction of silver cations facilitated the simultaneous formation of nanoparticles with cubic, spherical, and irregular shapes. Consistent results were obtained from transmission electron microscopy, scanning electron microscopy, X-ray diffraction, and UV–visible absorption measurements.
Zhu, Qiaohao; Carriere, K C
2016-01-01
Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on detection and correction for publication bias in meta-analysis focuses mainly on funnel-plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish two major situations in which publication bias may be induced: (1) small effect size or (2) large p-value. We consider both fixed- and random-effects models, and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology, and to compare it with the non-parametric Trim and Fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and correcting publication bias under various situations.
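The "publication bias as truncation" idea can be sketched with a left-truncated normal log-likelihood: a stylized single-threshold version in which only effect estimates above a cutoff `c` get published. This is an illustrative simplification, not the paper's fixed- or random-effects estimators.

```python
import math

def trunc_normal_loglik(mu, sigma, c, data):
    """Log-likelihood of draws from N(mu, sigma^2) left-truncated at c,
    i.e. only values above c are ever observed (a stylized 'only large
    effects get published' mechanism)."""
    def phi(z):   # standard normal pdf
        return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    def Phi(z):   # standard normal cdf
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    tail = 1.0 - Phi((c - mu) / sigma)  # probability of being observed
    ll = 0.0
    for x in data:
        if x <= c:
            return float("-inf")  # impossible under the truncation model
        ll += math.log(phi((x - mu) / sigma) / (sigma * tail))
    return ll
```

Maximizing this over `mu` on published effects corrects the upward bias that a naive mean of the truncated sample would have.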
Correlations between chaos in a perturbed sine-Gordon equation and a truncated model system
Bishop, A.R.; Flesch, R.; Forest, M.G.; Overman, E.A.
1990-01-01
The purpose of this paper is to present a first step toward providing coordinates and associated dynamics for low-dimensional attractors in nearly integrable partial differential equations (pdes), in particular, where the truncated system reflects salient geometric properties of the pde. This is achieved by correlating: (1) numerical results on the bifurcations to temporal chaos with spatial coherence of the damped, periodically forced sine-Gordon equation with periodic boundary conditions; (2) an interpretation of the spatial and temporal bifurcation structures of this perturbed integrable system with regard to the exact structure of the sine-Gordon phase space; (3) a model dynamical systems problem, which is itself a perturbed integrable Hamiltonian system, derived from the perturbed sine-Gordon equation by a finite mode Fourier truncation in the nonlinear Schroedinger limit; and (4) the bifurcations to chaos in the truncated phase space. In particular, a potential source of chaos in both the pde and the model ordinary differential equation systems is focused on: the existence of homoclinic orbits in the unperturbed integrable phase space and their continuation in the perturbed problem. The evidence presented here supports the thesis that the chaotic attractors of the weakly perturbed periodic sine-Gordon system consist of low-dimensional metastable attracting states together with intermediate states that are O(1) unstable and correspond to homoclinic states in the integrable phase space. It is surmised that the chaotic dynamics on these attractors is due to the perturbation of these homoclinic integrable configurations.
Non-linear buckling of an FGM truncated conical shell surrounded by an elastic medium
Sofiyev, A.H.; Kuruoglu, N.
2013-01-01
In this paper, the non-linear buckling of a truncated conical shell made of functionally graded materials (FGMs) surrounded by an elastic medium has been studied using the large deformation theory with von Karman–Donnell-type kinematic non-linearity. A two-parameter foundation model (Pasternak-type) is used to describe the shell–foundation interaction. The FGM properties are assumed to vary continuously through the thickness direction. The fundamental relations, the modified Donnell-type non-linear stability and compatibility equations of the FGM truncated conical shell resting on the Pasternak-type elastic foundation are derived. By using the Superposition and Galerkin methods, the non-linear stability equations for the FGM truncated conical shell are solved. Finally, the influences of variations of Winkler foundation stiffness and shear subgrade modulus of the foundation, compositional profiles and shell characteristics on the dimensionless critical non-linear axial load are investigated. The present results are compared with the available data for a special case. -- Highlights: • Nonlinear buckling of FGM conical shell surrounded by elastic medium is studied. • Pasternak foundation model is used to describe the shell–foundation interaction. • Nonlinear basic equations are derived. • Problem is solved by using Superposition and Galerkin methods. • Influences of various parameters on the nonlinear critical load are investigated
Kwon Tae You
2007-05-01
Frameshift and nonsense mutations are common in tumors with microsatellite instability, and mRNAs from these mutated genes have premature termination codons (PTCs). Abnormal mRNAs containing PTCs are normally degraded by the nonsense-mediated mRNA decay (NMD) system. However, PTCs located within 50-55 nucleotides of the last exon-exon junction are not recognized by NMD (NMD-irrelevant), and some PTC-containing mRNAs can escape from the NMD system (NMD-escape). We investigated protein expression from NMD-irrelevant and NMD-escape PTC-containing mRNAs by Western blotting and transfection assays. We demonstrated that transfection of NMD-irrelevant PTC-containing genomic DNA of MARCKS generates truncated protein. In contrast, NMD-escape PTC-containing versions of hMSH3 and TGFBR2 generate normal levels of mRNA, but do not generate detectable levels of protein. Transfection of NMD-escape mutant TGFBR2 genomic DNA failed to generate expression of truncated proteins, whereas transfection of wild-type TGFBR2 genomic DNA or mutant PTC-containing TGFBR2 cDNA generated expression of wild-type protein and truncated protein, respectively. Our findings suggest a novel mechanism of gene expression regulation for PTC-containing mRNAs in which the deleterious transcripts are regulated either by NMD or translational repression.
A truncated conical beam model for analysis of the vibration of rat whiskers.
Yan, Wenyi; Kan, Qianhua; Kergrene, Kenan; Kang, Guozheng; Feng, Xi-Qiao; Rajan, Ramesh
2013-08-09
A truncated conical beam model is developed to study the vibration behaviour of a rat whisker. Translational and rotational springs are introduced to better represent the constraint conditions at the base of the whiskers in a living rat. Dimensional analysis shows that the natural frequency of a truncated conical beam with generic spring constraints at its ends is inversely proportional to the square root of the mass density. Under all the combinations of the classical free, pinned, sliding or fixed boundary conditions of a truncated conical beam, it is proved that the natural frequency can be expressed as f = α (r_b/L²) √(E/ρ), where the frequency coefficient α depends only on the ratio of the radii at the two ends of the beam. The natural frequencies of a representative rat whisker are predicted for two typical situations: freely whisking in air and the tip touching an object. Our numerical results show that there exists a window where the natural frequencies of a rat whisker are very sensitive to the change of the rotational constraint at the base. This finding is also confirmed by the numerical results of 18 whiskers with their data available from the literature. It can be concluded that the natural frequencies of a rat whisker can be adjusted within a wide range through manipulating the constraints of the follicle at the whisker base by a behaving animal. Copyright © 2013 Elsevier Ltd. All rights reserved.
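Reading the abstract's frequency expression as f = α (r_b/L²) √(E/ρ), the scalings it implies (inverse square in length, square root in stiffness, inverse square root in density) can be checked in a few lines. The parameter values below are illustrative placeholders, not data from the paper.

```python
import math

def whisker_natural_frequency(alpha, r_b, length, E, rho):
    """Natural frequency of a truncated conical beam:
    f = alpha * (r_b / L**2) * sqrt(E / rho).

    alpha depends only on the ratio of the end radii and the end
    constraints (per the abstract); its numeric value must come from
    the boundary-condition analysis and is passed in here.
    Units: SI throughout (m, Pa, kg/m^3) -> f in Hz."""
    return alpha * (r_b / length ** 2) * math.sqrt(E / rho)
```

Doubling the whisker length quarters the frequency, while quadrupling the elastic modulus only doubles it, consistent with the dimensional analysis stated above.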
Grosvenor, Anita J; Haigh, Brendan J; Dyer, Jolon M
2014-11-01
The extent to which nutritional and functional benefit is derived from proteins in food is related to their breakdown and digestion in the body after consumption. Further, detailed information about food protein truncation during digestion is critical to understanding and optimising the availability of bioactives, controlling and limiting allergen release, and minimising or monitoring the effects of processing and food preparation. However, tracking the complex array of products formed during the digestion of proteins is not easily accomplished using classical proteomics. Here we present and develop a novel proteomic approach using isobaric labelling to map and track protein truncation and peptide release during simulated gastric digestion, using bovine lactoferrin as a model food protein. The relative abundance of related peptides was tracked throughout a digestion time course, and the effect of pasteurisation on peptide release was assessed. The new approach to food digestion proteomics developed here therefore appears to be highly suitable not only for tracking the truncation and relative abundance of released peptides during gastric digestion, but also for determining the effects of protein modification on digestibility and potential bioavailability.
The lamppost model: effects of photon trapping, the bottom lamp and disc truncation
Niedźwiecki, Andrzej; Zdziarski, Andrzej A.
2018-04-01
We study the lamppost model, in which the primary X-ray sources in accreting black-hole systems are located symmetrically on the rotation axis on both sides of the black hole surrounded by an accretion disc. We show the importance of the emission of the source on the opposite side to the observer. Due to gravitational light bending, its emission can increase the direct (i.e., not re-emitted by the disc) flux by as much as an order of magnitude. This happens for nearly face-on observers when the disc is even moderately truncated. For truncated discs, we also consider effects of emission of the top source gravitationally bent around the black hole. We also present results for the attenuation of the observed radiation with respect to that emitted by the lamppost as functions of the lamppost height, black-hole spin and the degree of disc truncation. This attenuation, which is due to the time dilation, gravitational redshift and the loss of photons crossing the black-hole horizon, can be as severe as by several orders of magnitude for low lamppost heights. We also consider the contribution to the observed flux due to re-emission by optically-thick matter within the innermost stable circular orbit.
Varying coefficient subdistribution regression for left-truncated semi-competing risks data.
Li, Ruosha; Peng, Limin
2014-10-01
Semi-competing risks data frequently arise in biomedical studies when time to a disease landmark event is subject to dependent censoring by death, the observation of which, however, is not precluded by the occurrence of the landmark event. In observational studies, the analysis of such data can be further complicated by left truncation. In this work, we study a varying coefficient subdistribution regression model for left-truncated semi-competing risks data. Our method appropriately accounts for the specific truncation and censoring features of the data and, moreover, has the flexibility to accommodate potentially varying covariate effects. The proposed method can be easily implemented and the resulting estimators are shown to have nice asymptotic properties. We also present inference, such as Kolmogorov–Smirnov-type and Cramér–von Mises-type hypothesis testing procedures, for the covariate effects. Simulation studies and an application to the Denmark diabetes registry demonstrate good finite-sample performance and practical utility of the proposed method.
Razzera, Guilherme; Vernal, Javier; Baruh, Debora; Serpa, Viviane I; Tavares, Carolina; Lara, Flávio; Souza, Emanuel M; Pedrosa, Fábio O; Almeida, Fábio C L; Terenzi, Hernán; Valente, Ana Paula
2008-09-01
The Herbaspirillum seropedicae genome sequence encodes a truncated hemoglobin typical of group II (Hs-trHb1) members of this family. We show that His-tagged recombinant Hs-trHb1 is monomeric in solution, and its optical spectrum resembles those of previously reported globins. NMR analysis allowed us to assign heme substituents. All data suggest that Hs-trHb1 undergoes a transition from an aquomet form in the ferric state to a hexacoordinate low-spin form in the ferrous state. The close positions of Ser-E7, Lys-E10, Tyr-B10, and His-CD1 in the distal pocket place them as candidates for heme coordination and ligand regulation. Peroxide degradation kinetics suggests an easy access to the heme pocket, as the protein offered no protection against peroxide degradation when compared with free heme. The high solvent exposure of the heme may be due to the presence of a flexible loop in the access pocket, as suggested by a structural model obtained by using homologous globins as templates. The truncated hemoglobin described here has unique features among truncated hemoglobins and may function in the facilitation of O(2) transfer and scavenging, playing an important role in the nitrogen-fixation mechanism.
Truncating SLC5A7 mutations underlie a spectrum of dominant hereditary motor neuropathies.
Salter, Claire G; Beijer, Danique; Hardy, Holly; Barwick, Katy E S; Bower, Matthew; Mademan, Ines; De Jonghe, Peter; Deconinck, Tine; Russell, Mark A; McEntagart, Meriel M; Chioza, Barry A; Blakely, Randy D; Chilton, John K; De Bleecker, Jan; Baets, Jonathan; Baple, Emma L; Walk, David; Crosby, Andrew H
2018-04-01
To identify the genetic cause of disease in 2 previously unreported families with forms of distal hereditary motor neuropathies (dHMNs). The first family comprises individuals affected by dHMN type V, which lacks the cardinal clinical feature of vocal cord paralysis characteristic of dHMN-VII observed in the second family. Next-generation sequencing was performed on the proband of each family. Variants were annotated and filtered, initially focusing on genes associated with neuropathy. Candidate variants were further investigated and confirmed by dideoxy sequence analysis and cosegregation studies. Thorough patient phenotyping was completed, comprising clinical history, examination, and neurologic investigation. dHMNs are a heterogeneous group of peripheral motor neuron disorders characterized by length-dependent neuropathy and progressive distal limb muscle weakness and wasting. We previously reported a dominant-negative frameshift mutation located in the concluding exon of the SLC5A7 gene encoding the choline transporter (CHT), leading to protein truncation, as the likely cause of dominantly inherited dHMN-VII in an extended UK family. In this study, our genetic studies identified distinct heterozygous frameshift mutations located in the last coding exon of SLC5A7, predicted to result in the truncation of the CHT C-terminus, as the likely cause of the condition in each family. This study corroborates C-terminal CHT truncation as a cause of autosomal dominant dHMN, confirming upper limb predominating over lower limb involvement, and broadening the clinical spectrum arising from CHT malfunction.
A Globally Convergent Parallel SSLE Algorithm for Inequality Constrained Optimization
Zhijun Luo
2014-01-01
A new parallel variable distribution algorithm based on an interior point SSLE algorithm is proposed for solving inequality constrained optimization problems, under the condition that the constraints are block-separable, using the technique of sequential systems of linear equations. Each iteration of this algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.
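The per-iteration saving comes from reusing one factorization across the three systems that share a coefficient matrix. A minimal numpy/scipy sketch of that pattern (the random SPD matrix below is a stand-in for illustration, not the actual SSLE coefficient matrix):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 6
# Symmetric positive definite stand-in for the shared coefficient matrix.
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)

# Factor once, then solve the three same-matrix systems cheaply.
factors = lu_factor(A)
rhs = [rng.standard_normal(n) for _ in range(3)]
solutions = [lu_solve(factors, b) for b in rhs]

for b, x in zip(rhs, solutions):
    assert np.allclose(A @ x, b)
```

Factoring costs O(n³) while each extra solve costs only O(n²), which is why sharing the matrix across the three systems matters.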
A deblocking algorithm based on color psychology for display quality enhancement
Yeh, Chia-Hung; Tseng, Wen-Yu; Huang, Kai-Lin
2012-12-01
This article proposes a post-processing deblocking filter to reduce blocking effects. The proposed algorithm detects blocking effects by fusing the results of a Sobel edge detector and a wavelet-based edge detector. The filtering stage provides four filter modes to eliminate blocking effects in different color regions according to human color vision and color psychology analysis. Experimental results show that the proposed algorithm achieves better subjective and objective quality for H.264/AVC reconstructed videos when compared to several existing methods.
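The Sobel half of the detection stage can be sketched from scratch as follows; this is a generic gradient-magnitude computation for illustration, not the paper's filter:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude from the 3x3 Sobel kernels (edge-padded)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    return np.hypot(gx, gy)

# A vertical step edge: the response peaks along the discontinuity
# and vanishes in the flat regions -- the same cue a deblocking
# detector uses to separate true edges from block boundaries.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
assert mag[2, 2] > 0 and mag[2, 0] == 0
```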
Duality quantum algorithm efficiently simulates open quantum systems
Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu
2016-01-01
Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. First, the query complexity of the algorithm is O(d³), in contrast to O(d⁴) in the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Second, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855
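The truncated Taylor series idea can be illustrated classically: approximate U = exp(-iHt) by its first K Taylor terms, each of which would be realized as one branch of a linear combination of unitaries on a duality quantum computer. A numpy sketch (a Pauli-Z Hamiltonian is chosen so the exact answer is easy to check; this is not the paper's algorithm, only the series it truncates):

```python
import numpy as np
from math import factorial

def taylor_evolution(H, t, K):
    """Order-K Taylor approximation of U = exp(-i H t).
    Classically we just sum the matrix terms."""
    d = H.shape[0]
    U = np.zeros((d, d), dtype=complex)
    for k in range(K + 1):
        U += (-1j * t) ** k / factorial(k) * np.linalg.matrix_power(H, k)
    return U

# Pauli-Z Hamiltonian: exact evolution is diag(exp(-it), exp(+it)).
H = np.diag([1.0, -1.0])
t = 0.1
exact = np.diag(np.exp(-1j * t * np.diag(H)))
approx = taylor_evolution(H, t, K=10)
# The error shrinks factorially with K -- the source of the
# exponential precision improvement the abstract refers to.
assert np.max(np.abs(approx - exact)) < 1e-12
```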
Lorenzo Milazzo
1997-05-01
An ASQS(v) is a particular Steiner system featuring a set X of v vertices and two separate families of blocks, B and G, whose elements have cardinality 4 and 6, respectively. It has the property that any three vertices of X belong either to a B-block or to a G-block. The parameter cb is the number of common blocks in two separate ASQSs, both defined on the same set of vertices X. In this paper it is shown that cb ≤ 29 for any pair of ASQSs(12).
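Computationally, cb is a plain set intersection once blocks are treated as unordered vertex sets. A toy sketch (the block families below are illustrative, not an actual ASQS(12)):

```python
def common_blocks(design1, design2):
    """cb = number of blocks shared by two designs on the same
    vertex set; blocks are compared as unordered vertex sets."""
    s1 = {frozenset(b) for b in design1}
    s2 = {frozenset(b) for b in design2}
    return len(s1 & s2)

# Two small block families over the same vertices (illustrative only).
d1 = [(1, 2, 3, 4), (1, 2, 5, 6), (3, 4, 5, 6)]
d2 = [(1, 2, 3, 4), (2, 3, 5, 6), (3, 4, 5, 6)]
assert common_blocks(d1, d2) == 2
```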
The Breakdown: Hillslope Sources of Channel Blocks in Bedrock Landscapes
Selander, B.; Anderson, S. P.; Rossi, M.
2017-12-01
Block delivery from hillslopes is a poorly understood process that influences bedrock channel incision rates and shapes steep terrain. Previous studies demonstrate that hillslope sediment delivery rate and grain size increase with channel downcutting rate or fracture density (Attal et al., 2015, ESurf). However, blocks that exceed the competence of the channel can inhibit incision. In Boulder Creek, a bedrock channel in the Colorado Front Range, large boulders (>1 m diameter) are most numerous in the steepest channel reaches; their distribution seems to reflect autogenic channel-hillslope feedback between incision rate and block delivery (Shobe et al., 2016, GRL). It is clear that the processes, rates of production, and delivery of large blocks from hillslopes into channels are critical to our understanding of steep terrain evolution. Fundamental questions are 1) whether block production or block delivery is rate limiting, 2) what mechanisms release blocks, and 3) how block production and transport affect slope morphology. As a first step, we map rock outcrops on the granodiorite hillslopes lining Boulder Creek within Boulder Canyon using a high resolution DEM. Our algorithm uses high ranges of curvature values in conjunction with slopes steeper than the angle of repose to quickly identify rock outcrops. We field verified mapped outcrop and sediment-mantled locations on hillslopes above and below the channel knickzone. We find a greater abundance of exposed rock outcrops on steeper hillslopes in Boulder Canyon. Additionally, we find that channel reaches with large in-channel blocks are located at the base of hillslopes with large areas of exposed bedrock, while reaches lacking large in-channel blocks tend to be at the base of predominantly soil mantled and forested hillslopes. These observations support the model of block delivery and channel incision of Shobe et al. (2016, GRL). Moreover, these results highlight the conundrum of how rapid channel incision is
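The outcrop-mapping criterion (slope above the angle of repose combined with high curvature magnitude) can be sketched on a synthetic DEM. The thresholds and the Laplacian curvature proxy below are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def map_outcrops(dem, cell, slope_deg=35.0, curv_thresh=0.5):
    """Flag cells as rock outcrop where slope exceeds the angle of
    repose AND curvature magnitude is high (illustrative thresholds)."""
    gy, gx = np.gradient(dem, cell)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    # Laplacian as a simple curvature proxy.
    d2y = np.gradient(np.gradient(dem, cell, axis=0), cell, axis=0)
    d2x = np.gradient(np.gradient(dem, cell, axis=1), cell, axis=1)
    curvature = np.abs(d2x + d2y)
    return (slope > slope_deg) & (curvature > curv_thresh)

# Synthetic DEM: a gentle plane broken by a sharp step (a "cliff").
x = np.arange(50, dtype=float)
dem = np.tile(np.where(x < 25, 0.1 * x, 0.1 * x + 40.0), (50, 1))
mask = map_outcrops(dem, cell=1.0)
# The cliff is flagged; the gentle slopes are not.
assert mask[:, 24:27].any() and not mask[:, :10].any()
```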
Adductor Canal Block versus Femoral Nerve Block and Quadriceps Strength
Jæger, Pia Therese; Nielsen, Zbigniew Jerzy Koscielniak; Henningsen, Lene Marianne
2013-01-01
The authors hypothesized that the adductor canal block (ACB), a predominant sensory blockade, reduces quadriceps strength compared with placebo (primary endpoint, area under the curve, 0.5-6 h), but less than the femoral nerve block (FNB; secondary endpoint). Other secondary endpoints were...
Bradish, G.J. III; Reid, A.E.
1986-01-01
The central instrumentation control and data acquisition (CICADA) computer system comprises a functionally distributed hierarchical network of thirteen (13) 32-bit mini-computers that are the heart of the control, monitoring, data collection and data analysis for the tokamak fusion test reactor (TFTR). The CICADA system was designed with the goal of providing complete control, monitoring, and data acquisition for TFTR, which includes the acquisition and storage of 20M points of data within a five-minute shot cycle. It was realized early in the system design that in order to meet this goal an ancillary system would have to be provided to supplement the subsystem CAMAC systems, which, due to the relatively slow throughput of the serial highways and the overhead of relaying data to the central facilities within a star network, would not provide the necessary throughput. The authors discuss how the block transfer system provided a means of moving data directly from the CAMAC crate to the application running on the central facility computers
Link adaptation algorithm for distributed coded transmissions in cooperative OFDMA systems
Varga, Mihaly; Badiu, Mihai Alin; Bota, Vasile
2015-01-01
This paper proposes a link adaptation algorithm for cooperative transmissions in the down-link connection of an OFDMA-based wireless system. The algorithm aims at maximizing the spectral efficiency of a relay-aided communication link, while satisfying the block error rate constraints at both...... adaptation algorithm has linear complexity with the number of available resource blocks, while still providing very good performance, as shown by simulation results....
Pseudo-deterministic Algorithms
Goldwasser , Shafi
2012-01-01
In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...
Richards, Anna; van den Maagdenberg, Arn M. J. M.; Jen, Joanna C.; Kavanagh, David; Bertram, Paula; Spitzer, Dirk; Liszewski, M. Kathryn; Barilla-LaBarca, Maria-Louise; Terwindt, Gisela M.; Kasai, Yumi; McLellan, Mike; Grand, Mark Gilbert; Vanmolkot, Kaate R. J.; de Vries, Boukje; Wan, Jijun; Kane, Michael J.; Mamsa, Hafsa; Schaefer, Ruth; Stam, Anine H.; Haan, Joost; Paulus, T. V. M. de Jong; Storimans, Caroline W.; van Schooneveld, Mary J.; Oosterhuis, Jendo A.; Gschwendter, Andreas; Dichgans, Martin; Kotschet, Katya E.; Hodgkinson, Suzanne; Hardy, Todd A.; Delatycki, Martin B.; Hajj-Ali, Rula A.; Kothari, Parul H.; Nelson, Stanley F.; Frants, Rune R.; Baloh, Robert W.; Ferrari, Michel D.; Atkinson, John P.
Autosomal dominant retinal vasculopathy with cerebral leukodystrophy is a microvascular endotheliopathy with middle-age onset. In nine families, we identified heterozygous C-terminal frameshift mutations in TREX1, which encodes a 3'-5' exonuclease. These truncated proteins retain exonuclease
Li, C.
1991-01-01
A new method based on a maximal-decoupling variational principle is proposed to treat the Pauli-principle constraints for calculations of nuclear collective motion in a truncated boson space. The viability of the method is demonstrated through an application to the multipole form of boson Hamiltonians for the single-j and nondegenerate multi-j pairing interactions. While these boson Hamiltonians are Hermitian and contain only one- and two-boson terms, they are also the worst case for truncated boson-space calculations because they are not amenable to any boson truncations at all. By using auxiliary Hamiltonians optimally determined by the maximal-decoupling variational principle, however, truncations in the boson space become feasible and even yield reasonably accurate results. The method proposed here may thus be useful for doing realistic calculations of nuclear collective motion as well as for obtaining a viable interacting-boson-model type of boson Hamiltonian from the shell model
Oishi, Masayo; Chiba, Koji; Fukushima, Takashi; Tomono, Yoshiro; Suwa, Toshio
2012-01-01
In regulatory guidelines for bioequivalence (BE) assessment, the definitions of AUC for primary assessment differ among ICH countries, i.e., AUC from zero to the last sampling point (AUCall) in Japan, AUC from zero to infinity (AUCinf) or AUC from zero to the last measurable point (AUClast) in the US, and AUClast in the EU. To assure sufficient accuracy of truncated AUC for BE assessment, the ratio of truncated AUC (AUCall or AUClast) to AUCinf should be more than 80% in both the Japanese and EU guidelines. We investigated how the difference in the definition of truncated AUC affects BE assessment of sustained release (SR) formulations. Our simulation result demonstrated that AUCall/AUCinf could be ≥80% even when AUClast/AUCinf was not. Thus, the difference in the definition of truncated AUC affected the judgment of validity of truncated AUC for BE assessment, and AUCall could fail to detect a substantially different in vivo dissolution profile of a generic SR drug relative to the original drug.
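The ratio check can be sketched with standard non-compartmental formulas: AUClast by the trapezoidal rule, and AUCinf by adding a C_last/λz tail, with λz taken from a log-linear fit of the terminal points. This is a generic illustration of the 80% criterion, not any guideline's reference implementation:

```python
import numpy as np

def truncated_auc_ratio(t, c, n_tail=3):
    """AUClast/AUCinf: trapezoidal AUClast plus a C_last/lambda_z
    extrapolated tail (lambda_z from a terminal log-linear fit)."""
    auc_last = np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2.0)
    slope = np.polyfit(t[-n_tail:], np.log(c[-n_tail:]), 1)[0]
    lambda_z = -slope
    auc_inf = auc_last + c[-1] / lambda_z
    return auc_last / auc_inf

# Mono-exponential decline sampled out to 24 h.
t = np.array([0.5, 1, 2, 4, 8, 12, 24], dtype=float)
c = 100 * np.exp(-0.1 * t)
ratio = truncated_auc_ratio(t, c)
# With this sampling the truncated AUC captures >80% of AUCinf.
assert ratio > 0.8
```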
Bagci, Hakan
2014-11-11
We study sweeping preconditioners for symmetric and positive definite block tridiagonal systems of linear equations. The algorithm provides an approximate inverse that can be used directly or in a preconditioned iterative scheme. These algorithms are based on replacing the Schur complements appearing in a block Gaussian elimination direct solve by hierarchical matrix approximations with reduced off-diagonal ranks. This involves developing low rank hierarchical approximations to inverses. We first provide a convergence analysis for the algorithm for reduced rank hierarchical inverse approximation. These results are then used to prove convergence and preconditioning estimates for the resulting sweeping preconditioner.
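The Schur complements in question arise in a block LU (block Thomas) forward elimination; the sweeping preconditioner replaces them with low-rank hierarchical approximations, whereas the sketch below forms them exactly for a small SPD block tridiagonal system:

```python
import numpy as np

def block_thomas(D, L, U, b):
    """Exact block LU solve for a block tridiagonal system.
    D[i]: diagonal blocks, L[i]: subdiagonal, U[i]: superdiagonal.
    S[i] are the Schur complements that the sweeping preconditioner
    approximates hierarchically; here they are formed exactly."""
    n = len(D)
    S, y = [None] * n, [None] * n
    S[0], y[0] = D[0], b[0]
    for i in range(1, n):                      # forward elimination
        G = L[i - 1] @ np.linalg.inv(S[i - 1])
        S[i] = D[i] - G @ U[i - 1]
        y[i] = b[i] - G @ y[i - 1]
    x = [None] * n
    x[-1] = np.linalg.solve(S[-1], y[-1])
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = np.linalg.solve(S[i], y[i] - U[i] @ x[i + 1])
    return np.concatenate(x)

# SPD test case: a 1D Laplacian split into 2x2 blocks.
k, n = 2, 4
D = [4 * np.eye(k) for _ in range(n)]
U = [-np.eye(k) for _ in range(n - 1)]
L = [-np.eye(k) for _ in range(n - 1)]
A = np.zeros((k * n, k * n))
for i in range(n):
    A[i*k:(i+1)*k, i*k:(i+1)*k] = D[i]
    if i < n - 1:
        A[i*k:(i+1)*k, (i+1)*k:(i+2)*k] = U[i]
        A[(i+1)*k:(i+2)*k, i*k:(i+1)*k] = L[i]
rng = np.random.default_rng(1)
bfull = rng.standard_normal(k * n)
x = block_thomas(D, L, U, [bfull[i*k:(i+1)*k] for i in range(n)])
assert np.allclose(A @ x, bfull)
```

Replacing each `S[i]` with a reduced-rank hierarchical approximation turns this exact direct solve into the approximate inverse used as a preconditioner.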
Bagci, Hakan; Pasciak, Joseph E.; Sirenko, Kostyantyn
2014-01-01
We study sweeping preconditioners for symmetric and positive definite block tridiagonal systems of linear equations. The algorithm provides an approximate inverse that can be used directly or in a preconditioned iterative scheme. These algorithms are based on replacing the Schur complements appearing in a block Gaussian elimination direct solve by hierarchical matrix approximations with reduced off-diagonal ranks. This involves developing low rank hierarchical approximations to inverses. We first provide a convergence analysis for the algorithm for reduced rank hierarchical inverse approximation. These results are then used to prove convergence and preconditioning estimates for the resulting sweeping preconditioner.
Chu, Xiaowen; Li, Bo; Chlamtac, Imrich
2002-07-01
Sparse wavelength conversion and appropriate routing and wavelength assignment (RWA) algorithms are the two key factors in improving the blocking performance in wavelength-routed all-optical networks. It has been shown that the optimal placement of a limited number of wavelength converters in an arbitrary mesh network is an NP-complete problem. Various heuristic algorithms have been proposed in the literature, most of which assume that a static routing and random wavelength assignment RWA algorithm is employed. However, existing work shows that fixed-alternate routing and dynamic routing RWA algorithms can achieve much better blocking performance. Our study in this paper further demonstrates that the wavelength converter placement and RWA algorithms are closely related, in the sense that a well designed wavelength converter placement mechanism for a particular RWA algorithm might not work well with a different RWA algorithm. Therefore, wavelength converter placement and RWA have to be considered jointly. The objective of this paper is to investigate the wavelength converter placement problem under the fixed-alternate routing algorithm and the least-loaded routing algorithm. Under the fixed-alternate routing algorithm, we propose a heuristic algorithm called the Minimum Blocking Probability First (MBPF) algorithm for wavelength converter placement. Under the least-loaded routing algorithm, we propose a heuristic converter placement algorithm called the Weighted Maximum Segment Length (WMSL) algorithm. The objective of the converter placement algorithm is to minimize the overall blocking probability. Extensive simulation studies have been carried out over three typical mesh networks, including the 14-node NSFNET, 19-node EON and 38-node CTNET. We observe that the proposed algorithms not only outperform existing wavelength converter placement algorithms by a large margin, but they also can achieve almost the same performance compared with full wavelength
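The metric being minimized is call blocking probability. On a single link with full wavelength conversion and Poisson arrivals it reduces to the classical Erlang B formula; the sketch below only illustrates that metric, not the MBPF or WMSL placement heuristics:

```python
def erlang_b(traffic, servers):
    """Erlang B blocking probability via the stable recurrence
    B(0) = 1, B(m) = a*B(m-1) / (m + a*B(m-1)),
    with a = offered traffic in erlangs and m = channels."""
    b = 1.0
    for m in range(1, servers + 1):
        b = traffic * b / (m + traffic * b)
    return b

# 10 erlangs offered to a link carrying 12 wavelengths.
p = erlang_b(10.0, 12)
assert 0.0 < p < 1.0
# Adding wavelengths (or converters that pool them) lowers blocking.
assert erlang_b(10.0, 20) < p
```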
OPAL Various Lead Glass Blocks
These lead glass blocks were part of a CERN detector called OPAL (one of the four experiments at the LEP collider). OPAL used some 12 000 blocks of glass like this to measure particle energies in the electromagnetic calorimeter. This detector measured the energy deposited when electrons and photons were slowed down and stopped.
Writing Blocks and Tacit Knowledge.
Boice, Robert
1993-01-01
A review of the literature on writing block looks at two kinds: inability to write in a timely, fluent fashion, and reluctance by academicians to assist others in writing. Obstacles to fluent writing are outlined, four historical trends in treating blocks are discussed, and implications are examined. (MSE)
Block storage subsystem performance analysis
CERN. Geneva
2016-01-01
You feel that your service is slow because of the storage subsystem? But there are too many abstraction layers between your software and the raw block device for you to debug all this pile... Let's dive down to the platters and check out how the block storage sees your I/Os! We can even figure out what those patterns mean.
Geography: The TIGER Line Files are feature classes and related database files (.) that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts; however, each TIGER Line File is designed to stand alone as an independent data set, or they can be combined to cover the entire nation. Census Blocks are statistical areas bounded on all sides by visible features, such as streets, roads, streams, and railroad tracks, and/or by non-visible boundaries such as city, town, township, and county limits, and short line-of-sight extensions of streets and roads. Census blocks are relatively small in area; for example, a block in a city bounded by streets. However, census blocks in remote areas are often large and irregular and may even be many square miles in area. A common misunderstanding is that data users think census blocks are used geographically to build all other census geographic areas; rather, all other census geographic areas are updated and then used as the primary constraints, along with roads and water features, to delineate the tabulation blocks. As a result, all 2010 Census blocks nest within every other 2010 Census geographic area, so that Census Bureau statistical data can be tabulated at the block level and aggregated up t
Identification of target genes for wild type and truncated HMGA2 in mesenchymal stem-like cells
Henriksen, Jørn Mølgaard; Stabell, Marianne; Meza-Zepeda, Leonardo A
2010-01-01
The HMGA2 gene, coding for an architectural transcription factor involved in mesenchymal embryogenesis, is frequently deranged by translocation and/or amplification in mesenchymal tumours, generally leading to over-expression of shortened transcripts and a truncated protein.
A Derandomized Algorithm for RP-ADMM with Symmetric Gauss-Seidel Method
Xu, Jinchao; Xu, Kailai; Ye, Yinyu
2017-01-01
For the multi-block alternating direction method of multipliers (ADMM), where the objective function can be decomposed into multiple block components, we show that with block symmetric Gauss-Seidel iteration the algorithm converges quickly. The method applies a block symmetric Gauss-Seidel iteration in the primal update and a linear correction that can be derived in view of Richardson iteration. We also establish the linear convergence rate for linear systems.
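A scalar symmetric Gauss-Seidel sweep (forward pass followed by backward pass) conveys the structure of the block update; the sketch below is a generic linear-system illustration, not the RP-ADMM primal step itself:

```python
import numpy as np

def sym_gauss_seidel(A, b, x, sweeps=100):
    """Symmetric Gauss-Seidel: one forward and one backward sweep per
    iteration (scalar version; the paper updates block components)."""
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):                      # forward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        for i in range(n - 1, -1, -1):          # backward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)     # SPD, so symmetric GS converges
b = rng.standard_normal(5)
x = sym_gauss_seidel(A, b, np.zeros(5))
assert np.allclose(A @ x, b, atol=1e-6)
```

The symmetric (forward + backward) sweep keeps the iteration matrix symmetric with respect to the A-inner product, which is what the convergence analysis exploits.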
Jia, Jing; Zhang, Yu; Han, Qingbang; Jing, Xueping
2017-10-01
This research studies the influence of truncations on the dispersion of wedge waves propagating along cylindrical wedges, using the laser ultrasound technique. Wedge waveguide models with different truncations were built using the finite element method (FEM), and the dispersion curves were obtained using the 2D Fourier transformation method. Multiple wedge wave modes were observed, in good agreement with the results estimated from Lagasse's empirical formula. We built cylindrical wedges with a radius of 3 mm, apex angles of 20° and 60°, and truncations of 0 μm, 5 μm, 10 μm, 20 μm, 30 μm, 40 μm, and 50 μm, respectively. It was found that a non-ideal wedge tip causes abnormal dispersion of the cylindrical wedge modes: as the truncation increases, the modes of the 20° cylindrical wedge take on the characteristics of guided waves propagating along a hollow cylinder. Likewise, the modes of the 60° cylindrical wedge with truncations exhibit the characteristics of guided waves propagating along a hollow cylinder, and these modes are observed clearly. The study can be used to evaluate and detect wedge structures.
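The 2D Fourier transformation method turns a space-time wavefield u(t, x) into an f-k magnitude map whose ridges trace the dispersion curves. A minimal sketch on a synthetic non-dispersive plane wave (all parameters illustrative):

```python
import numpy as np

def dispersion_fk(u, dt, dx):
    """2D FFT of a space-time wavefield u[t, x]; returns the |U(f, k)|
    magnitude map plus its frequency and wavenumber axes. Ridges of
    this map are the dispersion curves."""
    nt, nx = u.shape
    U = np.fft.fftshift(np.fft.fft2(u))
    f = np.fft.fftshift(np.fft.fftfreq(nt, dt))
    k = np.fft.fftshift(np.fft.fftfreq(nx, dx))
    return np.abs(U), f, k

# Non-dispersive plane wave at 2 kHz travelling at speed c:
# the f-k energy concentrates along the line f = c * k.
c, dt, dx = 50.0, 1e-4, 1e-3
t = np.arange(256)[:, None] * dt
x = np.arange(128)[None, :] * dx
u = np.sin(2 * np.pi * 2000.0 * (t - x / c))
mag, f, k = dispersion_fk(u, dt, dx)
ti, xi = np.unravel_index(np.argmax(mag), mag.shape)
# The peak sits near the 2 kHz excitation frequency.
assert abs(abs(f[ti]) - 2000.0) < 50.0
```

For a dispersive waveguide (such as a truncated wedge) the ridge bends instead of being a straight line, and tracking it bin by bin yields the mode's phase-velocity curve.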
Ganesh Ambigapathy
Brain-derived neurotrophic factor (BDNF) has a diverse functional role and complex pattern of gene expression. Alternative splicing of mRNA transcripts leads to further diversity of mRNAs and protein isoforms. Here, we describe the regulation of BDNF mRNA transcripts in an in vitro model of eyeblink classical conditioning and a unique transcript that forms a functionally distinct truncated BDNF protein isoform. Nine different mRNA transcripts from the BDNF gene of the pond turtle Trachemys scripta elegans (tBDNF) are selectively regulated during classical conditioning: exon I mRNA transcripts show no change, exon II transcripts are downregulated, while exon III transcripts are upregulated. One unique transcript that codes from exon II, tBDNF2a, contains a 40 base pair deletion in the protein coding exon that generates a truncated tBDNF protein. The truncated transcript and protein are expressed in the naïve untrained state and are fully repressed during conditioning when full-length mature tBDNF is expressed, thereby having an alternate pattern of expression in conditioning. Truncated BDNF is not restricted to turtles as a truncated mRNA splice variant has been described for the human BDNF gene. Further studies are required to determine the ubiquity of truncated BDNF alternative splice variants across species and the mechanisms of regulation and function of this newly recognized BDNF protein.
Ambigapathy, Ganesh; Zheng, Zhaoqing; Li, Wei; Keifer, Joyce
2013-01-01
Brain-derived neurotrophic factor (BDNF) has a diverse functional role and complex pattern of gene expression. Alternative splicing of mRNA transcripts leads to further diversity of mRNAs and protein isoforms. Here, we describe the regulation of BDNF mRNA transcripts in an in vitro model of eyeblink classical conditioning and a unique transcript that forms a functionally distinct truncated BDNF protein isoform. Nine different mRNA transcripts from the BDNF gene of the pond turtle Trachemys scripta elegans (tBDNF) are selectively regulated during classical conditioning: exon I mRNA transcripts show no change, exon II transcripts are downregulated, while exon III transcripts are upregulated. One unique transcript that codes from exon II, tBDNF2a, contains a 40 base pair deletion in the protein coding exon that generates a truncated tBDNF protein. The truncated transcript and protein are expressed in the naïve untrained state and are fully repressed during conditioning when full-length mature tBDNF is expressed, thereby having an alternate pattern of expression in conditioning. Truncated BDNF is not restricted to turtles as a truncated mRNA splice variant has been described for the human BDNF gene. Further studies are required to determine the ubiquity of truncated BDNF alternative splice variants across species and the mechanisms of regulation and function of this newly recognized BDNF protein.
Minimum BER Receiver Filters with Block Memory for Uplink DS-CDMA Systems
Mérouane Debbah
2008-05-01
The problem of synchronous multiuser receiver design in the case of direct-sequence single-antenna code division multiple access (DS-CDMA) uplink networks is studied over frequency selective fading channels. An exact expression for the bit error rate (BER) is derived in the case of BPSK signaling. Moreover, an algorithm is proposed for finding the finite impulse response (FIR) receiver filters with block memory such that the exact BER of the active users is minimized. Several properties of the minimum BER FIR filters with block memory are identified. The algorithm performance is found for scenarios with different channel qualities, spreading code lengths, receiver block memory sizes, near-far effects, and channel mismatch. For the BPSK constellation, the proposed FIR receiver structure with block memory has significantly better BER with respect to Eb/N0 and near-far resistance than the corresponding minimum mean square error (MMSE) filters with block memory.
Minimum BER Receiver Filters with Block Memory for Uplink DS-CDMA Systems
Debbah Mérouane
2008-01-01
The problem of synchronous multiuser receiver design in the case of direct-sequence single-antenna code division multiple access (DS-CDMA) uplink networks is studied over frequency selective fading channels. An exact expression for the bit error rate (BER) is derived in the case of BPSK signaling. Moreover, an algorithm is proposed for finding the finite impulse response (FIR) receiver filters with block memory such that the exact BER of the active users is minimized. Several properties of the minimum BER FIR filters with block memory are identified. The algorithm performance is found for scenarios with different channel qualities, spreading code lengths, receiver block memory sizes, near-far effects, and channel mismatch. For the BPSK constellation, the proposed FIR receiver structure with block memory has significantly better BER with respect to Eb/N0 and near-far resistance than the corresponding minimum mean square error (MMSE) filters with block memory.