Maximising information recovery from rank-order codes
Sen, B.; Furber, S.
2007-04-01
The central nervous system encodes information in sequences of asynchronously generated voltage spikes, but the precise details of this encoding are not well understood. Thorpe proposed rank-order codes as an explanation of the observed speed of information processing in the human visual system. The work described in this paper is inspired by the performance of SpikeNET, a biologically inspired neural architecture using rank-order codes for information processing, and is based on the retinal model developed by VanRullen and Thorpe. This model mimics retinal information processing by passing an input image through a bank of Difference of Gaussian (DoG) filters and then encoding the resulting coefficients in rank-order. To test the effectiveness of this encoding in capturing the information content of an image, the rank-order representation is decoded to reconstruct an image that can be compared with the original. The reconstruction uses a look-up table to infer the filter coefficients from their rank in the encoded image. Since the DoG filters are approximately orthogonal functions, they are treated as their own inverses in the reconstruction process. We obtained a quantitative measure of the perceptually important information retained in the reconstructed image relative to the original using a slightly modified version of an objective metric proposed by Petrovic. It is observed that around 75% of the perceptually important information is retained in the reconstruction. In the present work we reconstruct the input using a pseudo-inverse of the DoG filter-bank with the aim of improving the reconstruction and thereby extracting more information from the rank-order encoded stimulus. We observe that there is an increase of 10 - 15% in the information retrieved from a reconstructed stimulus as a result of inverting the filter-bank.
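The encode/decode loop described above can be sketched compactly. The snippet below is a minimal illustration only — the random filter bank, look-up-table values, and image size are hypothetical stand-ins for the DoG filters and rank-to-coefficient table of the actual model (which also separates ON/OFF channels so that coefficients stay positive):

```python
import numpy as np

# Hypothetical analysis filter bank: each row is one (flattened) DoG-like filter.
rng = np.random.default_rng(0)
filters = rng.standard_normal((8, 16))

def rank_order_encode(image):
    """Return filter indices ordered by decreasing coefficient magnitude."""
    coeffs = filters @ image
    return np.argsort(-np.abs(coeffs))

def rank_order_decode(order, lut):
    """Reconstruct by assigning each rank a look-up-table magnitude and
    treating the filters as their own (approximate) inverses."""
    recon = np.zeros(filters.shape[1])
    for rank, idx in enumerate(order):
        recon += lut[rank] * filters[idx]
    return recon
```

Replacing the `filters[idx]` synthesis with a pseudo-inverse of the filter bank is precisely the refinement the abstract reports, which recovers 10–15% more information.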
Encoding of QC-LDPC Codes of Rank Deficient Parity Matrix
Directory of Open Access Journals (Sweden)
Mohammed Kasim Mohammed Al-Haddad
2016-05-01
Full Text Available: The encoding of long low density parity check (LDPC) codes presents a challenge compared to their decoding. Quasi-cyclic (QC) LDPC codes offer the advantage of reduced complexity for both encoding and decoding due to their QC structure. Most QC-LDPC codes have a rank-deficient parity matrix, which introduces extra complexity over codes with a full-rank parity matrix. In this paper an encoding scheme for QC-LDPC codes is presented that is suitable for codes with either a full-rank or a rank-deficient parity matrix. The extra effort required by codes with a rank-deficient parity matrix over codes with a full-rank parity matrix is investigated.
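As a toy illustration of the rank issue (not the paper's QC-LDPC encoding scheme), the GF(2) rank of a parity-check matrix can be computed by binary Gaussian elimination; a linearly dependent row means the matrix imposes fewer independent parity constraints than it has rows, which is exactly what complicates encoding:

```python
import numpy as np

def gf2_row_reduce(H):
    """Row-reduce a binary matrix over GF(2); return reduced matrix and pivot columns."""
    H = H.copy() % 2
    pivots = []
    r = 0
    for c in range(H.shape[1]):
        rows = np.nonzero(H[r:, c])[0]
        if rows.size == 0:
            continue                      # no pivot in this column
        H[[r, r + rows[0]]] = H[[r + rows[0], r]]   # bring pivot row up
        for i in range(H.shape[0]):
            if i != r and H[i, c]:
                H[i] ^= H[r]              # eliminate column c elsewhere
        pivots.append(c)
        r += 1
        if r == H.shape[0]:
            break
    return H, pivots

# A deliberately rank-deficient toy parity matrix: the last row is the
# XOR of the first two, so rank = 2 < 3 rows.
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1],
              [1, 0, 1, 0]], dtype=np.uint8)
_, pivots = gf2_row_reduce(H)
rank = len(pivots)
```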
Knowledge extraction from evolving spiking neural networks with rank order population coding.
Soltic, Snjezana; Kasabov, Nikola
2010-12-01
This paper demonstrates how knowledge can be extracted from evolving spiking neural networks with rank order population coding. Knowledge discovery is a very important feature of intelligent systems. Yet, a disproportionately small amount of research is centered on the issue of knowledge extraction from spiking neural networks, which are considered to be the third generation of artificial neural networks. The lack of knowledge representation compatibility is becoming a major detriment to end users of these networks. We show that high-level knowledge can be obtained from evolving spiking neural networks. More specifically, we propose a method for fuzzy rule extraction from an evolving spiking network with rank order population coding. The proposed method was used for knowledge discovery on two benchmark taste recognition problems, where the knowledge learnt by an evolving spiking neural network was extracted in the form of zero-order Takagi-Sugeno fuzzy IF-THEN rules.
Energy Technology Data Exchange (ETDEWEB)
Lee, Won Woong; Lee, Jeong Ik [KAIST, Daejeon (Korea, Republic of)
2016-05-15
Existing nuclear system analysis codes such as RELAP5, TRAC, MARS, and SPACE use first-order numerical schemes in both spatial and temporal discretization. However, the first-order scheme is highly diffusive and less accurate owing to its first-order truncation error. The resulting numerical diffusion smooths gradients in regions where they should be steep, which can make analyses less conservative than reality; the first-order scheme is therefore not always adequate for applications such as boron solute transport. RELAP7, an advanced nuclear reactor system safety analysis code that uses second-order schemes in both temporal and spatial discretization, has been under development at INL (Idaho National Laboratory) since 2011. For better predictive performance in nuclear reactor safety analysis, Korea likewise needs a more accurate system analysis code to follow this global trend. This study therefore evaluates the feasibility of applying higher-order numerical schemes to a next-generation nuclear system analysis code, providing a basis for its development. The spatial second-order scheme enhances accuracy and alleviates the numerical diffusion problem, but it exhibits a significantly lower maximum Courant limit and suffers from numerical dispersion, which produces spurious oscillations and non-physical results. If the spatial scheme is first order, the temporal second-order scheme gives almost the same results as the temporal first-order scheme; however, when the temporal and spatial second-order schemes are applied together, the numerical dispersion can become more severe. For a more in-depth study, the verification and validation of the NTS code built in MATLAB will be conducted further and expanded to handle two
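The diffusion/dispersion trade-off described above is easy to reproduce on the 1-D linear advection equation. The sketch below uses illustrative parameters (not from the paper) to compare first-order upwind with the second-order Lax-Wendroff scheme: the first-order result smears the pulse noticeably, while the second-order one preserves the peak at the cost of dispersive oscillations:

```python
import numpy as np

# Advect a smooth pulse at unit speed on a periodic grid.
n, cfl, steps = 200, 0.5, 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
f0 = np.exp(-((x - 0.3) / 0.05) ** 2)

up = f0.copy()
lw = f0.copy()
for _ in range(steps):
    # First-order upwind: diffusive (leading truncation error ~ u*dx*(1-cfl)/2).
    up = up - cfl * (up - np.roll(up, 1))
    # Second-order Lax-Wendroff: sharper, but its leading error is dispersive.
    lw = (lw - 0.5 * cfl * (np.roll(lw, -1) - np.roll(lw, 1))
             + 0.5 * cfl**2 * (np.roll(lw, -1) - 2 * lw + np.roll(lw, 1)))
```

Both schemes are conservative (the discrete integral is preserved exactly on the periodic grid), but the first-order peak decays much faster — the numerical diffusion the abstract warns about.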
Hierarchical partial order ranking
International Nuclear Information System (INIS)
Carlsen, Lars
2008-01-01
Assessing the potential impact on environmental and human health from the production and use of chemicals, or from polluted sites, involves a multi-criteria evaluation scheme. A priori, several parameters must be addressed, e.g., production tonnage, specific release scenarios, and geographical and site-specific factors, in addition to various substance-dependent parameters. Furthermore, socio-economic factors may be taken into consideration. The number of parameters to be included may well appear prohibitive for developing a sensible model. This study introduces hierarchical partial order ranking (HPOR), which remedies this problem. In HPOR the original parameters are initially grouped based on their mutual connection, and a set of meta-descriptors is derived, each representing the ranking corresponding to a single group of descriptors. A second partial order ranking is then carried out based on the meta-descriptors, the final ranking being disclosed through average ranks. An illustrative example on the prioritisation of polluted sites is given. - Hierarchical partial order ranking of polluted sites has been developed for prioritization based on a large number of parameters
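The core of partial order ranking can be illustrated on a tiny example. The brute-force sketch below enumerates all linear extensions of the dominance order and averages the ranks — feasible only for a handful of objects (the actual HPOR methodology relies on grouping into meta-descriptors to scale); the site data are hypothetical:

```python
from itertools import permutations

def strictly_dominates(a, b):
    """a scores at least as high as b on every criterion, and higher on at least one."""
    return all(x >= y for x, y in zip(a, b)) and a != b

def average_ranks(objects):
    """Average rank of each object over all linear extensions (brute force)."""
    keys = list(objects)
    totals = dict.fromkeys(keys, 0)
    count = 0
    for perm in permutations(keys):
        # A linear extension never places a dominated item before its dominator.
        if any(strictly_dominates(objects[perm[j]], objects[perm[i]])
               for i in range(len(perm)) for j in range(i + 1, len(perm))):
            continue
        count += 1
        for pos, k in enumerate(perm):
            totals[k] += pos + 1
    return {k: totals[k] / count for k in keys}

# Three sites scored on two pollution criteria (hypothetical data):
# C dominates both A and B; A and B are incomparable.
sites = {"A": (2, 1), "B": (1, 2), "C": (2, 2)}
ranks = average_ranks(sites)
```

Here the two linear extensions (C, A, B) and (C, B, A) give C an average rank of 1 and the incomparable pair A, B an average rank of 2.5 each.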
When sparse coding meets ranking: a joint framework for learning sparse codes and ranking scores
Wang, Jim Jing-Yan
2017-06-28
Sparse coding, which represents a data point as a sparse reconstruction code with regard to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays an important role. Up to now, these two problems have always been considered separately, assuming that data coding and ranking are two independent and irrelevant problems. However, is there any internal relationship between sparse coding and ranking score learning? If yes, how to explore and make use of this internal relationship? In this paper, we try to answer these questions by developing the first joint sparse coding and ranking score learning algorithm. To explore the local distribution in the sparse code space, and also to bridge coding and ranking problems, we assume that in the neighborhood of each data point, the ranking scores can be approximated from the corresponding sparse codes by a local linear function. By considering the local approximation error of ranking scores, the reconstruction error and sparsity of sparse coding, and the query information provided by the user, we construct a unified objective function for learning of sparse codes, the dictionary and ranking scores. We further develop an iterative algorithm to solve this optimization problem.
Generalized rank weights of reducible codes, optimal cases and related properties
DEFF Research Database (Denmark)
Martinez Peñas, Umberto
2018-01-01
in network coding. In this paper, we study their security behavior against information leakage on networks when applied as coset coding schemes, giving the following main results: 1) we give lower and upper bounds on their generalized rank weights (GRWs), which measure worst case information leakage...... to the wire tapper; 2) we find new parameters for which these codes are MRD (meaning that their first GRW is optimal) and use the previous bounds to estimate their higher GRWs; 3) we show that all linear (over the extension field) codes, whose GRWs are all optimal for fixed packet and code sizes but varying...... length are reducible codes up to rank equivalence; and 4) we show that the information leaked to a wire tapper when using reducible codes is often much less than the worst case given by their (optimal in some cases) GRWs. We conclude with some secondary related properties: conditions to be rank...
Low-Rank Sparse Coding for Image Classification
Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Xu, Changsheng; Ahuja, Narendra
2013-01-01
In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.
Frames for exact inversion of the rank order coder.
Masmoudi, Khaled; Antonini, Marc; Kornprobst, Pierre
2012-02-01
Our goal is to revisit rank order coding by proposing an original exact decoding procedure for it. Rank order coding was proposed by Thorpe et al., who stated that the order in which retinal cells are activated encodes the visual stimulus. Based on this idea, the authors of [1] proposed a rank order coder/decoder associated with a retinal model. However, the decoding procedure employed yields reconstruction errors that limit the model's bit-cost/quality performance when used as an image codec. The attempts made in the literature to overcome this issue are time consuming and alter the coding procedure, or lack mathematical support and feasibility for standard-size images. Here we solve this problem in an original fashion using frame theory, where a frame of a vector space extends the notion of a basis. Our contribution is twofold. First, we prove that the analyzing filter bank considered is a frame, and we define the corresponding dual frame that is necessary for exact image reconstruction. Second, to deal with the problem of memory overhead, we design a recursive out-of-core blockwise algorithm for the computation of this dual frame. Our work provides a mathematical formalism for the retinal model under study and defines a simple and exact reverse transform for it, with more than 265 dB of increase in peak signal-to-noise ratio compared to [1]. Furthermore, the framework presented here can be extended to several models of the visual cortical areas using redundant representations.
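The role of the dual frame can be illustrated generically: for any injective analysis operator, the canonical dual is given by the pseudo-inverse, and synthesis with it inverts the analysis exactly. The sketch below uses a small random frame, not the paper's filter bank or its out-of-core dual-frame algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
# Analysis frame: 8 vectors in R^5 (redundant; full column rank almost surely).
F = rng.standard_normal((8, 5))
x = rng.standard_normal(5)
coeffs = F @ x                      # analysis step (the "filter bank" output)

# Canonical dual frame via the pseudo-inverse: F_dual = (F^T F)^{-1} F^T.
F_dual = np.linalg.pinv(F)
x_rec = F_dual @ coeffs             # exact synthesis / reconstruction
```

Because F has full column rank, `F_dual @ F` is the identity and the reconstruction is exact up to floating-point error — the property the paper establishes for the retinal filter bank.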
A Model-Free Scheme for Meme Ranking in Social Media.
He, Saike; Zheng, Xiaolong; Zeng, Daniel
2016-01-01
The prevalence of social media has greatly catalyzed the dissemination and proliferation of online memes (e.g., ideas, topics, melodies, tags, etc.). However, this information abundance is exceeding the capability of online users to consume it. Ranking memes based on their popularity could promote online advertisement and content distribution. Despite such importance, few existing works solve this problem well: they are hampered either by impractical assumptions or by an inability to characterize dynamic information. As such, in this paper we elaborate a model-free scheme to rank online memes in the context of social media. This scheme is capable of characterizing the nonlinear interactions of online users, which mark the process of meme diffusion. Empirical studies on two large-scale, real-world datasets (one in English and one in Chinese) demonstrate the effectiveness and robustness of the proposed scheme. In addition, due to its fine-grained modeling of user dynamics, this ranking scheme can also be utilized to explain meme popularity through the lens of social influence.
International Nuclear Information System (INIS)
Liu Yueqiang; Albanese, R.; Rubinacci, G.; Portone, A.; Villone, F.
2008-01-01
In order to model a magnetohydrodynamic (MHD) instability that strongly couples to external conducting structures (walls and/or coils) in a fusion device, it is often necessary to combine an MHD code solving for the plasma response with an eddy current code computing the fields and currents of conductors. We present a rigorous proof of the coupling schemes between these two types of codes. One of the coupling schemes has been introduced and implemented in the CARMA code [R. Albanese, Y. Q. Liu, A. Portone, G. Rubinacci, and F. Villone, IEEE Trans. Magn. 44, 1654 (2008); A. Portone, F. Villone, Y. Q. Liu, R. Albanese, and G. Rubinacci, Plasma Phys. Controlled Fusion 50, 085004 (2008)], which couples the MHD code MARS-F [Y. Q. Liu, A. Bondeson, C. M. Fransson, B. Lennartson, and C. Breitholtz, Phys. Plasmas 7, 3681 (2000)] and the eddy current code CARIDDI [R. Albanese and G. Rubinacci, Adv. Imaging Electron Phys. 102, 1 (1998)]. While the coupling schemes are described for a general toroidal geometry, we give the analytical proof for a cylindrical plasma.
Contests with rank-order spillovers
M.R. Baye (Michael); D. Kovenock (Dan); C.G. de Vries (Casper)
2012-01-01
This paper presents a unified framework for characterizing symmetric equilibrium in simultaneous-move, two-player, rank-order contests with complete information, in which each player's strategy generates direct or indirect affine "spillover" effects that depend on the rank-order of her
Convolutional Codes with Maximum Column Sum Rank for Network Streaming
Mahmood, Rafid; Badr, Ahmed; Khisti, Ashish
2015-01-01
The column Hamming distance of a convolutional code determines the error correction capability when streaming over a class of packet erasure channels. We introduce a metric, known as the column sum rank, that parallels the column Hamming distance when streaming over a network with link failures. We prove rank analogues of several known column Hamming distance properties and introduce a new family of convolutional codes that maximize the column sum rank up to the code memory. Our construction invol...
High order scheme for the non-local transport in ICF plasmas
Energy Technology Data Exchange (ETDEWEB)
Feugeas, J.L.; Nicolai, Ph.; Schurtz, G. [Bordeaux-1 Univ., Centre Lasers Intenses et Applications (UMR 5107), 33 - Talence (France); Charrier, P.; Ahusborde, E. [Bordeaux-1 Univ., MAB, 33 - Talence (France)
2006-06-15
A practical high-order scheme for a model of non-local transport is proposed here for use in multidimensional radiation hydrodynamic codes. A high-order scheme is necessary to solve non-local problems on strongly deformed meshes, such as those in hot-spot or ablation-front zones. It is shown that the errors made by a classical 5-point scheme on a disturbed grid can be of the same order of magnitude as the non-local effects themselves. The use of a 9-point scheme thus appears to be essential in simulations of inertial confinement fusion.
LDPC-PPM Coding Scheme for Optical Communication
Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael
2009-01-01
In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.
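The bit-to-PPM mapping at the heart of the modulation step can be sketched as follows; the slot count is illustrative, and the real scheme would follow this with Poisson-channel demodulation and LDPC decoding:

```python
import numpy as np

def ppm_modulate(bits, M=16):
    """Map groups of log2(M) bits to M-slot PPM symbols (one pulse per symbol)."""
    k = int(np.log2(M))
    assert len(bits) % k == 0, "bit stream must divide into log2(M)-bit groups"
    symbols = []
    for i in range(0, len(bits), k):
        # Interpret the k-bit group as the index of the pulsed slot.
        slot = int("".join(map(str, bits[i:i + k])), 2)
        sym = np.zeros(M, dtype=int)
        sym[slot] = 1
        symbols.append(sym)
    return np.concatenate(symbols)
```

For example, with M = 4, the bit stream `0,0,1,0` maps to two symbols pulsing slots 0 and 2.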
Directory of Open Access Journals (Sweden)
Jian Jiao
2017-11-01
Full Text Available In this paper, we propose a rate-compatible (RC parallel concatenated punctured polar (PCPP codes for incremental redundancy hybrid automatic repeat request (IR-HARQ transmission schemes, which can transmit multiple data blocks over a time-varying channel. The PCPP coding scheme can provide RC polar coding blocks in order to adapt to channel variations. First, we investigate an improved random puncturing (IRP pattern for the PCPP coding scheme due to the code-rate and block length limitations of conventional polar codes. The proposed IRP algorithm only select puncturing bits from the frozen bits set and keep the information bits unchanged during puncturing, which can improve 0.2–1 dB decoding performance more than the existing random puncturing (RP algorithm. Then, we develop a RC IR-HARQ transmission scheme based on PCPP codes. By analyzing the overhead of the previous successful decoded PCPP coding block in our IR-HARQ scheme, the optimal initial code-rate can be determined for each new PCPP coding block over time-varying channels. Simulation results show that the average number of transmissions is about 1.8 times for each PCPP coding block in our RC IR-HARQ scheme with a 2-level PCPP encoding construction, which can reduce half of the average number of transmissions than the existing RC polar coding schemes.
Developing a Coding Scheme to Analyse Creativity in Highly-constrained Design Activities
DEFF Research Database (Denmark)
Dekoninck, Elies; Yue, Huang; Howard, Thomas J.
2010-01-01
This work is part of a larger project which aims to investigate the nature of creativity and the effectiveness of creativity tools in highly-constrained design tasks. This paper presents the research where a coding scheme was developed and tested with a designer-researcher who conducted two rounds...... of design and analysis on a highly constrained design task. This paper shows how design changes can be coded using a scheme based on creative ‘modes of change’. The coding scheme can show the way a designer moves around the design space, and particularly the strategies that are used by a creative designer...... larger study with more designers working on different types of highly-constrained design task is needed, in order to draw conclusions on the modes of change and their relationship to creativity....
Wang, Jinling; Belatreche, Ammar; Maguire, Liam P; McGinnity, Thomas Martin
2017-01-01
This paper presents an enhanced rank-order-based learning algorithm, called SpikeTemp, for spiking neural networks (SNNs) with a dynamically adaptive structure. The trained feed-forward SNN consists of two layers of spiking neurons: 1) an encoding layer which temporally encodes real-valued features into spatio-temporal spike patterns and 2) an output layer of dynamically grown neurons which perform spatio-temporal classification. Both Gaussian receptive fields and square cosine population encoding schemes are employed to encode real-valued features into spatio-temporal spike patterns. Unlike the rank-order-based learning approach, SpikeTemp uses the precise times of the incoming spikes for adjusting the synaptic weights such that early spikes result in a large weight change and late spikes lead to a smaller weight change. This removes the need to rank all the incoming spikes and, thus, reduces the computational cost of SpikeTemp. The proposed SpikeTemp algorithm is demonstrated on several benchmark data sets and on an image recognition task. The results show that SpikeTemp can achieve better classification performance and is much faster than the existing rank-order-based learning approach. In addition, the number of output neurons is much smaller when the square cosine encoding scheme is employed. Furthermore, SpikeTemp is benchmarked against a selection of existing machine learning algorithms, and the results demonstrate the ability of SpikeTemp to classify different data sets after just one presentation of the training samples with comparable classification performance.
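The Gaussian receptive field encoding step can be sketched in a few lines: each neuron's activation for an input value is converted to a spike time, with stronger activation firing earlier. The parameters below (number of neurons, time window) are hypothetical, not those used for SpikeTemp:

```python
import numpy as np

def grf_encode(value, n_neurons=6, vmin=0.0, vmax=1.0, t_max=10.0):
    """Encode a scalar into n spike times via Gaussian receptive fields.
    Higher activation -> earlier spike (illustrative parameters)."""
    centers = np.linspace(vmin, vmax, n_neurons)
    width = (vmax - vmin) / (n_neurons - 1)
    activation = np.exp(-0.5 * ((value - centers) / width) ** 2)
    return t_max * (1.0 - activation)   # spike times in [0, t_max]
```

A rank-order learner would use only the ordering of these times, whereas SpikeTemp's point is to use the precise times themselves for weight updates.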
Low rank approach to computing first and higher order derivatives using automatic differentiation
International Nuclear Information System (INIS)
Reed, J. A.; Abdel-Khalik, H. S.; Utke, J.
2012-01-01
This manuscript outlines a new approach for increasing the efficiency of applying automatic differentiation (AD) to large scale computational models. By using the principles of the Efficient Subspace Method (ESM), low rank approximations of the derivatives for first and higher orders can be calculated using minimized computational resources. The output obtained from nuclear reactor calculations typically has a much smaller numerical rank compared to the number of inputs and outputs. This rank deficiency can be exploited to reduce the number of derivatives that need to be calculated using AD. The effective rank can be determined according to ESM by computing derivatives with AD at random inputs. Reduced or pseudo variables are then defined and new derivatives are calculated with respect to the pseudo variables. Two different AD packages are used: OpenAD and Rapsodia. OpenAD is used to determine the effective rank and the subspace that contains the derivatives. Rapsodia is then used to calculate derivatives with respect to the pseudo variables for the desired order. The overall approach is applied to two simple problems and to MATWS, a safety code for sodium cooled reactors. (authors)
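The rank-probing idea behind ESM can be demonstrated on a linear toy model: because the outputs of a low-rank linear map at random inputs span its range, a handful of random probes reveals the effective rank. Here plain matrix products stand in for the AD-computed derivatives, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, true_rank = 50, 40, 3
# Synthetic model whose Jacobian has rank 3 (stands in for a reactor code
# whose outputs have small numerical rank relative to input/output counts).
A = rng.standard_normal((n_out, true_rank)) @ rng.standard_normal((true_rank, n_in))
model = lambda x: A @ x

# Probe the model at random inputs; for this linear toy the outputs directly
# span the range of the Jacobian (AD would supply these directions in general).
k = 10
probes = np.column_stack([model(rng.standard_normal(n_in)) for _ in range(k)])
s = np.linalg.svd(probes, compute_uv=False)
eff_rank = int((s > 1e-8 * s[0]).sum())   # numerical rank of the response space
```

Once the effective rank is known, derivatives need only be computed with respect to that many pseudo variables instead of all 50 inputs — the source of the efficiency gain described above.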
Li, Gaohua; Fu, Xiang; Wang, Fuxin
2017-10-01
The low-dissipation high-order accurate hybrid upwind/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow-feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolution. The overset grid assembly (OGA) process, based on collection detection theory and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed codes. Results for flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of a high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution of simulations of turbulent wake eddies.
Computational Aero-Acoustic Using High-order Finite-Difference Schemes
DEFF Research Database (Denmark)
Zhu, Wei Jun; Shen, Wen Zhong; Sørensen, Jens Nørkær
2007-01-01
In this paper, a high-order technique to accurately predict flow-generated noise is introduced. The technique consists of solving the viscous incompressible flow equations and inviscid acoustic equations using an incompressible/compressible splitting technique. The incompressible flow equations...... are solved using the in-house flow solver EllipSys2D/3D, which is a second-order finite volume code. The acoustic solution is found by solving the acoustic equations using high-order finite difference schemes. The incompressible flow equations and the acoustic equations are solved at the same time levels......
Energy Technology Data Exchange (ETDEWEB)
Park, Ju Yeop; In, Wang Kee; Chun, Tae Hyun; Oh, Dong Seok [Korea Atomic Energy Research Institute, Taejeon (Korea)
2000-02-01
An orthogonal two-dimensional numerical code has been developed. The present code contains nine widely used turbulence models: a standard k-ε model and eight low-Reynolds-number variants. It also includes six numerical schemes: five low-order schemes and one high-order scheme, QUICK. To verify the present code, pipe flow, channel flow, and expansion-pipe flow are solved with various combinations of turbulence models and numerical schemes, and the calculated outputs are compared with experimental data. Furthermore, the discretization error that originates from the use of the standard k-ε turbulence model with a wall function is greatly diminished by introducing a new grid system in place of the conventional one in the present code. 23 refs., 58 figs., 6 tabs. (Author)
A Computer Oriented Scheme for Coding Chemicals in the Field of Biomedicine.
Bobka, Marilyn E.; Subramaniam, J.B.
The chemical coding scheme of the Medical Coding Scheme (MCS), developed for use in the Comparative Systems Laboratory (CSL), is outlined and evaluated in this report. The chemical coding scheme provides a classification scheme and encoding method for drugs and chemical terms. Using the scheme, complicated chemical structures may be expressed…
Portelli, Geoffrey; Barrett, John M; Hilgen, Gerrit; Masquelier, Timothée; Maccione, Alessandro; Di Marco, Stefano; Berdondini, Luca; Kornprobst, Pierre; Sernagor, Evelyne
2016-01-01
How a population of retinal ganglion cells (RGCs) encodes the visual scene remains an open question. Going beyond individual RGC coding strategies, results in salamander suggest that the relative latencies of an RGC pair encode spatial information. Thus, a population code based on this concerted spiking could be a powerful mechanism to transmit visual information rapidly and efficiently. Here, we tested this hypothesis in mouse by recording simultaneous light-evoked responses from hundreds of RGCs, at pan-retinal level, using a new generation of large-scale, high-density multielectrode arrays consisting of 4096 electrodes. Interestingly, we did not find any RGCs exhibiting a clear latency tuning to the stimuli, suggesting that in mouse, individual RGC pairs may not provide sufficient information. We show that a significant amount of information is encoded synergistically in the concerted spiking of large RGC populations. Thus, the RGC population response described with relative activities, or ranks, provides more relevant information than classical independent spike-count- or latency-based codes. In particular, we report for the first time that when considering the relative activities across the whole population, the wave of first stimulus-evoked spikes is an accurate indicator of stimulus content. We show that this coding strategy coexists with classical neural codes, and that it is more efficient and faster. Overall, these novel observations suggest that already at the level of the retina, concerted spiking provides a reliable and fast strategy to rapidly transmit new visual scenes.
photon-plasma: A modern high-order particle-in-cell code
International Nuclear Information System (INIS)
Haugbølle, Troels; Frederiksen, Jacob Trier; Nordlund, Åke
2013-01-01
We present the photon-plasma code, a modern high-order charge-conserving particle-in-cell code for simulating relativistic plasmas. The code uses a high-order implicit field solver and a novel high-order charge-conserving interpolation scheme for particle-to-cell interpolation and charge deposition. It includes powerful diagnostics tools with on-the-fly particle tracking, synthetic spectra integration, 2D volume slicing, and a new method to correctly account for radiative cooling in the simulations. A robust technique for imposing (time-dependent) particle and field fluxes on the boundaries is also presented. Using a hybrid OpenMP and MPI approach, the code scales efficiently from 8 to more than 250,000 cores with almost linear weak scaling on a range of architectures. The code is tested with the classical benchmarks: particle heating, cold-beam instability, and two-stream instability. We also present particle-in-cell simulations of the Kelvin-Helmholtz instability, and new results on radiative collisionless shocks.
Nuclear Reactor Component Code CUPID-I: Numerical Scheme and Preliminary Assessment Results
International Nuclear Information System (INIS)
Cho, Hyoung Kyu; Jeong, Jae Jun; Park, Ik Kyu; Kim, Jong Tae; Yoon, Han Young
2007-12-01
A component-scale thermal-hydraulic analysis code, CUPID (Component Unstructured Program for Interfacial Dynamics), is being developed for the analysis of components of a nuclear reactor, such as the reactor vessel, steam generator, and containment. It adopts a three-dimensional, transient, two-phase, three-field model. In order to develop the numerical schemes for the three-field model, various numerical schemes have been examined, including SMAC, semi-implicit ICE, SIMPLE, and the Row Scheme. Among them, the ICE scheme for the three-field model is presented in this report. The CUPID code utilizes an unstructured mesh for the simulation of the complicated geometries of nuclear reactor components. The conventional ICE scheme applied to RELAP5 and COBRA-TF was therefore modified for application to the unstructured mesh. Preliminary calculations with the unstructured semi-implicit ICE scheme have been conducted to verify the numerical method from a qualitative point of view. The preliminary results showed that the present numerical scheme is robust and efficient for the prediction of phase changes and flow transitions due to boiling and flashing. These results also showed the strong coupling between pressure and void-fraction changes. Thus, it is believed that the semi-implicit ICE scheme can be utilized for transient two-phase flows in a component of a nuclear reactor.
Unequal Error Protected JPEG 2000 Broadcast Scheme with Progressive Fountain Codes
Chen, Zhao; Xu, Mai; Yin, Luiguo; Lu, Jianhua
2012-01-01
This paper proposes a novel scheme, based on progressive fountain codes, for broadcasting JPEG 2000 multimedia. In such a broadcast scheme, progressive resolution levels of images/video have been unequally protected when transmitted using the proposed progressive fountain codes. With progressive fountain codes applied in the broadcast scheme, the resolutions of images (JPEG 2000) or videos (MJPEG 2000) received by different users can be automatically adaptive to their channel qualities, i.e. ...
Peng, Miao; Chen, Ming; Zhou, Hui; Wan, Qiuzhen; Jiang, LeYong; Yang, Lin; Zheng, Zhiwei; Chen, Lin
2018-01-01
High peak-to-average power ratio (PAPR) of the transmit signal is a major drawback of optical orthogonal frequency division multiplexing (OOFDM) systems. In this paper, we propose and experimentally demonstrate a novel hybrid scheme combining Huffman coding and Discrete Fourier Transform-spread (DFT-spread) in order to reduce the high PAPR in a 16-QAM short-reach intensity-modulated direct-detection OOFDM (IMDD-OOFDM) system. The experimental results demonstrate that the hybrid scheme can reduce the PAPR by about 1.5, 2, 3 and 6 dB, and achieve 1.5, 1, 2.5 and 3 dB receiver sensitivity improvements compared to clipping, DFT-spread, Huffman coding, and original OFDM signals, respectively, at an error vector magnitude (EVM) of -10 dB after transmission over 20 km of standard single-mode fiber (SSMF). Furthermore, the throughput gain can be of the order of 30% using the hybrid scheme compared with the case without Huffman coding.
Determination of Solution Accuracy of Numerical Schemes as Part of Code and Calculation Verification
Energy Technology Data Exchange (ETDEWEB)
Blottner, F.G.; Lopez, A.R.
1998-10-01
This investigation is concerned with the accuracy of numerical schemes for solving partial differential equations used in science and engineering simulation codes. Richardson extrapolation methods for steady and unsteady problems with structured meshes are presented as part of the verification procedure to determine code and calculation accuracy. The local truncation error determination of a numerical difference scheme is shown to be a significant component of the verification procedure as it determines the consistency of the numerical scheme, the order of the numerical scheme, and the restrictions on the mesh variation with a non-uniform mesh. Generation of a series of co-located, refined meshes with the appropriate variation of mesh cell size is investigated and is another important component of the verification procedure. The importance of mesh refinement studies is shown to be more significant than just a procedure to determine solution accuracy. It is suggested that mesh refinement techniques can be developed to determine consistency of numerical schemes and to determine if governing equations are well posed. The present investigation provides further insight into the conditions and procedures required to effectively use Richardson extrapolation with mesh refinement studies to achieve confidence that simulation codes are producing accurate numerical solutions.
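The Richardson-extrapolation machinery referred to above is standard; a minimal sketch, assuming scalar solutions on three systematically refined meshes with a constant refinement ratio, might look like this (the toy data are manufactured so that the exact answer is known):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Estimate the observed order of accuracy p from solutions on three
    systematically refined meshes with constant refinement ratio r."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, r, p):
    """Richardson-extrapolated estimate of the mesh-converged solution."""
    return f_fine + (f_fine - f_medium) / (r**p - 1)

# Manufactured data: f(h) = 1 + 0.5*h**2 on meshes h = 0.4, 0.2, 0.1 (r = 2),
# i.e. a genuinely second-order scheme whose exact value is 1.
f1, f2, f3 = 1 + 0.5 * 0.4**2, 1 + 0.5 * 0.2**2, 1 + 0.5 * 0.1**2
p = observed_order(f1, f2, f3, r=2.0)                 # -> 2.0
exact = richardson_extrapolate(f2, f3, r=2.0, p=p)    # -> 1.0
```

Agreement between the observed order p and the scheme's formal order is exactly the consistency check the abstract describes; a mismatch flags truncation-error or mesh-variation problems.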
A second-order iterative implicit-explicit hybrid scheme for hyperbolic systems of conservation laws
International Nuclear Information System (INIS)
Dai, Wenlong; Woodward, P.R.
1996-01-01
An iterative implicit-explicit hybrid scheme is proposed for hyperbolic systems of conservation laws. Each wave in a system may be treated implicitly, explicitly, or partially implicitly and partially explicitly, depending on its associated Courant number in each numerical cell, and the scheme is able to switch smoothly between implicit and explicit calculations. The scheme is of Godunov type in both the explicit and implicit regimes, is in strict conservation form, and is second-order accurate in both space and time for all Courant numbers. The computer code for the scheme is easy to vectorize. The multicolor approach proposed in this paper may reduce the number of iterations required to reach a converged solution by several orders of magnitude for a large time step. The features of the scheme are illustrated through numerical examples. 38 refs., 12 figs
Probability of undetected error after decoding for a concatenated coding scheme
Costello, D. J., Jr.; Lin, S.
1984-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, whereas the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after inner-code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the NASA telecommand system, is analyzed.
Eltaif, Tawfig; Shalaby, Hossam M. H.; Shaari, Sahbudin; Hamarsheh, Mohammad M. N.
2009-04-01
A successive interference cancellation scheme is applied to optical code-division multiple-access (OCDMA) systems with spectral amplitude coding (SAC). A detailed analysis of this system, with Hadamard codes used as signature sequences, is presented. The system can easily remove the effect of the strongest signal at each stage of the cancellation process. In addition, simulation of the proposed system is performed in order to validate the theoretical results. The system shows a small bit error rate at a large number of active users compared to the SAC OCDMA system. Our results reveal that the proposed system is efficient in eliminating the effect of multiple-user interference and in enhancing the overall performance.
Importance biasing scheme implemented in the PRIZMA code
International Nuclear Information System (INIS)
Kandiev, I.Z.; Malyshkin, G.N.
1997-01-01
The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has wide capabilities to describe geometry, sources, and material composition, and to obtain parameters specified by the user. It can calculate the paths of particle cascades (including neutrons, photons, electrons, positrons and heavy charged particles), taking possible transmutations into account. An importance biasing scheme was implemented to solve problems that require the calculation of functionals related to small probabilities (for example, problems of protection against radiation, detection problems, etc.). The scheme enables the trajectory-building algorithm to be adapted to the peculiarities of the problem.
Developing and modifying behavioral coding schemes in pediatric psychology: a practical guide.
Chorney, Jill MacLaren; McMurtry, C Meghan; Chambers, Christine T; Bakeman, Roger
2015-01-01
To provide a concise and practical guide to the development, modification, and use of behavioral coding schemes for observational data in pediatric psychology. This article provides a review of relevant literature and experience in developing and refining behavioral coding schemes. A step-by-step guide to developing and/or modifying behavioral coding schemes is provided. Major steps include refining a research question, developing or refining the coding manual, piloting and refining the coding manual, and implementing the coding scheme. Major tasks within each step are discussed, and pediatric psychology examples are provided throughout. Behavioral coding can be a complex and time-intensive process, but the approach is invaluable in allowing researchers to address clinically relevant research questions in ways that would not otherwise be possible. © The Author 2014. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Investigation of coupling scheme for neutronic and thermal-hydraulic codes
International Nuclear Information System (INIS)
Wang Guoli; Yu Jianfeng; Pen Muzhang; Zhang Yuman.
1988-01-01
Recently, a number of coupled neutronics/thermal-hydraulics codes have been used in reactor design and safety analysis, obtained by coupling previously separate neutronic and thermal-hydraulic codes. Different coupling schemes affect the computer time and the accuracy of the calculation results. Numerical experiments on several different coupling schemes and some heuristic results are described.
Directory of Open Access Journals (Sweden)
Zhilu Wu
2015-01-01
Interference alignment (IA) has been put forward as a promising technique which can mitigate interference and effectively increase the throughput of wireless sensor networks (WSNs). However, the number of users is strictly restricted by the IA feasibility condition, and the interference leakage will become so strong that the quality of service will degrade significantly when there are more users than IA can support. In this paper, a novel joint spatial-code clustered (JSCC-IA) scheme is proposed to solve this problem. In the proposed scheme, the users are clustered into several groups so that feasible IA can be achieved within each group. In addition, each group is assigned a pseudo noise (PN) code in order to suppress the inter-group interference via the code dimension. The analytical bit error rate (BER) expressions of the proposed JSCC-IA scheme are formulated for systems with identical and different propagation delays, respectively. To further improve the performance of the JSCC-IA scheme in asymmetric networks, a random grouping selection (RGS) algorithm is developed to search for better grouping combinations. Numerical results demonstrate that the proposed JSCC-IA scheme is capable of accommodating many more users to communicate simultaneously in the same frequency band with better performance.
Multiple Schemes for Mobile Payment Authentication Using QR Code and Visual Cryptography
Directory of Open Access Journals (Sweden)
Jianfeng Lu
2017-01-01
QR code (quick response code) is widely used due to its beneficial properties, especially in the mobile payment field. However, there exists an inevitable risk in the transaction process: it is not easily perceived when an attacker tampers with or replaces the QR code that contains the merchant's beneficiary account. Thus, it is of great urgency to authenticate QR codes. In this study, we propose a novel mechanism based on a visual cryptography scheme (VCS) and aesthetic QR codes, which contains three primary schemes for different concealment levels. The main steps of these schemes are as follows. Firstly, one original QR code is split into two shadows using multiple VCS rules; secondly, the two shadows are embedded into the same background image, and the embedded results are each fused with the same carrier QR code using an XOR mechanism together with the Reed-Solomon (RS) error-correction mechanism of QR codes. Finally, the two aesthetic QR codes can be stacked precisely and the original QR code restored according to the defined VCS. Experiments corresponding to the three proposed schemes are conducted and demonstrate the feasibility and security of the mobile payment authentication, the significant improvement of the concealment of the shadows in the QR code, and the diversity of mobile payment authentication.
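The XOR splitting-and-stacking step that such schemes rely on can be sketched in isolation. This is a generic XOR-based (2,2) secret sharing of a binary image, not the paper's full aesthetic-QR construction; the 8-bit `qr_bits` stand-in is invented for the example:

```python
import secrets

def split_shares(secret_bits):
    """Split a binary image (flat list of 0/1) into two random shares whose
    bitwise XOR reconstructs the original, as in an XOR-based (2,2) VCS.
    Each share alone is uniformly random and reveals nothing."""
    share1 = [secrets.randbelow(2) for _ in secret_bits]
    share2 = [s ^ b for s, b in zip(share1, secret_bits)]
    return share1, share2

def stack_shares(share1, share2):
    """'Stacking' under the XOR mechanism recovers the secret exactly."""
    return [a ^ b for a, b in zip(share1, share2)]

qr_bits = [1, 0, 1, 1, 0, 0, 1, 0]     # stand-in for QR code modules
s1, s2 = split_shares(qr_bits)
assert stack_shares(s1, s2) == qr_bits  # lossless recovery
```

The lossless XOR recovery is what lets the scheme lean on the QR error-correction budget only for embedding the shares aesthetically, not for reconstructing the secret.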
Content Analysis Coding Schemes for Online Asynchronous Discussion
Weltzer-Ward, Lisa
2011-01-01
Purpose: Researchers commonly utilize coding-based analysis of classroom asynchronous discussion contributions as part of studies of online learning and instruction. However, this analysis is inconsistent from study to study with over 50 coding schemes and procedures applied in the last eight years. The aim of this article is to provide a basis…
A new two-code keying scheme for SAC-OCDMA systems enabling bipolar encoding
Al-Khafaji, Hamza M. R.; Ngah, Razali; Aljunid, S. A.; Rahman, T. A.
2015-03-01
In this paper, we propose a new two-code keying scheme for enabling bipolar encoding in a high-rate spectral-amplitude coding optical code-division multiple-access (SAC-OCDMA) system. The mathematical formulations are derived for the signal-to-noise ratio and bit-error rate (BER) of SAC-OCDMA system based on the suggested scheme using multi-diagonal (MD) code. Performance analyses are assessed considering the effects of phase-induced intensity noise, as well as shot and thermal noises in photodetectors. The numerical results demonstrated that the proposed scheme exhibits an enhanced BER performance compared to the existing unipolar encoding with direct detection technique. Furthermore, the performance improvement afforded by this scheme is verified using simulation experiments.
Robust second-order scheme for multi-phase flow computations
Shahbazi, Khosro
2017-06-01
A robust high-order scheme for multi-phase flow computations featuring jumps and discontinuities due to shock waves and phase interfaces is presented. The scheme is based on high-order weighted essentially non-oscillatory (WENO) finite volume schemes and high-order limiters that ensure the maximum principle or positivity of the various field variables, including the density, pressure, and the order parameters identifying each phase. The two-phase flow model considered consists of the Euler equations of gas dynamics together with advection equations for the two parameters of the stiffened-gas equation of state characterizing each phase. The design of the high-order limiter is guided by the findings of Zhang and Shu (2011) [36], and is based on limiting the quadrature values of the density, pressure and order parameters reconstructed using a high-order WENO scheme. A proof of positivity preservation and accuracy is given, and the convergence and robustness of the scheme are illustrated using the smooth isentropic vortex problem with very small density and pressure. The effectiveness and robustness of the scheme in computing the challenging problem of shock-wave interaction with a cluster of tightly packed air or helium bubbles in a body of liquid water is also demonstrated, as is the superior performance of the high-order schemes over the first-order Lax-Friedrichs scheme for computations of shock-bubble interaction. The scheme is implemented in two-dimensional space on parallel computers using the message passing interface (MPI). The proposed scheme with the limiter requires approximately 50% more inter-processor message communications than the corresponding scheme without the limiter, but only 10% more total CPU time. The scheme is provably second-order accurate in regions requiring positivity enforcement and higher order in the rest of the domain.
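The Zhang-Shu style limiting of quadrature values that the abstract cites can be sketched for a single scalar field. This is a generic linear-scaling positivity limiter, not the paper's full multi-phase implementation, and the sample values are invented:

```python
import numpy as np

def positivity_limit(q_point, q_mean, eps=1e-13):
    """Scale the pointwise (quadrature) reconstructions in a cell toward the
    cell mean just enough that all values stay >= eps (Zhang-Shu style).
    The cell mean, and hence conservation, is preserved exactly."""
    q_min = q_point.min()
    if q_min >= eps:
        return q_point                       # reconstruction already positive
    theta = (q_mean - eps) / (q_mean - q_min)
    return q_mean + theta * (q_point - q_mean)

# Cell mean 0.5; the high-order reconstruction undershoots to -0.1
# at one quadrature point (e.g. near a material interface).
vals = np.array([-0.1, 0.4, 0.8, 0.9])
limited = positivity_limit(vals, q_mean=vals.mean())
assert limited.min() >= 0.0
assert abs(limited.mean() - vals.mean()) < 1e-12   # mean is preserved
```

Because the scaling is toward the (positive) cell mean, the limiter activates only where the reconstruction undershoots, which is why it costs little accuracy away from interfaces.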
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2008-09-15
We present two PMD compensation schemes suitable for use in multilevel (M >= 2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded OFDM and coherent detection. When used in combination with girth-10 LDPC codes, these schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10(-9), and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.
Locally decodable codes and private information retrieval schemes
Yekhanin, Sergey
2010-01-01
Locally decodable codes (LDCs) are codes that simultaneously provide efficient random access retrieval and high noise resilience by allowing reliable reconstruction of an arbitrary bit of a message by looking at only a small number of randomly chosen codeword bits. Local decodability comes with a certain loss in terms of efficiency - specifically, locally decodable codes require longer codeword lengths than their classical counterparts. Private information retrieval (PIR) schemes are cryptographic protocols designed to safeguard the privacy of database users. They allow clients to retrieve rec
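The classic example of the LDC trade-off described above is the 2-query Hadamard code: any single message bit is recoverable from just two codeword positions, at the cost of an exponential-length codeword. A minimal noiseless sketch:

```python
import secrets

def hadamard_encode(msg_bits):
    """Hadamard code: the codeword entry at position a (viewed as a bit
    vector) is the inner product <msg, a> mod 2, over all 2^k positions."""
    k = len(msg_bits)
    return [sum(m & ((a >> j) & 1) for j, m in enumerate(msg_bits)) % 2
            for a in range(1 << k)]

def ldc_decode_bit(codeword, k, i):
    """Recover message bit i with just TWO queries: positions a and a^e_i.
    By linearity, c[a] + c[a ^ e_i] = <m, e_i> = m_i (mod 2) for ANY a,
    so a can be chosen uniformly at random."""
    a = secrets.randbelow(1 << k)
    return (codeword[a] + codeword[a ^ (1 << i)]) % 2

msg = [1, 0, 1]
cw = hadamard_encode(msg)                    # 2^3 = 8 codeword bits
assert [ldc_decode_bit(cw, 3, i) for i in range(3)] == msg
```

With a corrupted codeword, the same two-query identity still recovers each bit with high probability as long as only a small fraction of positions are flipped, since both query points are individually uniform; this randomness is also the connection to PIR schemes.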
Validation of a RANS transition model using a high-order weighted compact nonlinear scheme
Tu, GuoHua; Deng, XiaoGang; Mao, MeiLiang
2013-04-01
A modified transition model is given based on the shear stress transport (SST) turbulence model and an intermittency transport equation. The energy gradient term in the original model is replaced by the flow strain rate to save computational cost. The model employs local variables only, so it can be conveniently implemented in modern computational fluid dynamics codes. The fifth-order weighted compact nonlinear scheme and the fourth-order staggered scheme are applied to discretize the governing equations in order to minimize discretization errors and thereby mitigate the confusion between numerical errors and transition-model errors. The high-order package is compared with a second-order TVD method on simulating the transitional flow over a flat plate. Numerical results indicate that the high-order package gives better grid-convergence properties than the second-order method. Validation of the transition model is performed for transitional flows ranging from low speed to hypersonic speed.
Rank-ordered multifractal analysis for intermittent fluctuations with global crossover behavior
International Nuclear Information System (INIS)
Tam, Sunny W. Y.; Chang, Tom; Kintner, Paul M.; Klatt, Eric M.
2010-01-01
The rank-ordered multifractal analysis (ROMA), a recently developed technique that combines the ideas of parametric rank ordering and one-parameter scaling of monofractals, has the capabilities of deciphering the multifractal characteristics of intermittent fluctuations. The method allows one to understand the multifractal properties through rank-ordered scaling or nonscaling parametric variables. The idea of the ROMA technique is applied to analyze the multifractal characteristics of the auroral zone electric-field fluctuations observed by the SIERRA sounding rocket. The observed fluctuations span across contiguous multiple regimes of scales with different multifractal characteristics. We extend the ROMA technique such that it can take into account the crossover behavior - with the possibility of collapsing probability distribution functions - over these contiguous regimes.
A Study on Architecture of Malicious Code Blocking Scheme with White List in Smartphone Environment
Lee, Kijeong; Tolentino, Randy S.; Park, Gil-Cheol; Kim, Yong-Tae
Recently, interest in and demand for mobile communications have grown rapidly because of the increasing prevalence of smartphones around the world. Feature phones have largely been replaced by smartphones, and with the explosive growth of Internet use on smartphones, including e-commerce and Internet banking transactions, the importance of protecting personal information has increased accordingly. Smartphone antivirus products have therefore been developed and launched to prevent malicious code or virus infection. In this paper, we propose a new scheme to protect smartphones from the malicious codes and malicious applications that are elements of the security threats in the mobile environment, and to prevent information leakage from malicious code infection. The proposed scheme is based on a white list of smartphone applications: only authorized applications may be installed, preventing the installation of malicious and untrusted mobile applications that could infect the smartphone's applications and programs.
Efficient coding schemes with power allocation using space-time-frequency spreading
Institute of Scientific and Technical Information of China (English)
Jiang Haining; Luo Hanwen; Tian Jifeng; Song Wentao; Liu Xingzhao
2006-01-01
An efficient space-time-frequency (STF) coding strategy for multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) systems is presented for high-bit-rate data transmission over frequency-selective fading channels. The proposed scheme is a new approach to space-time-frequency coded OFDM (COFDM) that combines OFDM with space-time coding, linear precoding, and adaptive power allocation to provide higher-quality transmission in terms of bit error rate performance and power efficiency. In addition to exploiting the maximum diversity gain in frequency, time and space, the proposed scheme enjoys large coding gains and low-complexity decoding. The significant performance improvement of our design is confirmed by corroborating numerical simulations.
Delay-Aware Program Codes Dissemination Scheme in Internet of Everything
Directory of Open Access Journals (Sweden)
Yixuan Xu
2016-01-01
Due to recent advancements in big data, connection technologies, and smart devices, our environment is transforming into an "Internet of Everything" (IoE) environment. These smart devices can obtain new or special functions by reprogramming: upgrading their software systems by receiving new versions of program codes. However, bulk code dissemination suffers from large delay, energy consumption, and number of retransmissions because of the unreliability of wireless links. In this paper, a delay-aware program dissemination (DAPD) scheme is proposed to disseminate program codes in a fast, reliable, and energy-efficient manner. We observe that although total energy is limited in a wireless sensor network, residual energy exists in nodes deployed far from the base station. Therefore, the DAPD scheme improves the performance of bulk code dissemination in the following two ways. (1) Because a high transmitting power can significantly improve the quality of wireless links, the transmitting power of sensors with more residual energy is increased to improve link quality. (2) Because the performance of correlated dissemination tends to degrade in a highly dynamic environment, link correlation is autonomously updated in DAPD during code dissemination to maintain the improvements brought by correlated dissemination. Theoretical analysis and experimental results show that, compared with previous work, the DAPD scheme improves dissemination performance in terms of completion time, transmission cost, and the efficiency of energy utilization.
A novel chaotic encryption scheme based on arithmetic coding
International Nuclear Information System (INIS)
Mi Bo; Liao Xiaofeng; Chen Yong
2008-01-01
In this paper, combining arithmetic coding and the logistic map, a novel chaotic encryption scheme is presented. The plaintexts are encrypted and compressed using an arithmetic coder whose mapping intervals are changed irregularly according to a keystream derived from a chaotic map and the plaintext. The performance and security of the scheme are also studied experimentally and theoretically in detail.
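The chaotic-keystream ingredient of such schemes can be sketched with a logistic map. This is a generic illustration only (the burn-in, parameters, and byte quantization are assumptions, not the paper's construction), and such a stream is not secure by itself; the paper's security comes from coupling it with the arithmetic coder's intervals and the plaintext:

```python
def logistic_keystream(x0, n, r=3.99, burn_in=100):
    """Generate an n-byte keystream from logistic-map iterates x -> r*x*(1-x).
    The key is the initial condition x0 (and the parameter r); sensitive
    dependence on x0 makes nearby keys produce diverging streams."""
    x = x0
    for _ in range(burn_in):            # discard transient, amplify sensitivity
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) & 0xFF)  # quantize each iterate to one byte
    return out

ks = logistic_keystream(0.3141592, 8)
# Same key (x0, r) -> same stream (decryption relies on this determinism);
# a tiny change in x0 yields an unrelated stream after the burn-in.
assert ks == logistic_keystream(0.3141592, 8)
```

In the paper's scheme the analogous stream does not XOR the data directly; it perturbs the arithmetic coder's interval mapping, so encryption and compression happen in one pass.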
An Efficient Code-Based Threshold Ring Signature Scheme with a Leader-Participant Model
Directory of Open Access Journals (Sweden)
Guomin Zhou
2017-01-01
Digital signature schemes with additional properties have broad applications, such as protecting the identity of signers by allowing a signer to anonymously sign a message within a group of signers (also known as a ring). While the number-theoretic problems underlying conventional schemes are still secure at the time of this research, the situation could change with advances in quantum computing. There is a pressing need to design PKC schemes that are secure against quantum attacks. In this paper, we propose a novel code-based threshold ring signature scheme with a leader-participant model. A leader is appointed, who chooses some shared parameters for the other signers to participate in the signing process. This leader-participant model enhances the performance because every participant, including the leader, can execute the decoding algorithm (as a part of the signing process) upon receiving the shared parameters from the leader. The time complexity of our scheme is close to that of Courtois et al.'s (2001) scheme. The latter is often used as a basis to construct other types of code-based signature schemes. Moreover, as a threshold ring signature scheme, our scheme is as efficient as a normal code-based ring signature.
An efficient chaotic source coding scheme with variable-length blocks
International Nuclear Information System (INIS)
Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong
2011-01-01
An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding.
Alvarez-Martinez, R.; Martinez-Mekler, G.; Cocho, G.
2011-01-01
The behavior of rank-ordered distributions of phenomena present in a variety of fields such as biology, sociology, linguistics, finance and geophysics has been a matter of intense research. Often power laws have been encountered; however, their validity tends to hold mainly for an intermediate range of rank values. In a recent publication (Martínez-Mekler et al., 2009 [7]), a generalization of the functional form of the beta distribution has been shown to give excellent fits for many systems of very diverse nature, valid for the whole range of rank values, regardless of whether or not a power law behavior has been previously suggested. Here we give some insight into the significance of the two free parameters which appear as exponents in the functional form, by looking into discrete probabilistic branching processes with conflicting dynamics. We analyze a variety of realizations of these so-called expansion-modification models first introduced by Wentian Li (1989) [10]. We focus our attention on an order-disorder transition we encounter as we vary the modification probability p. We characterize this transition by means of the fitting parameters. Our numerical studies show that one of the fitting exponents is related to the presence of long-range correlations exhibited by power spectrum scale invariance, while the other registers the effect of disordering elements leading to a breakdown of these properties. In the absence of long-range correlations, this parameter is sensitive to the occurrence of unlikely events. We also introduce an approximate calculation scheme that relates this dynamics to multinomial multiplicative processes. A better understanding through these models of the meaning of the generalized beta-fitting exponents may contribute to their potential for identifying and characterizing universality classes.
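The two-exponent functional form referred to above is the discrete generalized beta distribution (DGBD), f(r) = A(N+1-r)^b / r^a, where r is the rank and N the number of ranked items. A minimal sketch with hypothetical (not fitted) parameter values:

```python
def dgbd(r, A, a, b, N):
    """Discrete generalized beta distribution: value at rank r (1 <= r <= N).
    Exponent a governs the power-law-like behavior at low ranks (the head);
    exponent b shapes the downturn of the tail as r approaches N."""
    return A * (N + 1 - r) ** b / r ** a

N = 1000
# Hypothetical exponents for illustration. With b = 0 the form reduces to
# a pure power law A / r**a, i.e. Zipf-like behavior over all ranks.
values = [dgbd(r, A=1.0, a=0.9, b=0.35, N=N) for r in range(1, N + 1)]
assert values[0] > values[-1]      # monotone decreasing in rank for a, b > 0
```

In practice A, a and b are obtained by fitting log f(r) against log r and log(N+1-r), e.g. by linear least squares, and it is the fitted pair (a, b) that the abstract interprets dynamically.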
Ranking Based Locality Sensitive Hashing Enabled Cancelable Biometrics: Index-of-Max Hashing
Jin, Zhe; Lai, Yen-Lung; Hwang, Jung-Yeon; Kim, Soohyung; Teoh, Andrew Beng Jin
2017-01-01
In this paper, we propose a ranking-based locality sensitive hashing inspired two-factor cancelable biometrics, dubbed "Index-of-Max" (IoM) hashing, for biometric template protection. With externally generated random parameters, IoM hashing transforms a real-valued biometric feature vector into a discrete index (max-ranked) hashed code. We demonstrate two realizations of the IoM hashing notion, namely Gaussian Random Projection-based and Uniformly Random Permutation-based hashing schemes. The disc...
High-Order Hyperbolic Residual-Distribution Schemes on Arbitrary Triangular Grids
Mazaheri, Alireza; Nishikawa, Hiroaki
2015-01-01
In this paper, we construct high-order hyperbolic residual-distribution schemes for general advection-diffusion problems on arbitrary triangular grids. We demonstrate that the second-order accuracy of the hyperbolic schemes can be greatly improved by requiring the scheme to preserve exact quadratic solutions. We also show that the improved second-order scheme can be easily extended to third order by further requiring exactness for cubic solutions. We construct these schemes based on the LDA and the SUPG methodology formulated in the framework of the residual-distribution method. For both second- and third-order schemes, we construct a fully implicit solver using the exact residual Jacobian of the second-order scheme, and demonstrate rapid convergence of 10 to 15 iterations to reduce the residuals by 10 orders of magnitude. We also demonstrate that these schemes can be constructed based on a separate treatment of the advective and diffusive terms, which paves the way for the construction of hyperbolic residual-distribution schemes for the compressible Navier-Stokes equations. Numerical results show that these schemes produce exceptionally accurate and smooth solution gradients on highly skewed and anisotropic triangular grids, including curved-boundary problems, using linear elements. We also present a Fourier analysis performed on the constructed linear system and show that an under-relaxation parameter is needed for stabilization of Gauss-Seidel relaxation.
High Order Semi-Lagrangian Advection Scheme
Malaga, Carlos; Mandujano, Francisco; Becerra, Julian
2014-11-01
In most fluid phenomena, advection plays an important role. A numerical scheme capable of making quantitative predictions and simulations must compute correctly the advection terms appearing in the equations governing fluid flow. Here we present a high-order forward semi-Lagrangian numerical scheme specifically tailored to compute material derivatives. The scheme relies on the geometrical interpretation of material derivatives to compute the time evolution of fields on grids that deform with the material fluid domain, an interpolating procedure of arbitrary order that preserves the moments of the interpolated distributions, and a nonlinear mapping strategy to perform interpolations between undeformed and deformed grids. Additionally, a discontinuity criterion was implemented to deal with discontinuous fields and shocks. Tests of pure advection, shock formation and nonlinear phenomena are presented to show the performance and convergence of the scheme. The high computational cost is considerably reduced when implemented on the massively parallel architectures found in graphics cards. The authors acknowledge funding from Fondo Sectorial CONACYT-SENER Grant Number 42536 (DGAJ-SPI-34-170412-217).
Novel UEP LT Coding Scheme with Feedback Based on Different Degree Distributions
Directory of Open Access Journals (Sweden)
Li Ya-Fang
2016-01-01
Full Text Available Traditional unequal error protection (UEP) schemes have some limitations and problems, such as the poor UEP performance for high-priority data and the serious sacrifice of decoding performance for low-priority data. Based on the reasonable application of different degree distributions in LT codes, this paper puts forward a novel UEP LT coding scheme with a simple feedback to encode these data packets separately. Simulation results show that the proposed scheme can effectively protect high-priority data and improve the transmission efficiency of low-priority data from 2.9% to 22.3%. Furthermore, this novel scheme is well suited to multicast and broadcast environments, since only a simple feedback is introduced.
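As background for degree-distribution-based LT coding, the standard robust soliton distribution (the usual starting point that UEP variants modify) can be sketched as follows. The constants c and delta below are illustrative choices, and the paper's UEP-specific distributions are not reproduced here.

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution for LT codes over k input symbols.
    Returns a list p where p[d-1] is the probability of choosing degree d.
    It combines the ideal soliton rho with the spike/boost term tau."""
    S = c * math.log(k / delta) * math.sqrt(k)
    pivot = int(round(k / S))
    # Ideal soliton: rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d >= 2.
    rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # Boost term tau, with a spike at d = k/S to aid decoder start-up.
    tau = [0.0] * k
    for d in range(1, pivot):
        tau[d - 1] = S / (k * d)
    if 1 <= pivot <= k:
        tau[pivot - 1] = S * math.log(S / delta) / k
    Z = sum(r + t for r, t in zip(rho, tau))  # normalization constant
    return [(r + t) / Z for r, t in zip(rho, tau)]

p = robust_soliton(1000)
```

The distribution is dominated by low degrees (degree 2 in particular), which keeps the per-symbol encoding cost low; a UEP scheme such as the one proposed would use different distributions, or different sampling weights, for high- and low-priority packets.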
Evaluation of three coding schemes designed for improved data communication
Snelsire, R. W.
1974-01-01
Three coding schemes designed for improved data communication are evaluated. Four block codes are evaluated relative to a quality function, which is a function of both the amount of data rejected and the error rate. The Viterbi maximum likelihood decoding algorithm as a decoding procedure is reviewed. This evaluation is obtained by simulating the system on a digital computer. Short constraint length rate 1/2 quick-look codes are studied, and their performance is compared to general nonsystematic codes.
Qin, Yi; Wang, Hongjuan; Wang, Zhipeng; Gong, Qiong; Wang, Danchen
2016-09-01
In optical interference-based encryption (IBE) scheme, the currently available methods have to employ the iterative algorithms in order to encrypt two images and retrieve cross-talk free decrypted images. In this paper, we shall show that this goal can be achieved via an analytical process if one of the two images is QR code. For decryption, the QR code is decrypted in the conventional architecture and the decryption has a noisy appearance. Nevertheless, the robustness of QR code against noise enables the accurate acquisition of its content from the noisy retrieval, as a result of which the primary QR code can be exactly regenerated. Thereafter, a novel optical architecture is proposed to recover the grayscale image by aid of the QR code. In addition, the proposal has totally eliminated the silhouette problem existing in the previous IBE schemes, and its effectiveness and feasibility have been demonstrated by numerical simulations.
Gilmore-Bykovskyi, Andrea L
2015-01-01
Mealtime behavioral symptoms are distressing and frequently interrupt eating for the individual experiencing them and others in the environment. A computer-assisted coding scheme was developed to measure caregiver person-centeredness and behavioral symptoms for nursing home residents with dementia during mealtime interactions. The purpose of this pilot study was to determine the feasibility, ease of use, and inter-observer reliability of the coding scheme, and to explore the clinical utility of the coding scheme. Trained observers coded 22 observations. Data collection procedures were acceptable to participants. Overall, the coding scheme proved to be feasible, easy to execute and yielded good to very good inter-observer agreement following observer re-training. The coding scheme captured clinically relevant, modifiable antecedents to mealtime behavioral symptoms, but would be enhanced by the inclusion of measures for resident engagement and consolidation of items for measuring caregiver person-centeredness that co-occurred and were difficult for observers to distinguish. Published by Elsevier Inc.
Directory of Open Access Journals (Sweden)
H. Prashantha Kumar
2011-09-01
Full Text Available Low density parity check (LDPC) codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close to the theoretical Shannon limit for a memoryless channel. LDPC codes are finding increasing use in applications like LTE networks, digital television, high-density data storage systems, deep-space communication systems, etc. Several algebraic and combinatorial methods are available for constructing LDPC codes. In this paper we discuss a novel low-complexity algebraic method for constructing regular LDPC-like codes derived from full-rank codes. We demonstrate that by employing these codes over AWGN channels, coding gains in excess of 2 dB over uncoded systems can be realized when soft iterative decoding using a parity check tree is employed.
Coded-subcarrier-aided chromatic dispersion monitoring scheme for flexible optical OFDM networks.
Tse, Kam-Hon; Chan, Chun-Kit
2014-08-11
A simple coded-subcarrier aided scheme is proposed to perform chromatic dispersion monitoring in flexible optical OFDM networks. A pair of coded label subcarriers is added to both edges of the optical OFDM signal spectrum at the edge transmitter node. Upon reception at any intermediate or the receiver node, chromatic dispersion estimation is performed, via simple direct detection, followed by electronic correlation procedures with the designated code sequences. The feasibility and the performance of the proposed scheme have been experimentally characterized. It provides a cost-effective monitoring solution for the optical OFDM signals across intermediate nodes in flexible OFDM networks.
A flexible coupling scheme for Monte Carlo and thermal-hydraulics codes
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J. Eduard, E-mail: J.E.Hoogenboom@tudelft.nl [Delft University of Technology (Netherlands); Ivanov, Aleksandar; Sanchez, Victor, E-mail: Aleksandar.Ivanov@kit.edu, E-mail: Victor.Sanchez@kit.edu [Karlsruhe Institute of Technology, Institute of Neutron Physics and Reactor Technology, Eggenstein-Leopoldshafen (Germany); Diop, Cheikh, E-mail: Cheikh.Diop@cea.fr [CEA/DEN/DANS/DM2S/SERMA, Commissariat a l' Energie Atomique, Gif-sur-Yvette (France)
2011-07-01
A coupling scheme between a Monte Carlo code and a thermal-hydraulics code is being developed within the European NURISP project for comprehensive and validated reactor analysis. The scheme is flexible as it allows different Monte Carlo codes and different thermal-hydraulics codes to be used. At present the MCNP and TRIPOLI4 Monte Carlo codes can be used and the FLICA4 and SubChanFlow thermal-hydraulics codes. For all these codes only an original executable is necessary. A Python script drives the iterations between Monte Carlo and thermal-hydraulics calculations. It also calls a conversion program to merge a master input file for the Monte Carlo code with the appropriate temperature and coolant density data from the thermal-hydraulics calculation. Likewise it calls another conversion program to merge a master input file for the thermal-hydraulics code with the power distribution data from the Monte Carlo calculation. Special attention is given to the neutron cross section data for the various required temperatures in the Monte Carlo calculation. Results are shown for an infinite lattice of PWR fuel pin cells and a 3 x 3 fuel BWR pin cell cluster. Various possibilities for further improvement and optimization of the coupling system are discussed. (author)
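The fixed-point iteration that the Python driver script performs can be sketched with stand-in solvers. The toy physics below is hypothetical and exists only to exercise the loop; the real scheme calls unmodified Monte Carlo and thermal-hydraulics executables plus the two conversion programs described in the abstract.

```python
def mc_power(temps):
    """Stand-in for a Monte Carlo run: toy model in which power
    decreases linearly as local temperature rises (hypothetical)."""
    return [2.0 - 0.002 * t for t in temps]

def th_temperatures(power):
    """Stand-in for a thermal-hydraulics run: toy model in which
    temperature follows the local power (hypothetical)."""
    return [300.0 + 100.0 * p for p in power]

def couple(n_cells=3, tol=1e-8, max_iter=100):
    """Picard (fixed-point) iteration between the two solvers, as the
    Python driver does with real codes: run MC with current temperatures,
    feed the power distribution to T-H, repeat until the fields settle."""
    temps = [300.0] * n_cells
    for _ in range(max_iter):
        power = mc_power(temps)          # would be: merge input, run MC exe
        new_temps = th_temperatures(power)  # would be: merge input, run T-H exe
        if max(abs(a - b) for a, b in zip(new_temps, temps)) < tol:
            break  # fields converged between successive iterations
        temps = new_temps
    return power, temps

power, temps = couple()
```

In the real coupling, the two assignments inside the loop are replaced by calls to the conversion programs (merging temperature/density data into the Monte Carlo master input, and power data into the thermal-hydraulics master input) followed by the respective executables.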
Examination Malpractice in Nigeria: Rank-ordering the Types ...
African Journals Online (AJOL)
Although 'giraffing' and carrying of prepared materials into the examination hall were the most common forms of examination malpractice, bribery (ranked 4.5) was the anchor. Students, peer group and parents were the worst malpractitioners in a decreasing order of culpability. Overvaluing of certificates and teachers' ...
Study on a new meteorological sampling scheme developed for the OSCAAR code system
International Nuclear Information System (INIS)
Liu Xinhe; Tomita, Kenichi; Homma, Toshimitsu
2002-03-01
One important step in Level-3 Probabilistic Safety Assessment is meteorological sequence sampling, on which the previous studies were mainly related to code systems using the straight-line plume model; more effort is needed for those using the trajectory puff model, such as the OSCAAR code system. This report describes the development of a new meteorological sampling scheme for the OSCAAR code system that explicitly considers population distribution. The group of principles set for the development of this new sampling scheme includes completeness, appropriate stratification, optimum allocation, practicability and so on. In this report, discussions are made about the procedures of the new sampling scheme and its application. The calculation results illustrate that, although it is quite difficult to optimize the stratification of meteorological sequences based on a few environmental parameters, the new scheme does gather the most adverse conditions in a single subset of meteorological sequences. The size of this subset may be as small as a few dozen, so that the tail of a complementary cumulative distribution function can remain relatively static in different trials of the probabilistic consequence assessment code. (author)
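The stratify-then-allocate idea behind such sampling schemes can be sketched generically. The stratum function and the fixed per-stratum allocation below are hypothetical simplifications; OSCAAR's actual stratification criteria and optimum-allocation rule are not reproduced here.

```python
import random

def stratified_sample(sequences, stratum_of, n_per_stratum, seed=0):
    """Group items (e.g., meteorological sequences) into strata and draw a
    fixed number from each, so that rare adverse-weather strata are always
    represented in the sample rather than lost to random chance."""
    rng = random.Random(seed)  # fixed seed for reproducible trials
    strata = {}
    for s in sequences:
        strata.setdefault(stratum_of(s), []).append(s)
    sample = []
    for _, members in sorted(strata.items()):
        k = min(n_per_stratum, len(members))
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical example: 100 sequences, stratified by a toy 4-class key.
sequences = list(range(100))
chosen = stratified_sample(sequences, lambda s: s % 4, n_per_stratum=2)
```

A real scheme would replace the toy key with environmental parameters (stability class, rain occurrence, wind direction toward populated sectors) and use optimum allocation rather than a constant count per stratum.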
Development and feasibility testing of the Pediatric Emergency Discharge Interaction Coding Scheme.
Curran, Janet A; Taylor, Alexandra; Chorney, Jill; Porter, Stephen; Murphy, Andrea; MacPhee, Shannon; Bishop, Andrea; Haworth, Rebecca
2017-08-01
Discharge communication is an important aspect of high-quality emergency care. This study addresses the gap in knowledge on how to describe discharge communication in a paediatric emergency department (ED). The objective of this feasibility study was to develop and test a coding scheme to characterize discharge communication between health-care providers (HCPs) and caregivers who visit the ED with their children. The Pediatric Emergency Discharge Interaction Coding Scheme (PEDICS) and coding manual were developed following a review of the literature and an iterative refinement process involving HCP observations, inter-rater assessments and team consensus. The coding scheme was pilot-tested through observations of HCPs across a range of shifts in one urban paediatric ED. Overall, 329 patient observations were carried out across 50 observational shifts. Inter-rater reliability was evaluated in 16% of the observations. The final version of the PEDICS contained 41 communication elements. Kappa scores were greater than 0.60 for the majority of communication elements. The most frequently observed communication elements were under the Introduction node and the least frequently observed were under the Social Concerns node. HCPs initiated the majority of the communication. The Pediatric Emergency Discharge Interaction Coding Scheme addresses an important gap in the discharge communication literature. The tool is useful for mapping patterns of discharge communication between HCPs and caregivers. Results from our pilot test identified deficits in specific areas of discharge communication that could impact adherence to discharge instructions. The PEDICS would benefit from further testing with a different sample of HCPs. © 2017 The Authors. Health Expectations Published by John Wiley & Sons Ltd.
Yen, Chih-Ta; Huang, Jen-Fa; Chang, Yao-Tang; Chen, Bo-Hau
2010-12-01
We present an experiment demonstrating the spectral-polarization coding optical code-division multiple-access system operated under nonideal state of polarization (SOP) matching conditions. In the proposed system, the encoding and double balanced-detection processes are implemented using a polarization-diversity scheme. Because of the quasi-orthogonality of Hadamard codes combined with array waveguide grating routers and a polarization beam splitter, the proposed codec pair can encode-decode multiple code words of Hadamard code while retaining the ability for multiple-access interference cancellation. The experimental results demonstrate that when the system is maintained with an orthogonal SOP for each user, an effective reduction in the phase-induced intensity noise is obtained. The analytical SNR values are found to overstate the experimental results by around 2 dB when the received effective power is large. This is mainly limited by insertion losses of components and a nonflattened optical light source. Furthermore, the matching conditions can be improved by decreasing nonideal influences.
Implicit and semi-implicit schemes in the Versatile Advection Code : numerical tests
Tóth, G.; Keppens, R.; Bochev, Mikhail A.
1998-01-01
We describe and evaluate various implicit and semi-implicit time integration schemes applied to the numerical simulation of hydrodynamical and magnetohydrodynamical problems. The schemes were implemented recently in the software package Versatile Advection Code, which uses modern shock capturing
Fair ranking of researchers and research teams.
Vavryčuk, Václav
2018-01-01
The main drawback of ranking researchers by the number of papers, citations or by the Hirsch index is that it ignores the problem of distributing authorship among authors in multi-author publications. So far, single-author and multi-author publications have contributed to the publication record of a researcher equally. This full counting scheme is apparently unfair and causes unjust disproportions, in particular if the ranked researchers have distinctly different collaboration profiles. These disproportions are removed by the less common fractional or authorship-weighted counting schemes, which can distribute the authorship credit more properly and suppress a tendency toward unjustified inflation of co-authors. The urgent need to widely adopt a fair ranking scheme in practice is exemplified by analysing the citation profiles of several highly cited astronomers and astrophysicists. While the full counting scheme often leads to completely incorrect and misleading ranking, the fractional or authorship-weighted schemes are more accurate and applicable to the ranking of researchers as well as research teams. In addition, they suppress differences in ranking among scientific disciplines. These more appropriate schemes should urgently be adopted by scientific publication databases such as the Web of Science (Thomson Reuters) or Scopus (Elsevier).
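The difference between full and fractional counting can be made concrete with a small sketch. The author names and citation counts below are illustrative, not taken from the paper's data.

```python
from collections import defaultdict

def rank_scores(papers, scheme="full"):
    """Credit citations to authors under two counting schemes.
    papers: list of (author_list, n_citations).
    'full' gives every co-author the full citation count;
    anything else uses fractional counting (count / number of authors)."""
    score = defaultdict(float)
    for authors, cites in papers:
        credit = cites if scheme == "full" else cites / len(authors)
        for a in authors:
            score[a] += credit
    return dict(score)

# Hypothetical records: one single-author paper, one four-author paper.
papers = [
    (["A"], 10),                  # A works alone
    (["B", "C", "D", "E"], 40),   # B shares authorship with three others
]
full = rank_scores(papers, "full")
frac = rank_scores(papers, "fractional")
```

Under full counting B outranks A (40 vs. 10) despite contributing a quarter of one paper's authorship; under fractional counting both receive 10, illustrating how the scheme removes the disproportion the abstract describes.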
Angel, Jordan B.; Banks, Jeffrey W.; Henshaw, William D.
2018-01-01
High-order accurate upwind approximations for the wave equation in second-order form on overlapping grids are developed. Although upwind schemes are well established for first-order hyperbolic systems, it was only recently shown by Banks and Henshaw [1] how upwinding could be incorporated into the second-order form of the wave equation. This new upwind approach is extended here to solve the time-domain Maxwell's equations in second-order form; schemes of arbitrary order of accuracy are formulated for general curvilinear grids. Taylor time-stepping is used to develop single-step space-time schemes, and the upwind dissipation is incorporated by embedding the exact solution of a local Riemann problem into the discretization. Second-order and fourth-order accurate schemes are implemented for problems in two and three space dimensions, and overlapping grids are used to treat complex geometry and problems with multiple materials. Stability analysis of the upwind-scheme on overlapping grids is performed using normal mode theory. The stability analysis and computations confirm that the upwind scheme remains stable on overlapping grids, including the difficult case of thin boundary grids when the traditional non-dissipative scheme becomes unstable. The accuracy properties of the scheme are carefully evaluated on a series of classical scattering problems for both perfect conductors and dielectric materials in two and three space dimensions. The upwind scheme is shown to be robust and provide high-order accuracy.
Third Order Reconstruction of the KP Scheme for Model of River Tinnelva
Directory of Open Access Journals (Sweden)
Susantha Dissanayake
2017-01-01
Full Text Available The Saint-Venant equation/Shallow Water Equation is used to simulate the flow of rivers, the flow of liquid in an open channel, tsunamis, etc. The Kurganov-Petrova (KP) scheme, which was developed based on the local speed of discontinuity propagation, can be used to solve hyperbolic-type partial differential equations (PDEs), hence it can be used to solve the Saint-Venant equation. The KP scheme is semi-discrete: PDEs are discretized in the spatial domain, resulting in a set of Ordinary Differential Equations (ODEs). In this study, the common 2nd-order KP scheme is extended into a 3rd-order scheme while following the Weighted Essentially Non-Oscillatory (WENO) and Central WENO (CWENO) reconstruction steps. Both the 2nd-order and 3rd-order schemes have been used in simulation in order to check the suitability of the KP schemes for solving hyperbolic-type PDEs. The simulation results indicated that the 3rd-order KP scheme shows better stability compared to the 2nd-order scheme. Computational time for the 3rd-order KP scheme with variable-step-length ODE solvers in MATLAB is less than the computational time of the 2nd-order KP scheme. In addition, it was confirmed that the order of the time integrators should essentially be lower than the order of the spatial discretization. However, for the computation of abrupt step changes, the 2nd-order KP scheme shows a more accurate solution.
Airfoil noise computation use high-order schemes
DEFF Research Database (Denmark)
Zhu, Wei Jun; Shen, Wen Zhong; Sørensen, Jens Nørkær
2007-01-01
High-order finite difference schemes with at least 4th-order spatial accuracy are used to simulate aerodynamically generated noise. The aeroacoustic solver with 4th-order up to 8th-order accuracy is implemented into the in-house flow solver, EllipSys2D/3D. Dispersion-Relation-Preserving (DRP) fin...
JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding
Directory of Open Access Journals (Sweden)
Thomas André
2007-03-01
Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.
Lisman, John
2005-01-01
In the hippocampus, oscillations in the theta and gamma frequency range occur together and interact in several ways, indicating that they are part of a common functional system. It is argued that these oscillations form a coding scheme that is used in the hippocampus to organize the readout from long-term memory of the discrete sequence of upcoming places, as cued by current position. This readout of place cells has been analyzed in several ways. First, plots of the theta phase of spikes vs. position on a track show a systematic progression of phase as rats run through a place field. This is termed the phase precession. Second, two cells with nearby place fields have a systematic difference in phase, as indicated by a cross-correlation having a peak with a temporal offset that is a significant fraction of a theta cycle. Third, several different decoding algorithms demonstrate the information content of theta phase in predicting the animal's position. It appears that small phase differences corresponding to jitter within a gamma cycle do not carry information. This evidence, together with the finding that principal cells fire preferentially at a given gamma phase, supports the concept of theta/gamma coding: a given place is encoded by the spatial pattern of neurons that fire in a given gamma cycle (the exact timing within a gamma cycle being unimportant); sequential places are encoded in sequential gamma subcycles of the theta cycle (i.e., with different discrete theta phase). It appears that this general form of coding is not restricted to readout of information from long-term memory in the hippocampus because similar patterns of theta/gamma oscillations have been observed in multiple brain regions, including regions involved in working memory and sensory integration. It is suggested that dual oscillations serve a general function: the encoding of multiple units of information (items) in a way that preserves their serial order. The relationship of such coding to
An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks
Yu, Shidi; Liu, Xiao; Liu, Anfeng; Xiong, Naixue; Cai, Zhiping; Wang, Tian
2018-05-10
Due to the Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can take on new functions after their program codes are updated. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed in the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, generating more optimized performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of network from the source in fewer hops, decreasing the number of broadcasts and at the same time, delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption of some transmitting nodes, but radius enlarging is only conducted in areas with an energy surplus, and energy consumption in the hot-spots can be reduced instead due to some nodes transmitting data directly to sink without forwarding by nodes in the original hot-spot, thus energy consumption can almost reach a balance and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which doesn’t affect the network lifetime, to nodes having different distance to the code source, then provides an algorithm to construct a broadcast backbone. In the end, a comprehensive performance analysis and simulation result shows that the proposed
Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.
Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting
2012-09-01
In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploring the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. With only a small increase in computation, experimental results confirm that the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.
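For readers unfamiliar with scan orders, the baseline that such mode-dependent scans replace is the classic zig-zag scan. The sketch below implements that generic scan only; the paper's custom per-template orders are not reproduced, and the block values are illustrative.

```python
def zigzag_order(n):
    """Classic zig-zag scan order for an n x n block of coefficients.
    Coefficients are visited along anti-diagonals s = r + c, alternating
    direction: odd diagonals run top-to-bottom, even ones bottom-to-top."""
    coords = [(r, c) for r in range(n) for c in range(n)]
    return sorted(coords, key=lambda rc: (rc[0] + rc[1],
                                          rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

# Illustrative residual block: energy concentrated at low frequencies
# (top-left), as intra-prediction residuals typically are after transform.
block = [[9, 8, 1, 0],
         [7, 2, 0, 0],
         [3, 0, 0, 0],
         [0, 0, 0, 0]]

# Scanning groups the zeros into one trailing run, which is what makes
# the coefficient sequence friendly to run-length/entropy coding.
scan = [block[r][c] for r, c in zigzag_order(4)]
```

A mode-dependent scan applies the same idea but reorders coordinates according to where each intra prediction mode statistically leaves its nonzero residual energy, lengthening the trailing zero run further.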
DEFF Research Database (Denmark)
Ryttov, Thomas A.; Shrock, Robert
2017-01-01
, adjoint, and symmetric rank-2 tensor representation are considered. We present scheme-independent calculations of the anomalous dimension $\\gamma_{\\bar\\psi\\psi,IR}$ to $O(\\Delta_f^4)$ and $\\beta'_{IR}$ to $O(\\Delta_f^5)$ at this IRFP, where $\\Delta_f$ is an $N_f$-dependent expansion parameter. Comparisons...... are made with conventional $n$-loop calculations and lattice measurements. As a test of the accuracy of the $\\Delta_f$ expansion, we calculate $\\gamma_{\\bar\\psi\\psi,IR}$ to $O(\\Delta_f^3)$ in ${\\cal N}=1$ SU($N_c$) supersymmetric quantum chromodynamics and find complete agreement, to this order...
Order functions and evaluation codes
DEFF Research Database (Denmark)
Høholdt, Tom; Pellikaan, Ruud; van Lint, Jack
1997-01-01
Based on the notion of an order function we construct and determine the parameters of a class of error-correcting evaluation codes. This class includes the one-point algebraic geometry codes as well as the generalized Reed-Muller codes, and the parameters are determined without using the heavy machinery of algebraic geometry.
Application of third order stochastic dominance algorithm in investments ranking
Directory of Open Access Journals (Sweden)
Lončar Sanja
2012-01-01
The paper presents the use of third-order stochastic dominance in ranking investment alternatives, using TSD algorithms (Levy, 2006) for testing third-order stochastic dominance. The main goal of using the TSD rule is minimization of the efficient investment set for an investor with risk aversion, who prefers more money and likes positive skewness.
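The TSD rule discussed above can be illustrated numerically: A third-order stochastically dominates B when the twice-integrated CDF of A lies below that of B everywhere and A's mean is at least B's. A minimal sketch (not the Levy (2006) algorithm itself; the grid, densities, and tolerance are illustrative assumptions):

```python
def cumsum(xs, dx):
    # running integral of a sampled function via left Riemann sums
    total, out = 0.0, []
    for x in xs:
        total += x * dx
        out.append(total)
    return out

def tsd_dominates(pdf_a, pdf_b, dx):
    """Return True if A third-order stochastically dominates B.

    pdf_a, pdf_b: probability densities sampled on a common grid.
    TSD requires H3_A(x) <= H3_B(x) for all x, where H3 is the CDF
    integrated twice, plus E[A] >= E[B].
    """
    F_a, F_b = cumsum(pdf_a, dx), cumsum(pdf_b, dx)    # CDFs
    H2_a, H2_b = cumsum(F_a, dx), cumsum(F_b, dx)      # second-order integrals
    H3_a, H3_b = cumsum(H2_a, dx), cumsum(H2_b, dx)    # third-order integrals
    mean_a = sum(i * dx * p * dx for i, p in enumerate(pdf_a))
    mean_b = sum(i * dx * p * dx for i, p in enumerate(pdf_b))
    return all(a <= b + 1e-12 for a, b in zip(H3_a, H3_b)) and mean_a >= mean_b
```

For instance, a distribution shifted entirely to the right of another dominates it at every order, including the third.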
Password Authentication Based on Fractal Coding Scheme
Directory of Open Access Journals (Sweden)
Nadia M. G. Al-Saidi
2012-01-01
Password authentication is a mechanism used to authenticate user identity over an insecure communication channel. In this paper, a new method to improve the security of password authentication is proposed. It is based on the compression capability of fractal image coding, providing an authorized user secure access to the registration and login process. In the proposed scheme, a hashed password string is generated and encrypted to be captured together with the user identity using text-to-image mechanisms. The advantage of fractal image coding is that the compressed image data can be sent securely through a non-secured communication channel to the server. The verification of client information against the database system is performed at the server to authenticate the legal user. The encrypted hashed password in the decoded fractal image is recognized using optical character recognition. The authentication process is performed after a successful verification of the client identity by comparing the decrypted hashed password with the one stored in the database system. The system is analyzed and discussed from the attacker's viewpoint. A security comparison is performed to show that the proposed scheme provides essential security requirements, while its efficiency makes it easy to apply alone or in hybrid with other security methods. Computer simulation and statistical analysis are presented.
Ranking structures and rank-rank correlations of countries: The FIFA and UEFA cases
Ausloos, Marcel; Cloots, Rudi; Gadomski, Adam; Vitanov, Nikolay K.
2014-04-01
Ranking of agents competing with each other in complex systems may lead to paradoxes, depending on the chosen measures. A discussion is presented of such rank-rank correlations, similar or not, based on the case of European countries ranked by UEFA and FIFA from different soccer competitions. The first question to be answered is whether an empirical and simple law is obtained for such (self-)organizations of complex sociological systems under such different measuring schemes. It is found that the power-law form is not the best description, contrary to many modern expectations; the stretched exponential is much more adequate. Moreover, it is found that the measuring rules lead to some inner structures in both cases.
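Rank-rank correlations like those studied here are commonly quantified with Kendall's tau; a self-contained sketch (the rankings below are made-up toy data, not FIFA/UEFA values, and ties are ignored, i.e. this is the simple tau-a variant):

```python
def kendall_tau(rank_x, rank_y):
    """Kendall rank correlation between two rankings of the same items.

    rank_x[i], rank_y[i]: ranks assigned to item i by the two schemes.
    Returns +1 for identical orderings, -1 for fully reversed ones.
    """
    n = len(rank_x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (rank_x[i] - rank_x[j]) * (rank_y[i] - rank_y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

Two schemes ranking the same countries can then be compared pair by pair, which is exactly the kind of rank-rank comparison the abstract describes.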
Analysis of a fourth-order compact scheme for convection-diffusion
International Nuclear Information System (INIS)
Yavneh, I.
1997-01-01
In 1984, Gupta et al. introduced a compact fourth-order finite-difference convection-diffusion operator with some very favorable properties. In particular, this scheme does not seem to suffer excessively from spurious oscillatory behavior, and it converges with standard methods such as Gauss-Seidel or SOR (hence, multigrid) regardless of the diffusion. This scheme has been rederived, developed (including some variations), and applied to both convection-diffusion and Navier-Stokes equations by several authors. Accurate solutions to high Reynolds-number flow problems at relatively coarse resolutions have been reported. These solutions were often compared to those obtained by lower-order discretizations, such as second-order central differences and first-order upstream discretizations. The latter, it was stated, achieved far less accurate results due to the artificial viscosity, which the compact scheme does not include. We show here that, while the compact scheme indeed does not suffer from a cross-stream artificial viscosity (as does the first-order upstream scheme when the characteristic direction is not aligned with the grid), it does include a streamwise artificial viscosity that is inversely proportional to the natural viscosity. This term is not always benign. 7 refs., 1 fig., 1 tab
SHARP: A Spatially Higher-order, Relativistic Particle-in-cell Code
Energy Technology Data Exchange (ETDEWEB)
Shalaby, Mohamad; Broderick, Avery E. [Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1 (Canada); Chang, Philip [Department of Physics, University of Wisconsin-Milwaukee, 1900 E. Kenwood Boulevard, Milwaukee, WI 53211 (United States); Pfrommer, Christoph [Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, D-14482 Potsdam (Germany); Lamberts, Astrid [Theoretical Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States); Puchwein, Ewald, E-mail: mshalaby@live.ca [Institute of Astronomy and Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge, CB3 0HA (United Kingdom)
2017-05-20
Numerical heating in particle-in-cell (PIC) codes currently precludes the accurate simulation of cold, relativistic plasma over long periods, severely limiting their applications in astrophysical environments. We present a spatially higher-order accurate relativistic PIC algorithm in one spatial dimension, which conserves charge and momentum exactly. We utilize the smoothness implied by the usage of higher-order interpolation functions to achieve a spatially higher-order accurate algorithm (up to the fifth order). We validate our algorithm against several test problems—thermal stability of stationary plasma, stability of linear plasma waves, and two-stream instability in the relativistic and non-relativistic regimes. Comparing our simulations to exact solutions of the dispersion relations, we demonstrate that SHARP can quantitatively reproduce important kinetic features of the linear regime. Our simulations have a superior ability to control energy non-conservation and avoid numerical heating in comparison to common second-order schemes. We provide a natural definition for convergence of a general PIC algorithm: the complement of physical modes captured by the simulation, i.e., those that lie above the Poisson noise, must grow commensurately with the resolution. This implies that it is necessary to simultaneously increase the number of particles per cell and decrease the cell size. We demonstrate that traditional ways for testing for convergence fail, leading to plateauing of the energy error. This new PIC code enables us to faithfully study the long-term evolution of plasma problems that require absolute control of the energy and momentum conservation.
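The higher-order interpolation functions mentioned above can be illustrated with the quadratic, triangular-shaped-cloud (TSC) particle shape used in many PIC codes; this is a generic illustration, not SHARP's actual implementation:

```python
def tsc_weights(x_over_dx):
    """Quadratic (TSC) particle-to-grid weights for the three grid points
    nearest a particle at position x (in units of the cell size dx).

    Returns the index of the central grid point and the three weights,
    which sum to exactly 1, so depositing charge with them conserves
    total charge; smoother shapes like this reduce grid noise compared
    with linear (CIC) weighting.
    """
    i = round(x_over_dx)             # nearest grid point
    d = x_over_dx - i                # offset in [-0.5, 0.5]
    w = (0.5 * (0.5 - d) ** 2,       # left neighbour
         0.75 - d * d,               # centre
         0.5 * (0.5 + d) ** 2)       # right neighbour
    return i, w
```

The algebraic identity 0.5(0.5-d)^2 + (0.75-d^2) + 0.5(0.5+d)^2 = 1 holds for every offset d, which is the exact-deposition property a charge-conserving scheme relies on.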
Barendregt, W.; Bekker, M.M.
2006-01-01
This article describes the development and assessment of a coding scheme for finding both usability and fun problems through observations of young children playing computer games during user tests. The proposed coding scheme is based on an existing list of breakdown indication types of the detailed
Rank-Ordered Multifractal Analysis (ROMA) of probability distributions in fluid turbulence
Directory of Open Access Journals (Sweden)
C. C. Wu
2011-04-01
Rank-Ordered Multifractal Analysis (ROMA) was introduced by Chang and Wu (2008) to describe the multifractal characteristics of intermittent events. The procedure provides a natural connection between the rank-ordered spectrum and the idea of one-parameter scaling for monofractals. This technique has been applied successfully to MHD turbulence simulations and to turbulence data observed in various space plasmas. In this paper, the technique is applied to the probability distributions in the inertial range of turbulent fluid flow, as given in the vast Johns Hopkins University (JHU) turbulence database. In addition, a new way of finding the continuous ROMA spectrum and the scaled probability distribution function (PDF) simultaneously is introduced.
An Adaptive Motion Estimation Scheme for Video Coding
Directory of Open Access Journals (Sweden)
Pengyu Liu
2014-01-01
The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistics of the motion vector (MV) distribution. Then, an MV distribution prediction method is designed, covering prediction of both the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is performed with the new search patterns. Experimental results show that more than 50% of total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% without compromising rate-distortion performance.
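As a baseline for the search patterns discussed above, exhaustive full-search block matching with a sum-of-absolute-differences (SAD) cost shows what fast algorithms like UMHexagonS approximate with far fewer search points (toy image sizes; the function names are illustrative, not from the JM reference software):

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-size blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
                          for a, b in zip(ra, rb))

def full_search(ref, cur, bx, by, bsize, radius):
    """Exhaustive block-matching motion estimation.

    Returns the motion vector (dx, dy) within +/-radius that minimizes
    the SAD between the current block at (bx, by) and the displaced
    reference block, together with the minimum cost.
    """
    cur_blk = [row[bx:bx + bsize] for row in cur[by:by + bsize]]
    best, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > len(ref) or x + bsize > len(ref[0]):
                continue  # candidate window falls outside the frame
            cand = [row[x:x + bsize] for row in ref[y:y + bsize]]
            cost = sad(cur_blk, cand)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best, best_cost
```

A fast ME scheme visits only a structured subset of these candidate displacements, which is where the adaptive search patterns of the paper come in.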
Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train.
Bengua, Johann A; Phien, Ho N; Tuan, Hoang Duong; Do, Minh N
2017-05-01
This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed, as well as two new algorithms for their solution. The first, called simple low-rank tensor completion via TT (SiLRTC-TT), is intimately related to minimizing a nuclear norm based on the TT rank. The second, called tensor completion by parallel matrix factorization via TT (TMac-TT), is based on a multilinear matrix factorization model that approximates the TT rank of a tensor. A tensor augmentation scheme that transforms a low-order tensor to higher orders is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery show the clear advantage of our method over all other methods.
Development of explicit solution scheme for the MATRA-LMR code and test calculation
International Nuclear Information System (INIS)
Jeong, H. Y.; Ha, K. S.; Chang, W. P.; Kwon, Y. M.; Jeong, K. S.
2003-01-01
A local blockage in a subassembly of a liquid metal reactor is of particular importance because local sodium boiling could occur downstream of the blockage and the integrity of the fuel clad could be threatened. An explicit solution scheme for the MATRA-LMR code is developed to analyze flow blockage in a subassembly of a liquid metal cooled reactor. In the present study, the capability of the code is extended to the analysis of complete blockage of one or more subchannels. The results of the developed solution scheme show very good agreement with the results obtained from the implicit scheme for experiments on a flow channel without any blockage. The applicability of the code is also evaluated for two typical experiments in a blocked channel. Through a sensitivity study, it is shown that the explicit scheme of MATRA-LMR predicts the flow and temperature profiles after blockage reasonably well if the effect of the wire is suitably modeled. The simple assumption in the wire-forcing function is effective for the unblocked case or for blockage with lower velocity. A different type of wire-forcing function describing the velocity reduction after blockage, or an accurate distributed-resistance model, is required for further improved predictions.
Code-Hopping Based Transmission Scheme for Wireless Physical-Layer Security
Directory of Open Access Journals (Sweden)
Liuguo Yin
2018-01-01
Due to the broadcast and time-varying natures of wireless channels, traditional communication systems that provide data encryption at the application layer suffer many challenges, such as error diffusion. In this paper, we propose a code-hopping based secrecy transmission scheme that uses dynamic nonsystematic low-density parity-check (LDPC) codes and an automatic repeat-request (ARQ) mechanism to jointly encode and encrypt source messages at the physical layer. In this scheme, secret keys at the transmitter and the legitimate receiver are generated dynamically from the source messages that have been transmitted successfully. During the transmission, each source message is jointly encoded and encrypted by a parity-check matrix, which is dynamically selected from a set of LDPC matrices based on the shared dynamic secret key. As for the eavesdropper (Eve), uncorrectable decoding errors prevent her from generating the same secret key as the legitimate parties; thus she cannot select the correct LDPC matrix to recover the source message. We demonstrate that our scheme is compatible with traditional cryptosystems and enhances security without sacrificing error-correction performance. Numerical results show that the bit error rate (BER) of Eve approaches 0.5 as the number of transmitted source messages increases and that the security gap of the system is small.
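The key idea, both legitimate sides deriving the next parity-check-matrix index from previously delivered messages, can be sketched with a hash chain. SHA-256 and the pool size of 16 are illustrative choices here, not the paper's actual key-generation construction:

```python
import hashlib

def next_key(prev_key: bytes, delivered_msg: bytes) -> bytes:
    # transmitter and legitimate receiver both update the shared key
    # from every source message that was delivered (ACKed) successfully
    return hashlib.sha256(prev_key + delivered_msg).digest()

def select_matrix(key: bytes, pool_size: int) -> int:
    # map the current key to an index into the pool of LDPC matrices
    return int.from_bytes(key[:4], "big") % pool_size
```

An eavesdropper who fails to decode even one message updates her chain with the wrong input and loses synchronization with the matrix-hopping pattern from that point on.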
Benchmark studies of BOUT++ code and TPSMBI code on neutral transport during SMBI
Energy Technology Data Exchange (ETDEWEB)
Wang, Y.H. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); University of Science and Technology of China, Hefei 230026 (China); Center for Magnetic Fusion Theory, Chinese Academy of Sciences, Hefei 230031 (China); Wang, Z.H., E-mail: zhwang@swip.ac.cn [Southwestern Institute of Physics, Chengdu 610041 (China); Guo, W., E-mail: wfguo@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Center for Magnetic Fusion Theory, Chinese Academy of Sciences, Hefei 230031 (China); Ren, Q.L. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Sun, A.P.; Xu, M.; Wang, A.K. [Southwestern Institute of Physics, Chengdu 610041 (China); Xiang, N. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Center for Magnetic Fusion Theory, Chinese Academy of Sciences, Hefei 230031 (China)
2017-06-09
SMBI (supersonic molecule beam injection) plays an important role in tokamak plasma fuelling, density control and ELM mitigation in magnetic confinement plasma physics, and has been widely used in many tokamaks. The trans-neut module of the BOUT++ code is the only large-scale parallel 3D fluid code used to simulate the SMBI fueling process, while the TPSMBI (transport of supersonic molecule beam injection) code is a recently developed 1D fluid code for SMBI. In order to find a method to increase SMBI fueling efficiency in H-mode plasma, especially for ITER, it is important first to verify the codes. A benchmark study between the trans-neut module of the BOUT++ code and the TPSMBI code on the radial transport dynamics of neutrals during SMBI has been successfully carried out in both slab and cylindrical coordinates. The simulation results from the trans-neut module of BOUT++ and from TPSMBI agree very well with each other. Different upwind schemes have been compared for handling the sharp gradient front region during the inward propagation of SMBI, for the sake of code stability. The influence of the WENO3 (third-order weighted essentially non-oscillatory) and third-order upwind schemes on the benchmark results is also discussed. - Highlights: • A 1D model of SMBI has been developed. • Benchmarks of the BOUT++ and TPSMBI codes have been completed for the first time. • The influence of the WENO3 and third-order upwind schemes on the benchmark results is discussed.
Network coding multiuser scheme for indoor visible light communications
Zhang, Jiankun; Dang, Anhong
2017-12-01
Visible light communication (VLC) is a unique alternative for indoor data transfer and is developing beyond point-to-point links. However, for realizing high-capacity networks, VLC faces challenges including the constrained bandwidth of the optical access point and random occlusion. A network coding scheme for VLC (NC-VLC) is proposed, with increased throughput and system robustness. Based on the Lambertian illumination model, the theoretical decoding failure probability of the multiuser NC-VLC system is derived, and the impact of the system parameters on the performance is analyzed. Experiments demonstrate the proposed scheme successfully in the indoor multiuser scenario. These results indicate that the NC-VLC system performs well under link loss and random occlusion.
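The throughput gain of network coding in a broadcast medium like VLC can be sketched with the classic XOR example: the access point broadcasts the XOR of two users' packets in a single transmission, and each user recovers the other's packet using the one it already holds (toy packets; this is the generic principle, not the paper's specific NC-VLC design):

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# The AP holds p1 (known to user 1, wanted by user 2) and p2 (known to
# user 2, wanted by user 1).  Instead of two transmissions it broadcasts
# the single coded packet p1 XOR p2; each user decodes with its own packet.
```

One broadcast thus serves two receivers, which is the source of the throughput increase claimed for the multiuser scheme.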
Systematic Luby Transform codes as incremental redundancy scheme
CSIR Research Space (South Africa)
Grobler, TL
2011-09-01
Authors: T. L. Grobler, E. R. Ackermann, J. C. Olivier and A. J. van Zyl. Affiliations: Department of Electrical, Electronic and Computer Engineering, University of Pretoria, Pretoria 0002, South Africa; Defence, Peace, Safety and Security (DPSS), Council for Scientific and Industrial Research (CSIR), Pretoria 0001, South Africa; Department of Mathematics and Applied Mathematics, University of Pretoria, Pretoria 0002, South Africa.
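The fountain-code principle behind (systematic) Luby Transform codes can be sketched as XOR encoding over random subsets of source symbols plus a peeling decoder. The uniform degree distribution below is a simplification for brevity; a real LT code draws degrees from the robust soliton distribution:

```python
import random

def lt_encode(source, num_symbols, seed=1):
    """Produce LT-coded symbols: each is the XOR of a random subset of
    the k source symbols (degrees drawn uniformly here for brevity)."""
    rng = random.Random(seed)
    k = len(source)
    symbols = []
    for _ in range(num_symbols):
        idx = set(rng.sample(range(k), rng.randint(1, k)))
        val = 0
        for i in idx:
            val ^= source[i]
        symbols.append((idx, val))
    return symbols

def lt_decode(symbols, k):
    """Peeling decoder: repeatedly find a coded symbol whose neighbour
    set contains exactly one unresolved source symbol and solve for it."""
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for idx, val in symbols:
            unknown = idx - known.keys()
            if len(unknown) == 1:
                for j in idx & known.keys():
                    val ^= known[j]       # strip already-resolved symbols
                known[unknown.pop()] = val
                progress = True
    return [known.get(i) for i in range(k)]
```

In an incremental-redundancy (hybrid ARQ) setting, the transmitter keeps emitting fresh coded symbols until the receiver's peeling decoder completes, which is the role the paper assigns to systematic LT codes.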
Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.
Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A
2016-08-12
With prevalent attacks on communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes at low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantage of the proposed scheme is that it can detect eavesdropping without joint quantum operations, and permits secret sharing for any number of classical participants not less than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works under dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
Kim, Dong-Sun; Kwon, Jin-San
2014-09-18
Research on real-time health systems has received great attention in recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for the low-complexity joint coder. The proposed joint-coding decision method determines the optimal joint-coding operation based on the relationship between the cross correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel-based biosignal lossless data compressor.
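The joint-coding decision described above, driven by the cross correlation of channel residuals, can be sketched as follows. The threshold value and the difference-coding of the joint branch are illustrative assumptions, not the MPEG-4 ALS decision rule:

```python
def norm_xcorr(a, b):
    """Normalized cross correlation of two residual signals."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

def choose_joint(res_a, res_b, threshold=0.6):
    """Joint-coding decision: code channel B as a difference from the
    reference channel A only when their residuals are strongly correlated
    (otherwise the difference signal would cost more bits, not fewer)."""
    if abs(norm_xcorr(res_a, res_b)) >= threshold:
        return "joint", [y - x for x, y in zip(res_a, res_b)]
    return "independent", res_b
```

Highly correlated residuals yield a small difference signal that entropy-codes cheaply, which is the rationale for tying the decision to the correlation/compression-ratio relationship.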
A New Quantum Key Distribution Scheme Based on Frequency and Time Coding
International Nuclear Information System (INIS)
Chang-Hua, Zhu; Chang-Xing, Pei; Dong-Xiao, Quan; Jing-Liang, Gao; Nan, Chen; Yun-Hui, Yi
2010-01-01
A new scheme of quantum key distribution (QKD) using frequency and time coding is proposed, in which the security is based on the frequency-time uncertainty relation. In this scheme, the binary information sequence is encoded randomly on either the central frequency or the time delay of the optical pulse at the sender. The central frequency of the single-photon pulse is set to ω1 for bit 0 and to ω2 for bit 1 when frequency coding is selected, whereas the single-photon pulse is not delayed for bit 0 and is delayed by τ for bit 1 when time coding is selected. At the receiver, either the frequency or the time delay of the pulse is measured randomly, and the final key is obtained after basis comparison, data reconciliation and privacy amplification. With the proposed method, the effect of noise in the fiber channel and the environment on the QKD system can be reduced effectively.
Optimized low-order explicit Runge-Kutta schemes for high-order spectral difference method
Parsani, Matteo
2012-01-01
Optimal explicit Runge-Kutta (ERK) schemes with large stable step sizes are developed for method-of-lines discretizations based on the spectral difference (SD) spatial discretization on quadrilateral grids. These methods involve many stages and provide the optimal linearly stable time step for a prescribed SD spectrum and the minimum leading truncation error coefficient, while admitting a low-storage implementation. Using a large number of stages, the new ERK schemes lead to efficiency improvements larger than 60% over standard ERK schemes for 4th- and 5th-order spatial discretization.
International Nuclear Information System (INIS)
Gomez-Torres, Armando Miguel; Sanchez-Espinoza, Victor Hugo; Ivanov, Kostadin; Macian-Juan, Rafael
2012-01-01
Highlights: ► A fixed point iteration (FPI) is implemented in DYNSUB. ► Comparisons between the explicit scheme and the FPI are done. ► The FPI scheme allows moving from one time step to the next with a converged solution. ► FPI allows the use of larger time steps without compromising the accuracy of results. ► FPI results are promising and represent an option for optimizing calculations. -- Abstract: DYNSUB is a novel two-way pin-based coupling of the simplified transport (SP3) version of DYN3D with the subchannel code SUBCHANFLOW. The new coupled code system allows for a more realistic description of the core behaviour under steady-state and transient conditions, and has been described in detail in Part I of this paper. In addition to the explicit coupling developed and described in Part I, a nested loop iteration or fixed point iteration (FPI) is implemented in DYNSUB. A FPI is not an implicit scheme but approximates one by adding an iteration loop to the existing explicit scheme. The advantage of the method is that it allows the use of larger time steps; however, the nested loop iteration may take much more time to reach a converged solution and could be less efficient than the explicit scheme with small time steps. A comparison of the two temporal schemes is performed. The results using FPI are very promising and represent a very good option for optimizing computational times without losing accuracy. However, it is also shown that a FPI scheme can produce inaccurate results if the time step is not chosen in accordance with the analyzed transient.
SOLVING FRACTIONAL-ORDER COMPETITIVE LOTKA-VOLTERRA MODEL BY NSFD SCHEMES
Directory of Open Access Journals (Sweden)
S.ZIBAEI
2016-12-01
In this paper, we introduce fractional order into a competitive Lotka-Volterra prey-predator model. We discuss the stability analysis of this fractional system. The non-standard finite difference (NSFD) scheme is implemented to study the dynamic behaviors of the fractional-order Lotka-Volterra system. The proposed non-standard numerical scheme is compared with the forward Euler and fourth-order Runge-Kutta methods. Numerical results show that the NSFD approach is easy to implement and accurate when applied to the fractional-order Lotka-Volterra model.
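A flavour of the NSFD construction can be given for the classical (integer-order) Lotka-Volterra system; the denominator function and the nonlocal treatment of the nonlinear terms below follow Mickens' general NSFD rules and are a simplified stand-in, not the paper's fractional-order scheme:

```python
import math

def nsfd_lotka_volterra(x0, y0, a, b, c, d, h, steps):
    """Positivity-preserving NSFD scheme for the classical Lotka-Volterra
    system dx/dt = x(a - b*y), dy/dt = y(-c + d*x).

    The denominator function phi(h) = 1 - exp(-h) replaces the step size
    h, and the loss terms are evaluated at the new time level, so every
    update is a product/quotient of positive quantities.
    """
    phi = 1.0 - math.exp(-h)
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(steps):
        x = x * (1 + phi * a) / (1 + phi * b * y)
        y = y * (1 + phi * d * x) / (1 + phi * c)
        xs.append(x)
        ys.append(y)
    return xs, ys
```

With a step as large as h = 0.5 a forward Euler update x + h*x*(a - b*y) can already produce negative populations, while the NSFD update cannot, which illustrates why NSFD schemes are attractive for population models.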
A test data compression scheme based on irrational numbers stored coding.
Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan
2014-01-01
The volume of test data has already become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS) coding, is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for converting floating-point numbers to irrational numbers precisely is given. Experimental results for some ISCAS 89 benchmarks show that the compression effect of the proposed scheme is better than coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.
Huang, Jen-Fa; Meng, Sheng-Hui; Lin, Ying-Chen
2014-11-01
The optical code-division multiple-access (OCDMA) technique is considered a good candidate for providing optical layer security. An enhanced OCDMA network security mechanism is presented that switches among pseudonoise (PN) maximal-length sequence (M-sequence) signature codes to protect against eavesdropping. Signature codes unique to individual OCDMA-network users are reconfigured according to the register state of the controlling electrical shift registers. Examples of signature reconfiguration following state switching of the controlling shift register, for both the network user and the eavesdropper, are numerically illustrated. Dynamically changing the PN state of the shift register to reconfigure the user signature sequence is shown; this hinders eavesdroppers' efforts to decode correct data sequences. The proposed scheme increases the probability of eavesdroppers committing decoding errors and thereby substantially enhances the degree of an OCDMA network's confidentiality.
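The controlling shift register above generates M-sequences; a minimal Fibonacci LFSR sketch (a 4-bit register with the primitive feedback polynomial x^4 + x + 1, chosen purely for illustration):

```python
def lfsr_msequence(seed, length):
    """Generate a PN chip sequence from a 4-bit Fibonacci LFSR.

    The feedback taps implement a primitive polynomial, so any nonzero
    4-bit seed yields the maximal-length 15-chip M-sequence; changing
    the seed (register state) shifts the sequence phase, which is the
    signature-reconfiguration mechanism the abstract describes.
    """
    reg = [(seed >> i) & 1 for i in range(4)]
    out = []
    for _ in range(length):
        out.append(reg[3])          # output chip
        fb = reg[2] ^ reg[3]        # primitive feedback
        reg = [fb] + reg[:3]        # shift the register
    return out
```

The defining M-sequence properties, period 2^n - 1 and near-balance of ones and zeros, follow directly, and a state switch of the register yields a different signature phase.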
Reduced-Rank Chip-Level MMSE Equalization for the 3G CDMA Forward Link with Code-Multiplexed Pilot
Directory of Open Access Journals (Sweden)
Goldstein J Scott
2002-01-01
This paper deals with synchronous direct-sequence code-division multiple-access (CDMA) transmission using orthogonal channel codes in frequency-selective multipath, motivated by the forward link in 3G CDMA systems. The chip-level minimum mean square error (MMSE) estimate of the (multiuser) synchronous sum signal transmitted by the base, followed by a correlate-and-sum, has been shown to perform very well in saturated systems compared to a Rake receiver. In this paper, we present reduced-rank, chip-level MMSE estimation based on the multistage nested Wiener filter (MSNWF). We show that, for the case of a known channel, only a small number of stages of the MSNWF is needed to achieve near full-rank MSE performance over a practical signal-to-noise ratio (SNR) range. This holds true even for an edge-of-cell scenario, where two base stations contribute near equal-power signals, as well as for the single base station case. We then utilize the code-multiplexed pilot channel to train the MSNWF coefficients and show that an adaptive MSNWF operating in a very low-rank subspace performs slightly better than full-rank recursive least squares (RLS) and significantly better than least mean squares (LMS). An important advantage of the MSNWF is that it can be implemented in a lattice structure, which involves significantly less computation than RLS. We also present structured MMSE equalizers that exploit the estimate of the multipath arrival times and the underlying channel structure to project the data vector onto a much lower-dimensional subspace. Specifically, due to the sparseness of high-speed CDMA multipath channels, the channel vector lies in the subspace spanned by a small number of columns of the pulse-shaping filter convolution matrix. We demonstrate that the performance of these structured low-rank equalizers is much superior to unstructured equalizers in terms of convergence speed and error rates.
Universal block diagram based modeling and simulation schemes for fractional-order control systems.
Bai, Lu; Xue, Dingyü
2017-05-08
Universal block diagram based schemes are proposed for modeling and simulating fractional-order control systems in this paper. A fractional operator block in Simulink is designed to evaluate the fractional-order derivative and integral. Based on this block, fractional-order control systems with zero initial conditions can be modeled conveniently. For modeling a system with nonzero initial conditions, an auxiliary signal is constructed in the compensation scheme. Since the compensation scheme is very complicated, the integrator chain scheme is further proposed to simplify the modeling procedure. The accuracy and effectiveness of the schemes are assessed in examples; the computational results show that the block diagram scheme is efficient for all Caputo fractional-order ordinary differential equations (FODEs) of any complexity, including implicit Caputo FODEs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
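A fractional operator like the Simulink block described above is typically discretized with the Grünwald-Letnikov formula; a minimal numerical sketch (a generic discretization, not the paper's block implementation):

```python
def gl_weights(alpha, n):
    """Grünwald-Letnikov weights c_j = (-1)^j * C(alpha, j),
    via the standard recursion c_j = c_{j-1} * (1 - (alpha + 1)/j)."""
    c = [1.0]
    for j in range(1, n + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    return c

def gl_derivative(f, t, alpha, h):
    """Approximate the order-alpha Grünwald-Letnikov fractional
    derivative of f at time t, using history samples back to 0
    with step h.  For alpha = 1 this reduces to the backward
    difference (f(t) - f(t - h)) / h, and for alpha = 0 to f(t)."""
    n = int(t / h)
    c = gl_weights(alpha, n)
    return sum(c[j] * f(t - j * h) for j in range(n + 1)) / h ** alpha
```

The memory term (the weighted sum over the whole history) is exactly what makes block-diagram simulation of fractional systems harder than the integer-order case.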
Ji, Xing; Zhao, Fengxiang; Shyy, Wei; Xu, Kun
2018-03-01
Most high-order computational fluid dynamics (CFD) methods for compressible flows are based on a Riemann solver for the flux evaluation and a Runge-Kutta (RK) time-stepping technique for temporal accuracy. The advantage of this kind of space-time separation approach is easy implementation and stability enhancement by introducing more middle stages. However, nth-order time accuracy needs no fewer than n stages for the RK method, which can be very time and memory consuming due to the reconstruction at each stage for a high-order method. On the other hand, the multi-stage multi-derivative (MSMD) method can achieve the same order of time accuracy using fewer middle stages by exploiting the time derivatives of the flux function. For traditional Riemann-solver-based CFD methods, the lack of time derivatives in the flux function prevents direct implementation of the MSMD method. However, the gas kinetic scheme (GKS) provides such a time-accurate evolution model. By combining the second-order or third-order GKS flux functions with the MSMD technique, a family of high-order gas kinetic methods can be constructed. As an extension of the previous 2-stage 4th-order GKS, 5th-order schemes with 2 and 3 stages are developed in this paper. Based on the same 5th-order WENO reconstruction, the performance of gas kinetic schemes from the 2nd- to the 5th-order time accurate methods is evaluated. The results show that the 5th-order scheme can achieve the theoretical order of accuracy for the Euler equations and presents accurate Navier-Stokes solutions as well, due to the coupling of inviscid and viscous terms in the GKS formulation. In comparison with the Riemann-solver-based 5th-order RK method, the high-order GKS has advantages in terms of efficiency, accuracy, and robustness for all test cases. The 4th- and 5th-order GKS have the same robustness as the 2nd-order scheme for capturing discontinuous solutions. The current high order MSMD GKS is a
Optical Code-Division Multiple-Access and Wavelength Division Multiplexing: Hybrid Scheme Review
P. Susthitha Menon; Sahbudin Shaari; Isaac A.M. Ashour; Hesham A. Bakarman
2012-01-01
Problem statement: Hybrid Optical Code-Division Multiple-Access (OCDMA) and Wavelength-Division Multiplexing (WDM) have flourished as successful schemes for expanding the transmission capacity as well as enhancing the security of OCDMA. However, a comprehensive review of this hybrid system is currently lacking. Approach: The purpose of this paper is to review the literature on OCDMA-WDM overlay systems, including our hybrid approach of one-dimensional coding of SAC OCDMA with WDM si...
LDPC product coding scheme with extrinsic information for bit patterned media recording
Jeong, Seongkwon; Lee, Jaejin
2017-05-01
Since the density limit of the current perpendicular magnetic storage system will soon be reached, bit patterned media recording (BPMR) is a promising candidate for the next generation storage system to achieve an areal density beyond 1 Tb/in2. Each recording bit is stored in a fabricated magnetic island and the space between the magnetic islands is nonmagnetic in BPMR. To approach recording densities of 1 Tb/in2, the spacing of the magnetic islands must be less than 25 nm. Consequently, severe inter-symbol interference (ISI) and inter-track interference (ITI) occur. ITI and ISI degrade the performance of BPMR. In this paper, we propose a low-density parity check (LDPC) product coding scheme that exploits extrinsic information for BPMR. This scheme shows an improved bit error rate performance compared to that in which one LDPC code is used.
Evaluation Codes from Order Domain Theory
DEFF Research Database (Denmark)
Andersen, Henning Ejnar; Geil, Hans Olav
2008-01-01
The celebrated Feng-Rao bound estimates the minimum distance of codes defined by means of their parity check matrices. From the Feng-Rao bound it is clear how to improve a large family of codes by leaving out certain rows in their parity check matrices. In this paper we derive a simple lower bound on the minimum distance of codes defined by means of their generator matrices. From our bound it is clear how to improve a large family of codes by adding certain rows to their generator matrices. The new bound is very much related to the Feng-Rao bound as well as to Shibuya and Sakaniwa's bound in [28]. Our bound is easily extended to deal with any generalized Hamming weights. We interpret our methods into the setting of order domain theory. In this way we fill in an obvious gap in the theory of order domains. [28] T. Shibuya and K. Sakaniwa, A Dual of Well-Behaving Type Designed Minimum Distance, IEICE...
Tiwari, Samrat Vikramaditya; Sewaiwar, Atul; Chung, Yeon-Ho
2015-10-01
In optical wireless communications, multiple channel transmission is an attractive solution to enhancing capacity and system performance. A new modulation scheme called color coded multiple access (CCMA) for bidirectional multiuser visible light communications (VLC) is presented for smart home applications. The proposed scheme uses red, green and blue (RGB) light emitting diodes (LED) for downlink and phosphor based white LED (P-LED) for uplink to establish a bidirectional VLC and also employs orthogonal codes to support multiple users and devices. The downlink transmission for data user devices and smart home devices is provided using red and green colors from the RGB LEDs, respectively, while uplink transmission from both types of devices is performed using the blue color from P-LEDs. Simulations are conducted to verify the performance of the proposed scheme. It is found that the proposed bidirectional multiuser scheme is efficient in terms of data rate and performance. In addition, since the proposed scheme uses RGB signals for downlink data transmission, it provides flicker-free illumination that would lend itself to multiuser VLC system for smart home applications.
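The orthogonal-code multiple access underlying schemes like the one above can be illustrated with a toy spreading example. Walsh-Hadamard codes are used here purely for illustration; the abstract does not specify which orthogonal codes CCMA employs:

```python
import numpy as np

def walsh(n):
    """Walsh-Hadamard code matrix of order n (n must be a power of two)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh(4)
# Two users spread their bits with orthogonal rows; transmissions add on the channel.
tx = 1 * codes[1] + (-1) * codes[2]
# Despreading: correlate with each user's own code and normalise by the code length.
u1 = tx @ codes[1] / 4
u2 = tx @ codes[2] / 4
```

Because the rows are mutually orthogonal, each correlator recovers only its own user's bit, which is what lets multiple users and devices share the same optical channel.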
A Test Data Compression Scheme Based on Irrational Numbers Stored Coding
Directory of Open Access Journals (Sweden)
Hai-feng Wu
2014-01-01
Full Text Available The testing problem has become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS), is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting a floating-point number into an irrational number is given. Experimental results for some ISCAS 89 benchmarks show that the compression effect of the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.
Directory of Open Access Journals (Sweden)
Lingyang Song
2007-04-01
Full Text Available We report a simple differential modulation scheme for quasi-orthogonal space-time block codes. A new class of quasi-orthogonal coding structures that can provide partial transmit diversity is presented for various numbers of transmit antennas. Differential encoding and decoding can be simplified for differential Alamouti-like codes by grouping the signals in the transmitted matrix and decoupling the detection of data symbols, respectively. The new scheme can achieve constant amplitude of transmitted signals, and avoid signal constellation expansion; in addition it has a linear signal detector with very low complexity. Simulation results show that these partial-diversity codes can provide very useful results at low SNR for current communication systems. Extension to more than four transmit antennas is also considered.
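The differential encoding idea can be sketched for the Alamouti-like case: each pair of PSK symbols is mapped to a scaled-unitary matrix, the transmitted block is the running product, and the receiver recovers the data matrix from two consecutive blocks without channel knowledge. This is only a minimal illustration of the principle; the paper's quasi-orthogonal construction for more antennas is more general:

```python
import numpy as np

def alamouti(s1, s2):
    """2x2 Alamouti matrix for a symbol pair."""
    return np.array([[s1, s2], [-np.conj(s2), np.conj(s1)]])

# A PSK alphabet keeps the transmit amplitude constant (no constellation expansion).
psk = np.exp(2j * np.pi * np.arange(4) / 4)

def diff_encode(pairs):
    """Differential encoding: S_t = S_{t-1} @ U_t, starting from a reference block."""
    S = np.eye(2, dtype=complex)
    blocks = [S]
    for s1, s2 in pairs:
        U = alamouti(s1, s2) / np.sqrt(abs(s1) ** 2 + abs(s2) ** 2)  # unitary
        S = S @ U
        blocks.append(S)
    return blocks

def diff_decode(blocks):
    """In the noiseless case, U_t = S_{t-1}^H S_t since each block is unitary."""
    return [prev.conj().T @ cur for prev, cur in zip(blocks, blocks[1:])]
```

Since detection only needs products of consecutive received blocks, the linear detector needs no channel estimate, matching the low-complexity claim above.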
Energy Technology Data Exchange (ETDEWEB)
Greene, K.R.; Fletcher, C.D.; Gottula, R.C.; Lindquist, T.R.; Stitt, B.D. [Framatome ANP, Richland, WA (United States)
2001-07-01
Licensing analyses of Nuclear Regulatory Commission (NRC) Standard Review Plan (SRP) Chapter 15 non-LOCA transients are an important part of establishing operational safety limits and design limits for nuclear power plants. The applied codes and methods are generally qualified using traditional methods of benchmarking and assessment, sample problems, and demonstration of conservatism. Rigorous formal methods for developing codes and methodologies have been created and applied to qualify realistic methods for Large Break Loss-of-Coolant Accidents (LBLOCAs). This methodology, Code Scaling, Applicability, and Uncertainty (CSAU), is a very demanding, resource-intensive process to apply. It would be challenging to apply a comprehensive and complete CSAU level of analysis, individually, to each of the more than 30 non-LOCA transients that comprise Chapter 15 events. However, certain elements of the process can be easily adapted to improve the quality of the codes and methods used to analyze non-LOCA transients. One of these elements is the Phenomena Identification and Ranking Table (PIRT). This paper presents the results of an informally constructed PIRT that applies to non-LOCA transients for Pressurized Water Reactors (PWRs) of the Westinghouse and Combustion Engineering designs. A group of experts in thermal-hydraulics and safety analysis identified and ranked the phenomena. To begin the process, the PIRT was initially performed individually by each expert. Then, through group interaction and discussion, a consensus was reached on both the significant phenomena and the appropriate ranking. The paper also discusses using the PIRT as an aid to qualify a 'conservative' system code and methodology. Once agreement was obtained on the phenomena and ranking, the table was divided into six functional groups, by nature of the transients, along the same lines as Chapter 15. Then, assessment and disposition of the significant phenomena was performed. The PIRT and
A combined QSAR and partial order ranking approach to risk assessment.
Carlsen, L
2006-04-01
QSAR generated data appear as an attractive alternative to experimental data as foreseen in the proposed new chemicals legislation REACH. A preliminary risk assessment for the aquatic environment can be based on few factors, i.e. the octanol-water partition coefficient (Kow), the vapour pressure (VP) and the potential biodegradability of the compound in combination with the predicted no-effect concentration (PNEC) and the actual tonnage in which the substance is produced. Application of partial order ranking, allowing simultaneous inclusion of several parameters leads to a mutual prioritisation of the investigated substances, the prioritisation possibly being further analysed through the concept of linear extensions and average ranks. The ranking uses endpoint values (log Kow and log VP) derived from strictly linear 'noise-deficient' QSAR models as input parameters. Biodegradation estimates were adopted from the BioWin module of the EPI Suite. The population growth impairment of Tetrahymena pyriformis was used as a surrogate for fish lethality.
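The dominance comparison at the heart of partial order ranking can be sketched as follows. The substance values below are invented for illustration (all criteria oriented so that larger means higher concern), and counting dominated elements is only a crude surrogate for the paper's average ranks over linear extensions:

```python
def dominates(a, b):
    """a dominates b if a >= b in every criterion and > in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# hypothetical (log Kow, log VP, persistence) tuples for four substances
substances = {
    "A": (4.2, -1.0, 0.90),
    "B": (3.1, -2.5, 0.40),
    "C": (4.8, -0.5, 0.95),
    "D": (2.0, -3.0, 0.20),
}

def hasse_rank(items):
    """Order substances by how many others each one dominates in the partial order."""
    scores = {k: sum(dominates(v, w) for j, w in items.items() if j != k)
              for k, v in items.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Incomparable pairs (each better on a different criterion) dominate neither way, which is exactly why linear extensions are needed to turn the partial order into a full prioritisation.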
Chao, Luo
2015-11-01
In this paper, a novel digital secure communication scheme is proposed. Unlike the usual secure communication schemes based on chaotic synchronization, the proposed scheme employs asynchronous communication, which avoids the weakness of synchronous systems, namely their susceptibility to environmental interference. Moreover, with regard to transmission errors and data loss in the process of communication, the proposed scheme is able to perform error checking and error correction in real time. In order to guarantee security, the fractional-order complex chaotic system with shifting of order is utilized to modulate the transmitted signal, which has high nonlinearity and complexity in both the frequency and time domains. The corresponding numerical simulations demonstrate the effectiveness and feasibility of the scheme.
A VLSI Implementation of Rank-Order Searching Circuit Employing a Time-Domain Technique
Directory of Open Access Journals (Sweden)
Trong-Tu Bui
2013-01-01
Full Text Available We present a compact and low-power rank-order searching (ROS) circuit that can be used for building associative memories and rank-order filters (ROFs), by employing time-domain computation and floating-gate MOS techniques. The architecture inherits the accuracy and programmability of digital implementations as well as the compactness and low power consumption of analog ones. Identification is implemented as the first-priority function; filtering would be implemented once location identification has been carried out. The prototype circuit was designed and fabricated in a 0.18 μm CMOS technology. It consumes only 132.3 μW for an eight-input demonstration case.
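In software terms, the filtering function of a rank-order filter reduces to selecting a given rank inside a sliding window; the circuit above computes the same selection in the time domain. A minimal sketch (edge handling by replication is an assumption):

```python
def rank_order_filter(signal, window, rank):
    """Slide a window over the signal and output the element of the given rank
    (rank 0 = minimum, window - 1 = maximum; the middle rank is a median filter)."""
    half = window // 2
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    return [sorted(padded[i:i + window])[rank] for i in range(len(signal))]

# Median filtering (middle rank) suppresses the impulses at indices 2 and 6.
noisy = [1, 1, 9, 1, 1, 1, 0, 1]
cleaned = rank_order_filter(noisy, 3, 1)
```

Choosing rank 0 or window-1 instead yields erosion- or dilation-like behaviour, which is why a programmable rank makes one circuit cover a family of filters.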
On the Need of Novel Medium Access Control Schemes for Network Coding enabled Wireless Mesh Networks
DEFF Research Database (Denmark)
Paramanathan, Achuthan; Pahlevani, Peyman; Roetter, Daniel Enrique Lucani
2013-01-01
that network coding will improve the throughput in such systems, but our novel medium access scheme improves the performance in the cross topology by another 66 % for network coding and 150 % for classical forwarding in theory. These gains translate in a theoretical gain of 33 % of network coding over...
Nieh, Ta-Chun; Yang, Chao-Chin; Huang, Jen-Fa
2011-08-01
A complete complementary/prime/shifted prime (CPS) code family for the optical code-division multiple-access (OCDMA) system is proposed. Based on the ability of complete complementary (CC) code, the multiple-access interference (MAI) can be suppressed and eliminated via spectral amplitude coding (SAC) OCDMA system under asynchronous/synchronous transmission. By utilizing the shifted prime (SP) code in the SAC scheme, the hardware implementation of encoder/decoder can be simplified with a reduced number of optical components, such as arrayed waveguide grating (AWG) and fiber Bragg grating (FBG). This system has a superior performance as compared to previous bipolar-bipolar coding OCDMA systems.
Compact high order schemes with gradient-direction derivatives for absorbing boundary conditions
Gordon, Dan; Gordon, Rachel; Turkel, Eli
2015-09-01
We consider several compact high order absorbing boundary conditions (ABCs) for the Helmholtz equation in three dimensions. A technique called "the gradient method" (GM) for ABCs is also introduced and combined with the high order ABCs. GM is based on the principle of using directional derivatives in the direction of the wavefront propagation. The new ABCs are used together with the recently introduced compact sixth order finite difference scheme for variable wave numbers. Experiments on problems with known analytic solutions produced very accurate results, demonstrating the efficacy of the high order schemes, particularly when combined with GM. The new ABCs are then applied to the SEG/EAGE Salt model, showing the advantages of the new schemes.
An accurate scheme by block method for third order ordinary ...
African Journals Online (AJOL)
problems of ordinary differential equations is presented in this paper. The approach of collocation approximation is adopted in the derivation of the scheme and then the scheme is applied as simultaneous integrator to special third order initial value problem of ordinary differential equations. This implementation strategy is ...
Construction of Low Dissipative High Order Well-Balanced Filter Schemes for Non-Equilibrium Flows
Wang, Wei; Yee, H. C.; Sjogreen, Bjorn; Magin, Thierry; Shu, Chi-Wang
2009-01-01
The goal of this paper is to generalize the well-balanced approach for non-equilibrium flow studied by Wang et al. [26] to a class of low dissipative high order shock-capturing filter schemes and to explore more advantages of well-balanced schemes in reacting flows. The class of filter schemes developed by Yee et al. [30], Sjoegreen & Yee [24] and Yee & Sjoegreen [35] consist of two steps, a full time step of spatially high order non-dissipative base scheme and an adaptive nonlinear filter containing shock-capturing dissipation. A good property of the filter scheme is that the base scheme and the filter are stand alone modules in designing. Therefore, the idea of designing a well-balanced filter scheme is straightforward, i.e., choosing a well-balanced base scheme with a well-balanced filter (both with high order). A typical class of these schemes shown in this paper is the high order central difference schemes/predictor-corrector (PC) schemes with a high order well-balanced WENO filter. The new filter scheme with the well-balanced property will gather the features of both filter methods and well-balanced properties: it can preserve certain steady state solutions exactly; it is able to capture small perturbations, e.g., turbulence fluctuations; it adaptively controls numerical dissipation. Thus it shows high accuracy, efficiency and stability in shock/turbulence interactions. Numerical examples containing 1D and 2D smooth problems, 1D stationary contact discontinuity problem and 1D turbulence/shock interactions are included to verify the improved accuracy, in addition to the well-balanced behavior.
Hazard-ranking of agricultural pesticides for chronic health effects in Yuma County, Arizona.
Sugeng, Anastasia J; Beamer, Paloma I; Lutz, Eric A; Rosales, Cecilia B
2013-10-01
With thousands of pesticides registered by the United States Environmental Protection Agency, it not feasible to sample for all pesticides applied in agricultural communities. Hazard-ranking pesticides based on use, toxicity, and exposure potential can help prioritize community-specific pesticide hazards. This study applied hazard-ranking schemes for cancer, endocrine disruption, and reproductive/developmental toxicity in Yuma County, Arizona. An existing cancer hazard-ranking scheme was modified, and novel schemes for endocrine disruption and reproductive/developmental toxicity were developed to rank pesticide hazards. The hazard-ranking schemes accounted for pesticide use, toxicity, and exposure potential based on chemical properties of each pesticide. Pesticides were ranked as hazards with respect to each health effect, as well as overall chronic health effects. The highest hazard-ranked pesticides for overall chronic health effects were maneb, metam-sodium, trifluralin, pronamide, and bifenthrin. The relative pesticide rankings were unique for each health effect. The highest hazard-ranked pesticides differed from those most heavily applied, as well as from those previously detected in Yuma homes over a decade ago. The most hazardous pesticides for cancer in Yuma County, Arizona were also different from a previous hazard-ranking applied in California. Hazard-ranking schemes that take into account pesticide use, toxicity, and exposure potential can help prioritize pesticides of greatest health risk in agricultural communities. This study is the first to provide pesticide hazard-rankings for endocrine disruption and reproductive/developmental toxicity based on use, toxicity, and exposure potential. These hazard-ranking schemes can be applied to other agricultural communities for prioritizing community-specific pesticide hazards to target decreasing health risk. Copyright © 2013 Elsevier B.V. All rights reserved.
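The use-toxicity-exposure combination described above can be sketched as a weighted product score per pesticide. All numbers below are hypothetical placeholders, not the paper's data, and a simple product is only one plausible way to combine the three factors:

```python
# hypothetical inputs: annual use (kg), toxicity weight and exposure-potential
# weight, both scaled to [0, 1]
pesticides = {
    "maneb":       {"use": 12000, "tox": 0.8, "exposure": 0.7},
    "trifluralin": {"use":  9000, "tox": 0.6, "exposure": 0.5},
    "bifenthrin":  {"use":  3000, "tox": 0.9, "exposure": 0.6},
}

def hazard_rank(table):
    """Composite hazard score (use x toxicity x exposure), ranked descending."""
    score = {name: p["use"] * p["tox"] * p["exposure"] for name, p in table.items()}
    return sorted(score, key=score.get, reverse=True)
```

Running the same ranking with health-effect-specific toxicity weights (cancer, endocrine disruption, reproductive/developmental) would give the per-endpoint rankings the study reports.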
Adams, Bradley J; Aschheim, Kenneth W
2016-01-01
Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study provides a comparison of the most commonly used conventional coding and sorting algorithms used in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and utilizes an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3 which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
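A percentage-of-matches sort, as used in the optimized algorithm above, can be sketched in a few lines. The tooth codes and records below are invented for illustration; the actual seven-code scheme and tie-breaking rules are not specified in the abstract:

```python
def match_score(postmortem, antemortem):
    """Fraction of per-tooth codes that agree between two dental records."""
    hits = sum(1 for pm, am in zip(postmortem, antemortem) if pm == am)
    return hits / len(postmortem)

def rank_candidates(pm_record, am_database):
    """Return antemortem record IDs ranked by descending match percentage."""
    return sorted(am_database,
                  key=lambda rid: match_score(pm_record, am_database[rid]),
                  reverse=True)
```

The appeal of such a simplified scheme is that scoring is a single pass per record pair, which scales well to databases of tens of thousands of records.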
Ross, David A; Moore, Edward Z
2013-09-01
As part of the National Resident Matching Program, programs must submit a rank order list of desired applicants. Despite the importance of this process and the numerous manifest limitations with traditional approaches, minimal research has been conducted to examine the accuracy of different ranking strategies. The authors developed the Moore Optimized Ordinal Rank Estimator (MOORE), a novel algorithm for ranking applicants that is based on college sports ranking systems. Because it is not possible to study the Match in vivo, the authors then designed the Recruitment Outcomes Simulation System (ROSS). This program was used to simulate a series of interview seasons and to compare MOORE and traditional approaches under different conditions. The accuracy of traditional ranking and the MOORE approach are equally and adversely affected with higher levels of intrarater variability. However, compared with traditional ranking methods, MOORE produces a more accurate rank order list as interrater variability increases. The present data demonstrate three key findings. First, they provide proof of concept that it is possible to scientifically test the accuracy of different rank methods used in the Match. Second, they show that small amounts of variability can have a significant adverse impact on the accuracy of rank order lists. Finally, they demonstrate that an ordinal approach may lead to a more accurate rank order list in the presence of interviewer bias. The ROSS-MOORE approach offers programs a novel way to optimize the recruitment process and, potentially, to construct a more accurate rank order list.
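The contrast between ordinal aggregation and raw-score averaging can be illustrated with a median-rank sketch. The interviewer data are invented, and median aggregation is only one simple ordinal approach, not necessarily the MOORE algorithm itself:

```python
import statistics

# each interviewer's ordinal ranking of four applicants (1 = best); hypothetical data
rankings = {
    "interviewer1": {"A": 1, "B": 2, "C": 3, "D": 4},
    "interviewer2": {"A": 2, "B": 1, "C": 4, "D": 3},
    "interviewer3": {"A": 1, "B": 3, "C": 2, "D": 4},
}

def aggregate(rankings):
    """Combine per-interviewer ordinal ranks by median; a single biased
    interviewer shifts the median far less than it would shift a mean score."""
    applicants = next(iter(rankings.values())).keys()
    med = {a: statistics.median(r[a] for r in rankings.values()) for a in applicants}
    return sorted(med, key=med.get)
```

This robustness to one outlying rater is the mechanism behind the finding that an ordinal approach degrades less as interrater variability grows.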
Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-03-01
A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4288, 4020) code with a high code rate of 0.937 is constructed by this novel construction scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4288, 4020) code is respectively 2.08 dB, 1.25 dB and 0.29 dB more than those of the classic RS(255, 239) code, the LDPC(32640, 30592) code and the irregular QC-LDPC(3843, 3603) code at a bit error rate (BER) of 10^-6. The irregular QC-LDPC(4288, 4020) code has lower encoding/decoding complexity compared with the LDPC(32640, 30592) code and the irregular QC-LDPC(3843, 3603) code. The proposed novel QC-LDPC(4288, 4020) code can be more suitable for the increasing development requirements of high-speed optical transmission systems.
Directory of Open Access Journals (Sweden)
SAJJAD ALIMEMON
2017-10-01
Full Text Available Multicarrier transmission has become a prominent technique in high-speed wireless communication systems due to its frequency diversity, small inter-symbol interference in multipath fading channels, simple equalizer structure, and high bandwidth efficiency. Nevertheless, in the time domain the multicarrier transmission signal has a high PAPR (Peak-to-Average Power Ratio), which translates into low power amplifier efficiency. To decrease the PAPR, a CCSLM (Convolutional Code Selective Mapping) scheme for multicarrier transmission with a high number of subcarriers is proposed in this paper. The proposed scheme is based on the SLM method and employs an interleaver and convolutional coding. Related works on PAPR reduction have considered either 128 or 256 subcarriers; however, the PAPR of a multicarrier transmission signal increases with the number of subcarriers. The proposed method achieves significant PAPR reduction for a higher number of subcarriers as well as better power amplifier efficiency. Simulation outcomes validate the usefulness of the proposed scheme.
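The baseline SLM idea that the scheme builds on can be sketched directly: rotate the frequency-domain block by several candidate phase sequences and transmit the candidate with the lowest time-domain PAPR. This sketch uses random phases and omits the convolutional coding and interleaving that distinguish CCSLM:

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a time-domain block, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def slm(symbols, n_candidates=8):
    """Selective mapping: try candidate phase rotations (the first being the
    identity) and keep the time-domain block with the lowest PAPR."""
    best, best_papr = None, np.inf
    for k in range(n_candidates):
        phases = (np.ones(symbols.size) if k == 0
                  else np.exp(1j * rng.uniform(0, 2 * np.pi, symbols.size)))
        x = np.fft.ifft(symbols * phases)
        if (p := papr_db(x)) < best_papr:
            best, best_papr = x, p
    return best, best_papr

# QPSK block with 512 subcarriers
sym = np.exp(1j * np.pi / 2 * rng.integers(0, 4, 512))
_, reduced = slm(sym)
```

In a real system the index of the chosen phase sequence must be signalled to the receiver; embedding that side information is one role the convolutional code can play.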
International Nuclear Information System (INIS)
Santamarina, A.
1991-01-01
A criticality-safety calculational scheme using the automated deterministic code system APOLLO-BISTRO has been developed. The cell/assembly code APOLLO is used mainly in LWR and HCR design calculations, and its validation spans a wide range of moderation ratios, including voided configurations. Its recent 99-group library and self-shielded cross-sections have been extensively qualified through critical experiments and PWR spent fuel analysis. The PIC self-shielding formalism enables a rigorous treatment of the fuel double heterogeneity in dissolver medium calculations. BISTRO is an optimized multidimensional SN code, part of the modular CCRR package used mainly in FBR calculations. The APOLLO-BISTRO scheme was applied to the 18 experimental benchmarks selected by the OECD/NEACRP Criticality Calculation Working Group. The calculation-experiment discrepancy was within ±1% in ΔK/K and always consistent with the experimental uncertainty margin. In the critical experiments corresponding to a dissolver-type benchmark, our tools computed a satisfactory Keff. In the VALDUC fuel storage experiments with hafnium plates, the computed Keff ranged between 0.994 and 1.003 for the various water gaps spacing the fuel clusters from the absorber plates. The APOLLO-KENOEUR statistical calculational scheme, based on the same self-shielded multigroup library, supplied consistent results within 0.3% in ΔK/K. (Author)
Construction of low dissipative high-order well-balanced filter schemes for non-equilibrium flows
International Nuclear Information System (INIS)
Wang Wei; Yee, H.C.; Sjoegreen, Bjoern; Magin, Thierry; Shu, Chi-Wang
2011-01-01
The goal of this paper is to generalize the well-balanced approach for non-equilibrium flow studied by Wang et al. (2009) to a class of low dissipative high-order shock-capturing filter schemes and to explore more advantages of well-balanced schemes in reacting flows. More general 1D and 2D reacting flow models and new examples of shock turbulence interactions are provided to demonstrate the advantage of well-balanced schemes. The class of filter schemes developed by Yee et al. (1999) , Sjoegreen and Yee (2004) and Yee and Sjoegreen (2007) consist of two steps, a full time step of spatially high-order non-dissipative base scheme and an adaptive non-linear filter containing shock-capturing dissipation. A good property of the filter scheme is that the base scheme and the filter are stand-alone modules in designing. Therefore, the idea of designing a well-balanced filter scheme is straightforward, i.e. choosing a well-balanced base scheme with a well-balanced filter (both with high-order accuracy). A typical class of these schemes shown in this paper is the high-order central difference schemes/predictor-corrector (PC) schemes with a high-order well-balanced WENO filter. The new filter scheme with the well-balanced property will gather the features of both filter methods and well-balanced properties: it can preserve certain steady-state solutions exactly; it is able to capture small perturbations, e.g. turbulence fluctuations; and it adaptively controls numerical dissipation. Thus it shows high accuracy, efficiency and stability in shock/turbulence interactions. Numerical examples containing 1D and 2D smooth problems, 1D stationary contact discontinuity problem and 1D turbulence/shock interactions are included to verify the improved accuracy, in addition to the well-balanced behavior.
Semantic association ranking schemes for information retrieval ...
Indian Academy of Sciences (India)
retrieval applications using term association graph representation ... Department of Computer Science and Engineering, Government College of ... Introduction ... leads to poor precision, e.g., model, python, and chip. ...... The approaches proposed in this paper focuses on the query-centric re-ranking of search results.
Celandroni, Nedo; Ferro, Erina; Mihal, Vlado; Potortì, Francesco
1992-01-01
This report describes the FODA system working at variable coding and bit rates (FODA/IBEA-TDMA). FODA/IBEA is the natural evolution of the FODA-TDMA satellite access scheme working at a 2 Mbit/s fixed rate with data 1/2 coded or uncoded. FODA-TDMA was used in the European SATINE-II experiment [8]. We note here that the term FODA/IBEA system covers both the FODA/IBEA-TDMA (1) satellite access scheme and the hardware prototype realised by the Marconi R.C. (U.K.). Both of them come fro...
International Nuclear Information System (INIS)
Zhong Xiaolin; Tatineni, Mahidhar
2003-01-01
The direct numerical simulation of receptivity, instability and transition of hypersonic boundary layers requires high-order accurate schemes because lower-order schemes do not have an adequate accuracy level to compute the large range of time and length scales in such flow fields. The main limiting factor in the application of high-order schemes to practical boundary-layer flow problems is the numerical instability of high-order boundary closure schemes on the wall. This paper presents a family of high-order non-uniform grid finite difference schemes with stable boundary closures for the direct numerical simulation of hypersonic boundary-layer transition. By using an appropriate grid stretching, and clustering grid points near the boundary, high-order schemes with stable boundary closures can be obtained. The order of the schemes ranges from first-order at the lowest, to the global spectral collocation method at the highest. The accuracy and stability of the new high-order numerical schemes is tested by numerical simulations of the linear wave equation and two-dimensional incompressible flat plate boundary layer flows. The high-order non-uniform-grid schemes (up to the 11th-order) are subsequently applied for the simulation of the receptivity of a hypersonic boundary layer to free stream disturbances over a blunt leading edge. The steady and unsteady results show that the new high-order schemes are stable and are able to produce high accuracy for computations of the nonlinear two-dimensional Navier-Stokes equations for the wall-bounded supersonic flow.
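The grid stretching that clusters points near the wall can be sketched with a tanh mapping of a uniform parameter grid. The tanh form and the stretching parameter below are illustrative assumptions; the paper does not specify its exact mapping:

```python
import numpy as np

def stretched_grid(n, beta=2.0):
    """tanh-stretched grid on [0, 1] clustering points near x = 0 (the wall).

    A uniform parameter s in [0, 1] is mapped through tanh so that spacing
    grows monotonically away from the wall; beta controls the clustering.
    """
    s = np.linspace(0.0, 1.0, n)
    return 1.0 + np.tanh(beta * (s - 1.0)) / np.tanh(beta)
```

Finite-difference weights computed on such a non-uniform grid (e.g. by local polynomial fitting) then give the boundary closures whose stability the paper analyses.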
Pongpirul, Krit
2011-01-01
In the Thai Universal Coverage scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group (DRG) reimbursement. Questionable quality of the submitted DRG codes has been of concern whereas knowledge about hospital coding practice has been lacking. The objectives of this thesis are (1) To explore hospital coding…
Peña, Alejandro; Del Carratore, Francesco; Cummings, Matthew; Takano, Eriko; Breitling, Rainer
2017-12-18
The rapid increase of publicly available microbial genome sequences has highlighted the presence of hundreds of thousands of biosynthetic gene clusters (BGCs) encoding valuable secondary metabolites. The experimental characterization of new BGCs is extremely laborious and struggles to keep pace with the in silico identification of potential BGCs. Therefore, the prioritisation of promising candidates among computationally predicted BGCs represents a pressing need. Here, we propose an output ordering and prioritisation system (OOPS) which helps sort identified BGCs by a wide variety of custom-weighted biological and biochemical criteria in a flexible and user-friendly interface. OOPS facilitates a judicious prioritisation of BGCs using G+C content, coding sequence length, gene number, cluster self-similarity and codon bias parameters, as well as enabling the user to rank BGCs based upon BGC type, novelty, and taxonomic distribution. Effective prioritisation of BGCs will help to reduce experimental attrition rates and improve the breadth of bioactive metabolites characterized.
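The custom-weighted ranking idea behind OOPS can be sketched in a few lines. Everything below is illustrative: the field names, weights, and min-max normalisation are assumptions made for the sketch, not the OOPS schema or its actual scoring rule.

```python
# Hypothetical BGC records; field names and values are illustrative,
# not the OOPS schema.
bgcs = [
    {"name": "bgc_A", "gc_content": 0.72, "cds_length": 48000, "gene_count": 31, "self_similarity": 0.10},
    {"name": "bgc_B", "gc_content": 0.55, "cds_length": 22000, "gene_count": 12, "self_similarity": 0.45},
    {"name": "bgc_C", "gc_content": 0.68, "cds_length": 35000, "gene_count": 25, "self_similarity": 0.20},
]

# User-chosen weights per criterion (negative weight = penalise).
weights = {"gc_content": 1.0, "cds_length": 0.5, "gene_count": 1.0, "self_similarity": -2.0}

def score(bgc):
    # Normalise each criterion to [0, 1] across the candidate set,
    # then sum the weighted normalised values.
    total = 0.0
    for key, w in weights.items():
        values = [b[key] for b in bgcs]
        lo, hi = min(values), max(values)
        norm = (bgc[key] - lo) / (hi - lo) if hi > lo else 0.0
        total += w * norm
    return total

ranked = sorted(bgcs, key=score, reverse=True)
print([b["name"] for b in ranked])  # ['bgc_A', 'bgc_C', 'bgc_B']
```

Changing the weight dictionary re-ranks the candidates without touching the data, which is the flexibility the abstract describes.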
Coding Scheme for Assessment of Students’ Explanations and Predictions
Directory of Open Access Journals (Sweden)
Mihael Gojkošek
2017-04-01
In the process of analyzing students' explanations and predictions of the interaction between a brightness enhancement film and a beam of white light, a need arose for an objective and reliable assessment instrument. Consequently, we developed a coding scheme that was largely inspired by rubrics for the self-assessment of scientific abilities. In the paper we present the grading categories that were integrated into the coding scheme and descriptions of the criteria used for the evaluation of students' work. We report the results of a reliability analysis of the new assessment tool and present some examples of its application.
The missing evaluation codes from order domain theory
DEFF Research Database (Denmark)
Andersen, Henning Ejnar; Geil, Olav
The Feng-Rao bound gives a lower bound on the minimum distance of codes defined by means of their parity check matrices. From the Feng-Rao bound it is clear how to improve a large family of codes by leaving out certain rows in their parity check matrices. In this paper we derive a simple lower...... generalized Hamming weight. We interpret our methods into the setting of order domain theory. In this way we fill in an obvious gap in the theory of order domains. The improved codes from the present paper are not in general equal to the Feng-Rao improved codes but the constructions are very much related....
Alternative Line Coding Scheme with Fixed Dimming for Visible Light Communication
Niaz, M. T.; Imdad, F.; Kim, H. S.
2017-01-01
An alternative line coding scheme called fixed-dimming on/off keying (FD-OOK) is proposed for visible-light communication (VLC). FD-OOK reduces the flickering caused by a VLC transmitter and can maintain a 50% dimming level. A simple encoder and decoder are proposed which generate codes in which the number of bits representing a one equals the number of bits representing a zero. By keeping the numbers of ones and zeros equal, the change in the brightness of the lighting can be minimized and kept constant at 50%, thereby reducing flickering in VLC. The performance of FD-OOK is analysed in terms of two parameters: spectral efficiency and power requirement.
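The balanced-codeword idea behind fixed dimming can be illustrated with a Manchester-style expansion. This is a hedged sketch of the principle (every codeword contains equal numbers of ones and zeros), not the actual FD-OOK codebook:

```python
def fd_ook_encode(bits):
    # Balanced expansion: 1 -> (1, 0), 0 -> (0, 1), so the transmitted
    # stream always contains exactly as many ones as zeros,
    # i.e. a constant 50% duty cycle (fixed dimming, no flicker drift).
    out = []
    for b in bits:
        out.extend((1, 0) if b else (0, 1))
    return out

def fd_ook_decode(symbols):
    # The data bit is the first element of each balanced pair.
    return [symbols[i] for i in range(0, len(symbols), 2)]

data = [1, 0, 0, 1, 1]
tx = fd_ook_encode(data)
assert sum(tx) * 2 == len(tx)      # exactly 50% ones
assert fd_ook_decode(tx) == data
```

The price of the guaranteed 50% level is a halved data rate, which is the spectral-efficiency trade-off the abstract analyses.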
Slab geometry spatial discretization schemes with infinite-order convergence
International Nuclear Information System (INIS)
Adams, M.L.; Martin, W.R.
1985-01-01
Spatial discretization schemes for the slab geometry discrete ordinates transport equation have received considerable attention in the past several years, with particular interest shown in developing methods that are more computationally efficient than standard schemes. Here the authors apply to the discrete ordinates equations a spectral method that is significantly more efficient than previously proposed schemes for high-accuracy calculations of homogeneous problems. This is a direct consequence of the exponential (infinite-order) convergence of spectral methods for problems with very smooth solutions. For heterogeneous problems, where smooth solutions do not exist and exponential convergence is not observed with spectral methods, a spectral element method is proposed which does exhibit exponential convergence.
The arbitrary order design code Tlie 1.0
International Nuclear Information System (INIS)
Zeijts, J. van; Neri, Filippo
1993-01-01
We describe the arbitrary order charged particle transfer map code TLIE. This code is a general 6D relativistic design code with a MAD-compatible input language which, among other features, implements user-defined functions and subroutines and nested fitting and optimization. First we describe the mathematics and physics in the code. Aside from generating maps for all the standard accelerator elements, we describe an efficient method for generating nonlinear transfer maps for realistic magnet models. We have implemented the method to arbitrary order in our accelerator design code for cylindrical current sheet magnets. We have also implemented a self-consistent space-charge approach as in CHARLIE. Subsequently we give a description of the input language and, finally, several examples from production runs, such as cases with stacked multipoles with overlapping fringe fields. (Author)
COSY INFINITY, a new arbitrary order optics code
International Nuclear Information System (INIS)
Berz, M.
1990-01-01
The new arbitrary order particle optics and beam dynamics code COSY INFINITY is presented. The code is based on differential algebraic (DA) methods. COSY INFINITY has a fully structured, object-oriented language environment. This provides a simple interface for the casual or novice user. At the same time, it offers the advanced user a very flexible and powerful tool for the utilization of DA. The power and generality of the environment is perhaps best demonstrated by the fact that the physics routines of COSY INFINITY are written in its own input language. The approach also facilitates the implementation of new features because special code generated by a user can be readily adopted into the source code. Besides being very compact in size, the code is also very fast, thanks to efficiently programmed elementary DA operations. For simple low-order problems, which can be handled by conventional codes, the speed of COSY INFINITY is comparable and in certain cases even higher.
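DA methods generalise the idea of computing with truncated Taylor expansions instead of bare numbers. As a first-order, one-variable sketch (COSY INFINITY works to arbitrary order in many variables), dual-number arithmetic propagates a value and its derivative through ordinary expressions:

```python
class Dual:
    """First-order truncated Taylor arithmetic: a value plus its derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        # Product rule applied automatically: (fg)' = f'g + fg'.
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)
    __rmul__ = __mul__

x = Dual(2.0, 1.0)       # seed the derivative d/dx = 1
f = x * x * x + 3 * x    # f(x) = x^3 + 3x
print(f.val, f.der)      # f(2) = 14, f'(2) = 15
```

Elementary operations on such objects are cheap, which is one reason a DA-based code can be fast even at high order.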
A joint multi-view plus depth image coding scheme based on 3D-warping
DEFF Research Database (Denmark)
Zamarin, Marco; Zanuttigh, Pietro; Milani, Simone
2011-01-01
Free viewpoint video applications and autostereoscopic displays require the transmission of multiple views of a scene together with depth maps. Current compression and transmission solutions just handle these two data streams as separate entities. However, depth maps contain key information on the scene structure that can be effectively exploited to improve the performance of multi-view coding schemes. In this paper we introduce a novel coding architecture that replaces the inter-view motion prediction operation with a 3D warping approach based on depth information to improve the coding...
Ranking of microRNA target prediction scores by Pareto front analysis.
Sahoo, Sudhakar; Albrecht, Andreas A
2010-12-01
Over the past ten years, a variety of microRNA target prediction methods has been developed, and many of the methods are constantly improved and adapted to recent insights into miRNA-mRNA interactions. In a typical scenario, different methods return different rankings of putative targets, even if the ranking is reduced to selected mRNAs that are related to a specific disease or cell type. For the experimental validation it is then difficult to decide in which order to process the predicted miRNA-mRNA bindings, since each validation is a laborious task and therefore only a limited number of mRNAs can be analysed. We propose a new ranking scheme that combines ranked predictions from several methods and - unlike standard thresholding methods - utilises the concept of Pareto fronts as defined in multi-objective optimisation. In the present study, we attempt a proof of concept by applying the new ranking scheme to hsa-miR-21, hsa-miR-125b, and hsa-miR-373 and prediction scores supplied by PITA and RNAhybrid. The scores are interpreted as a two-objective optimisation problem, and the elements of the Pareto front are ranked by the STarMir score with a subsequent re-calculation of the Pareto front after removal of the top-ranked mRNA from the basic set of prediction scores. The method is evaluated on validated targets of the three miRNAs, and the ranking is compared to scores from DIANA-microT and TargetScan. We observed that the new ranking method performs well and consistently, and the first validated targets are elements of Pareto fronts at a relatively early stage of the recurrent procedure, which encourages further research towards a higher-dimensional analysis of Pareto fronts. Copyright © 2010 Elsevier Ltd. All rights reserved.
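The core Pareto-front step can be sketched as follows, assuming two prediction scores per target where lower is better for both objectives (the study itself uses PITA and RNAhybrid scores):

```python
def pareto_front(points):
    # A point is on the front if no other point dominates it.
    # Dominates = better-or-equal in both objectives and strictly
    # better in at least one; lower is assumed better here.
    front = []
    for p in points:
        dominated = any(
            q != p and q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Toy two-objective scores for five putative targets.
scores = [(0.1, 0.9), (0.2, 0.3), (0.5, 0.2), (0.6, 0.6), (0.9, 0.1)]
print(pareto_front(scores))  # (0.6, 0.6) is dominated by (0.2, 0.3)
```

In the paper's recurrent procedure, the front elements are ranked by a third score, the top-ranked mRNA is removed, and the front is recomputed.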
A Coding Scheme to Analyse the Online Asynchronous Discussion Forums of University Students
Biasutti, Michele
2017-01-01
The current study describes the development of a content analysis coding scheme to examine transcripts of online asynchronous discussion groups in higher education. The theoretical framework comprises the theories regarding knowledge construction in computer-supported collaborative learning (CSCL) based on a sociocultural perspective. The coding…
SRS: Site ranking system for hazardous chemical and radioactive waste
International Nuclear Information System (INIS)
Rechard, R.P.; Chu, M.S.Y.; Brown, S.L.
1988-05-01
This report describes the rationale and presents instructions for a site ranking system (SRS). SRS ranks hazardous chemical and radioactive waste sites by scoring important and readily available factors that influence risk to human health. Using SRS, sites can be ranked for purposes of detailed site investigations. SRS evaluates the relative risk as a combination of potentially exposed population, chemical toxicity, and potential exposure of release from a waste site; hence, SRS uses the same concepts found in a detailed assessment of health risk. Basing SRS on the concepts of risk assessment tends to reduce the distortion of results found in other ranking schemes. More importantly, a clear logic helps ensure the successful application of the ranking procedure and increases its versatility when modifications are necessary for unique situations. Although one can rank sites using a detailed risk assessment, it is potentially costly because of data and resources required. SRS is an efficient approach to provide an order-of-magnitude ranking, requiring only readily available data (often only descriptive) and hand calculations. Worksheets are included to make the system easier to understand and use. 88 refs., 19 figs., 58 tabs
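The multiplicative combination of risk factors that SRS is built on can be sketched as below; the factor names and numbers are invented for illustration and do not reproduce the SRS worksheets or scoring tables:

```python
# Illustrative only: SRS uses worksheet-based factor scores; these
# names and values are invented for the sketch.
def site_score(population, toxicity, release_potential):
    # Relative risk as a product of exposed population, chemical
    # toxicity, and potential for release, mirroring the structure
    # of a simplified health-risk assessment.
    return population * toxicity * release_potential

sites = {
    "site_A": site_score(population=5000, toxicity=0.8, release_potential=0.3),
    "site_B": site_score(population=200, toxicity=0.9, release_potential=0.9),
    "site_C": site_score(population=20000, toxicity=0.1, release_potential=0.5),
}
ranked = sorted(sites, key=sites.get, reverse=True)
print(ranked)  # ['site_A', 'site_C', 'site_B']
```

Because only order-of-magnitude rankings are sought, readily available descriptive data and hand calculations suffice, as the abstract notes.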
Ranak, M S A Noman; Azad, Saiful; Nor, Nur Nadiah Hanim Binti Mohd; Zamli, Kamal Z
2017-01-01
Due to recent advancements and appealing applications, the purchase rate of smart devices is increasing rapidly. In parallel, security-related threats and attacks on these devices are increasing at an even greater rate. As a result, a considerable number of attacks have been noted in the recent past. To resist these attacks, many password-based authentication schemes have been proposed. However, most of these schemes are not screen-size independent, whereas smart devices come in different sizes. Specifically, they are not suitable for miniature smart devices due to the small screen size and/or lack of full-sized keyboards. In this paper, we propose a new screen-size-independent password-based authentication scheme, which also offers an affordable defense against shoulder surfing, brute force, and smudge attacks. In the proposed scheme, the Press Touch (PT)-a.k.a. Force Touch in Apple's MacBook, Apple Watch, and ZTE's Axon 7 phone; 3D Touch in iPhone 6 and 7; and so on-is transformed into a new type of code, named Press Touch Code (PTC). We design and implement three variants of it, namely mono-PTC, multi-PTC, and multi-PTC with Grid, on the Android Operating System. An in-lab experiment and a comprehensive survey have been conducted on 105 participants to demonstrate the effectiveness of the proposed scheme.
Numerical scheme of WAHA code for simulation of fast transients in piping systems
International Nuclear Information System (INIS)
Iztok Tiselj
2005-01-01
Full text of publication follows: A research project of the 5th EU programme entitled 'Two-phase flow water hammer transients and induced loads on materials and structures of nuclear power plants' (WAHA loads) was initiated in Fall 2000 and ended in Spring 2004. The numerical scheme used in the WAHA code is the responsibility of the Jozef Stefan Institute and is briefly described in the present work. The mathematical model is based on a 6-equation two-fluid model for inhomogeneous non-equilibrium two-phase flow, which can be written in vector form as: A ∂Ψ/∂t + B ∂Ψ/∂x = S. Hyperbolicity of the equations is a prerequisite and is ensured with a virtual mass term and an interfacial pressure term; however, the equations are not unconditionally hyperbolic. The flow-regime map used in the WAHA code consists of dispersed and horizontally stratified flow correlations. The closure laws describe interface heat and mass transfer (condensation model, flashing...), inter-phase friction, and wall friction. For the modeling of water hammer, additional terms due to pipe elasticity are considered. For the calculation of the thermodynamic state a new set of water-property subroutines was created. The numerical scheme of the WAHA code is based on Godunov characteristic upwind methods. Advanced numerical methods based on high-resolution shock-capturing schemes, originally developed for high-speed gas dynamics, are used. These schemes produce solutions with substantially reduced numerical diffusion and allow the accurate modeling of flow discontinuities. The code uses the non-conservative variables Ψ = (p, α, v_f, v_g, u_f, u_g); however, according to current experience, the non-conservation is not a major problem for fast transients like water hammers. The following operator splitting is used in the code: 1) convection and non-relaxation source terms: A ∂Ψ/∂t + B ∂Ψ/∂x = S_non-relaxation; 2) relaxation (inter-phase exchange) source
Yin, Jun; Yang, Yuwang; Wang, Lei
2016-04-01
Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering--CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that the value of the sparsity is known before starting each data gathering epoch; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of a feedback CDG scheme in which the sink node adaptively queries the interested nodes to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes--MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both datasets from ocean temperature and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme.
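The measurement-formation idea of compressed data gathering can be sketched as a random linear combination of node readings; this toy version ignores routing and the adaptive feedback/termination rules described in the paper:

```python
import random

# Sketch of CDG measurement formation: each measurement is a random
# +/-1 linear combination of all node readings. In a real WSN the sum
# is accumulated hop-by-hop, so each forwarding node transmits a
# single value instead of every raw reading.
random.seed(1)

N_NODES = 50
readings = [0.0] * N_NODES
for i in random.sample(range(N_NODES), 5):  # sparse field: 5 active sensors
    readings[i] = random.uniform(1.0, 10.0)

def take_measurement(readings):
    coeffs = [random.choice((-1, 1)) for _ in readings]
    return sum(c * x for c, x in zip(coeffs, readings)), coeffs

m, coeffs = take_measurement(readings)
```

A CS decoder at the sink recovers the sparse readings from a small number of such measurements; the paper's contribution is deciding adaptively how many to request.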
A General Symbolic PDE Solver Generator: Beyond Explicit Schemes
Directory of Open Access Journals (Sweden)
K. Sheshadri
2003-01-01
This paper presents an extension of our Mathematica- and MathCode-based symbolic-numeric framework for solving a variety of partial differential equation (PDE) problems. The main features of our earlier work, which implemented explicit finite-difference schemes, include the ability to handle (1) an arbitrary number of dependent variables, (2) arbitrary dimensionality, and (3) arbitrary geometry, as well as (4) developing finite-difference schemes to any desired order of approximation. In the present paper, extensions of this framework to implicit schemes and the method of lines are discussed. While C++ code is generated, using the MathCode system, for the implicit method, Modelica code is generated for the method of lines. The latter provides preliminary PDE support for the Modelica language. Examples illustrating the various aspects of the solver generator are presented.
Shams, Rifat Ara; Kabir, M. Hasnat; Ullah, Sheikh Enayet
2012-01-01
In this paper, the impact of a Forward Error Correction (FEC) code, namely a Trellis code with interleaver, on the performance of a wavelet-based MC-CDMA wireless communication system with the implementation of the Alamouti antenna diversity scheme has been investigated in terms of Bit Error Rate (BER) as a function of Signal-to-Noise Ratio (SNR) per bit. Simulation of the system under the proposed study has been done with M-ary modulation schemes (MPSK, MQAM and DPSK) over AWGN and Rayleigh fading channel inc...
International Nuclear Information System (INIS)
Resnik, W.M. II; Bosler, G.E.
1977-09-01
Many current reactor physics codes accept cross-section libraries in an isotope-ordered form, convert them with internal preprocessing routines to a group-ordered form, and then perform calculations using these group-ordered data. Occasionally, because of storage and time limitations, the preprocessing routines in these codes cannot convert very large multigroup isotope-ordered libraries. For this reason, the I2G code, i.e., ISOTXS to GRUPXS, was written to convert externally isotope-ordered cross section libraries in the standard file format called ISOTXS to group-ordered libraries in the standard format called GRUPXS. This code uses standardized multilevel data management routines which establish a strategy for the efficient conversion of large libraries. The I2G code is exportable contingent on access to, and an intimate familiarization with, the multilevel routines. These routines are machine dependent, and therefore must be provided by the importing facility. 6 figures, 3 tables
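The isotope-ordered to group-ordered conversion that I2G performs is essentially a transpose of the library layout. A toy dictionary-based sketch (the nuclide names and numbers are invented; this is not the ISOTXS/GRUPXS binary format):

```python
# Toy cross-section "library": data[isotope][group], mirroring an
# isotope-ordered file such as ISOTXS (values are illustrative).
isotope_ordered = {
    "U235": [1.2, 0.9, 0.4],   # one row per isotope, one entry per group
    "U238": [0.3, 0.2, 0.1],
    "H1":   [2.0, 1.5, 1.1],
}

def to_group_ordered(lib):
    # Group-ordered layout (as in GRUPXS): data[group][isotope].
    # For libraries too large for memory, this transpose must be
    # staged through external storage, which is what I2G's multilevel
    # data management routines handle.
    isotopes = list(lib)
    n_groups = len(next(iter(lib.values())))
    return [{iso: lib[iso][g] for iso in isotopes} for g in range(n_groups)]

grouped = to_group_ordered(isotope_ordered)
print(grouped[0]["U235"])  # 1.2
```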
International Nuclear Information System (INIS)
Singh, K.
1993-11-01
Using a statistical mechanical perturbation theory for the isotropic-nematic transition, we report a calculation of the second- and fourth-rank orientational order parameters and thermodynamic properties for a model system of prolate ellipsoids of revolution parameterized by their length-to-width ratio. The influence of the attractive potential, represented by a dispersion interaction, on a variety of thermodynamic properties is analysed. Inclusion of the fourth-rank orientational order parameter in the calculation slightly changes the transition parameters. (author). 7 refs, 1 tab
Evidence for modality-independent order coding in working memory.
Depoorter, Ann; Vandierendonck, André
2009-03-01
The aim of the present study was to investigate the representation of serial order in working memory, more specifically whether serial order is coded by means of a modality-dependent or a modality-independent order code. This was investigated by means of a series of four experiments based on a dual-task methodology in which one short-term memory task was embedded between the presentation and recall of another short-term memory task. Two aspects were varied in these memory tasks--namely, the modality of the stimulus materials (verbal or visuo-spatial) and the presence of an order component in the task (an order or an item memory task). The results of this study showed impaired primary-task recognition performance when both the primary and the embedded task included an order component, irrespective of the modality of the stimulus materials. If one or both of the tasks did not contain an order component, less interference was found. The results of this study support the existence of a modality-independent order code.
International Nuclear Information System (INIS)
Zhou, Lei; Luo, Kai Hong; Qin, Wenjin; Jia, Ming; Shuai, Shi Jin
2015-01-01
Highlights: • The MUSCL differencing scheme in an LES method is used to investigate liquid fuel spray and the combustion process. • Using MUSCL can accurately capture the gas-phase velocity distribution and liquid spray features. • A detailed chemistry mechanism with a parallel algorithm was used to calculate the combustion process. • Increasing oxygen concentration can decrease ignition delay time and flame LOL. - Abstract: The accuracy of large eddy simulation (LES) for turbulent combustion depends on suitably implemented numerical schemes and chemical mechanisms. In the original KIVA3V code, finite difference schemes such as QSOU (Quasi-Second-Order Upwind) and PDC (Partial Donor Cell Differencing) cannot achieve good results or even computational stability when using coarse grids, due to large numerical diffusion. In this paper, the MUSCL (Monotone Upstream-centered Schemes for Conservation Laws) differencing scheme is implemented into the KIVA3V-LES code to calculate the convective term. In the meantime, Lu's n-heptane reduced 58-species mechanism (Lu, 2011) is used to calculate chemistry with a parallel algorithm. Finally, improved models for spray injection are also employed. With these improvements, the KIVA3V-LES code is renamed KIVALES-CP (Chemistry with Parallel algorithm) in this study. The resulting code was used to study the gas-liquid two-phase jet and combustion under various diesel-engine-like conditions in a constant volume vessel. The results show that using the MUSCL scheme can accurately capture the spray shape and fuel vapor penetration even using a coarse grid, in comparison with the Sandia experimental data. Similarly good results are obtained for three single-component fuels, i-Octane (C8H18), n-Dodecane (C12H26), and n-Hexadecane (C16H34), with very different physical properties. Meanwhile the improved methodology is able to accurately predict ignition delay and flame lift-off length (LOL) under different oxygen concentrations from 10% to 21%.
A strong shock tube problem calculated by different numerical schemes
Lee, Wen Ho; Clancy, Sean P.
1996-05-01
Calculated results are presented for the solution of a very strong shock tube problem on a coarse mesh using (1) the MESA code, (2) the UNICORN code, (3) Schulz hydro, and (4) a modified TVD scheme. The first two codes are written in Eulerian coordinates, whereas methods (3) and (4) are in Lagrangian coordinates. The MESA and UNICORN codes are both second-order and use different monotonic advection methods to avoid the Gibbs phenomenon. Code (3) uses typical artificial viscosity for inviscid flow, whereas code (4) uses a modified TVD scheme. The test problem is a strong shock tube problem with a pressure ratio of 10^9 and a density ratio of 10^3 in an ideal gas. For the no-mass-matching case, Schulz hydro is better than the TVD scheme. In the case of mass-matching, there is no difference between them. The MESA and UNICORN results are nearly the same. However, the computed positions, such as the contact discontinuity (i.e. the material interface), are not as accurate as with the Lagrangian methods.
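TVD schemes of the kind compared above rely on slope limiting. The classic minmod limiter, shown here as a generic sketch rather than the authors' specific modified variant, suppresses the oscillations (Gibbs phenomenon) that unlimited slopes produce at shocks:

```python
def minmod(a, b):
    # TVD slope limiter: returns zero at extrema (one-sided slopes of
    # opposite sign), otherwise the smaller-magnitude one-sided slope.
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

# Limited slopes for a row of cell averages u (uniform grid, dx = 1).
u = [0.0, 0.0, 1.0, 4.0, 4.0, 3.0]
slopes = [minmod(u[i] - u[i - 1], u[i + 1] - u[i]) for i in range(1, len(u) - 1)]
print(slopes)  # slopes vanish at the plateau edges, avoiding overshoot
```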
DEFF Research Database (Denmark)
Martins, Bo; Forchhammer, Søren
1998-01-01
Presently, sequential tree coders are the best general purpose bilevel image coders and the best coders of halftoned images. The current ISO standard, Joint Bilevel Image Experts Group (JBIG), is a good example. A sequential tree coder encodes the data by feeding estimates of conditional...... is one order of magnitude slower than JBIG, obtains excellent and highly robust compression performance. A multipass free tree coding scheme produces superior compression results for all test images. A multipass free template coding scheme produces significantly better results than JBIG for difficult...... images such as halftones. By utilizing randomized subsampling in the template selection, the speed becomes acceptable for practical image coding...
Efficient nonrigid registration using ranked order statistics
DEFF Research Database (Denmark)
Tennakoon, Ruwan B.; Bab-Hadiashar, Alireza; de Bruijne, Marleen
2013-01-01
Non-rigid image registration techniques are widely used in medical imaging applications. Due to the high computational complexities of these techniques, finding an appropriate registration method to both reduce the computation burden and increase the registration accuracy has become an intense area of research. In this paper we propose a fast and accurate non-rigid registration method for intra-modality volumetric images. Our approach exploits the information provided by an order statistics based segmentation method to find the important regions for registration, and uses an appropriate sampling scheme to target those areas and reduce the registration computation time. A unique advantage of the proposed method is its ability to identify the point of diminishing returns and stop the registration process. Our experiments on registration of real lung CT images, with expert annotated landmarks, show...
High Order Differential Frequency Hopping: Design and Analysis
Directory of Open Access Journals (Sweden)
Yong Li
2015-01-01
This paper considers spectrally efficient differential frequency hopping (DFH) system design. Relying on time-frequency diversity over a large spectrum and high-speed frequency hopping, DFH systems are robust against hostile jamming interference. However, the spectral efficiency of conventional DFH systems is very low because only the frequency of each channel is used. To improve the system capacity, in this paper we propose an innovative high order differential frequency hopping (HODFH) scheme. Unlike traditional DFH, where the message is carried by the frequency relationship between adjacent hops using first-order differential coding, in HODFH the message is carried by the frequency and phase relationship using second- or higher-order differential coding. As a result, system efficiency is increased significantly, since the additional information transmission is achieved by the higher-order differential coding at no extra cost in either bandwidth or power. Quantitative performance analysis of the proposed scheme demonstrates that transmission through the frequency and phase relationship using second- or higher-order differential coding essentially introduces another dimension to the signal space, and the corresponding coding gain can increase the system efficiency.
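First-order DFH can be sketched with a toy invertible G-function mapping the previous frequency and the current data symbol to the next frequency. The function and parameters below are invented for illustration (HODFH additionally encodes information in the phase relationship, which this sketch omits):

```python
# Toy first-order DFH: the transmitted frequency index is a function G
# of the previous frequency and the current data symbol.
N_CHANNELS = 64       # illustrative channel count
SYMBOLS = 4           # 2 data bits per hop

def g_function(prev_freq, symbol):
    # A simple invertible transition rule, not a published G-function.
    return (4 * prev_freq + symbol + 1) % N_CHANNELS

def encode(symbols, f0=0):
    freqs, f = [], f0
    for s in symbols:
        f = g_function(f, s)
        freqs.append(f)
    return freqs

def decode(freqs, f0=0):
    # Recover each symbol by inverting G given the previous frequency.
    out, prev = [], f0
    for f in freqs:
        out.append((f - 4 * prev - 1) % N_CHANNELS)
        prev = f
    return out

data = [3, 0, 2, 1, 1]
assert decode(encode(data)) == data
```

Because the data ride on the hop-to-hop relationship rather than on any single frequency, a jammer must track the whole hopping trajectory, which is the robustness property the abstract cites.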
High-order asynchrony-tolerant finite difference schemes for partial differential equations
Aditya, Konduri; Donzis, Diego A.
2017-12-01
Synchronizations of processing elements (PEs) in massively parallel simulations, which arise due to communication or load imbalances between PEs, significantly affect the scalability of scientific applications. We have recently proposed a method based on finite-difference schemes to solve partial differential equations in an asynchronous fashion - synchronization between PEs is relaxed at a mathematical level. While standard schemes can maintain their stability in the presence of asynchrony, their accuracy is drastically affected. In this work, we present a general methodology to derive asynchrony-tolerant (AT) finite difference schemes of arbitrary order of accuracy, which can maintain their accuracy when synchronizations are relaxed. We show that there are several choices available in selecting a stencil to derive these schemes and discuss their effect on numerical and computational performance. We provide a simple classification of schemes based on the stencil and derive schemes that are representative of different classes. Their numerical error is rigorously analyzed within a statistical framework to obtain the overall accuracy of the solution. Results from numerical experiments are used to validate the performance of the schemes.
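The accuracy loss that AT schemes are designed to avoid can be demonstrated with a plain central difference fed one stale (delayed) neighbour value. This sketch illustrates the problem, not the AT remedy; the step sizes are arbitrary choices:

```python
import math

# A standard second-order central difference loses accuracy when one
# neighbour value lags by a time step dt, as happens when PE
# synchronisation is relaxed. Stale data are modelled here by shifting
# the argument of the right neighbour.
h, dt = 1e-3, 1e-5
x = 0.5
f = math.sin

exact = math.cos(x)
synchronous = (f(x + h) - f(x - h)) / (2 * h)
stale = (f(x + h - dt) - f(x - h)) / (2 * h)   # right neighbour is old

print(abs(synchronous - exact))  # O(h^2): tiny
print(abs(stale - exact))        # dominated by O(dt/h): much larger
```

AT schemes recover the design order by widening the stencil in space and time so the delayed values are accounted for in the truncation error analysis.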
Validation of thermalhydraulic codes
International Nuclear Information System (INIS)
Wilkie, D.
1992-01-01
Thermalhydraulic codes need to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry, where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to incorrect estimation of the consequences of accidents, and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. ''agreement is within 10%''. A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)
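One minimal quantitative figure of merit of the kind argued for above is the RMS relative deviation between code predictions and experiment; the code names and data below are invented for illustration, and the abstract does not specify which metric the paper itself uses:

```python
def rms_relative_error(predicted, measured):
    # Root-mean-square of the pointwise relative deviations: a single
    # number that replaces statements like "agreement is within 10%".
    terms = [((p - m) / m) ** 2 for p, m in zip(predicted, measured)]
    return (sum(terms) / len(terms)) ** 0.5

experiment = [100.0, 250.0, 400.0]
codes = {
    "code_A": [98.0, 255.0, 390.0],
    "code_B": [110.0, 240.0, 430.0],
}
ranking = sorted(codes, key=lambda c: rms_relative_error(codes[c], experiment))
print(ranking)  # smallest error first: codes ranked in order of merit
```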
Product code optimization for determinate state LDPC decoding in robust image transmission.
Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G
2006-08-01
We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.
Stoenescu, Tudor M.; Woo, Simon S.
2009-01-01
In this work, we consider information dissemination and sharing in a distributed peer-to-peer (P2P) highly dynamic communication network. In particular, we explore a network coding technique for transmission and a rank-based peer selection method for network formation. The combined approach has been shown to improve information sharing and delivery to all users when considering the challenges imposed by space network environments.
Darazi, R.; Gouze, A.; Macq, B.
2009-01-01
Reproducing a natural, real scene as we see it in the everyday world is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information must be displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed in which the original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by designing an efficient transform that reduces the redundancy in the stereo image pair, an approach inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
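The lifting idea underlying this coder can be sketched in one dimension. The toy example below uses a generic Haar-style split/predict/update pair, not the paper's disparity-compensated hybrid predictor; it shows the property lifting guarantees by construction: whatever predictor is substituted, running the steps in reverse reconstructs the signal exactly, which is what makes lossless coding possible.

```python
def lifting_forward(x):
    """One level of Haar-like lifting: split, predict, update."""
    even, odd = x[0::2], x[1::2]                         # split
    detail = [o - e for e, o in zip(even, odd)]          # predict: even predicts odd
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update: preserve local mean
    return approx, detail

def lifting_inverse(approx, detail):
    """Invert the lifting steps in reverse order."""
    even = [a - d / 2 for a, d in zip(approx, detail)]   # undo update
    odd = [e + d for e, d in zip(even, detail)]          # undo predict
    return [v for pair in zip(even, odd) for v in pair]  # merge

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, detail = lifting_forward(signal)
assert lifting_inverse(approx, detail) == signal         # lossless by construction
```

In the stereo setting, one image plays the role of the even samples and the other of the odd samples, and the predict step becomes disparity compensation plus luminance correction.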
International Nuclear Information System (INIS)
Kim, Jong Tae; Ha, Kwang Soon; Kim, Hwan Yeol; Park, Rae Joon; Song, Jin Ho
2010-01-01
, unsteady turbulence models based on filtered or volume-averaged governing equations have been applied for the turbulent natural convection heat transfer. Tran et al. used large eddy simulation (LES) for the analysis of molten corium coolability. The numerical instability is related to the gravitational force on the molten corium. A staggered grid method on an orthogonal structured grid is used to prevent pressure oscillations in the numerical solution. However, it is impractical to use a structured grid for a partially filled spherical pool, a cone-type pool or a triangular pool. An unstructured grid is an alternative for nonrectangular pools. In order to remove the checkerboard-like pressure oscillation on an unstructured grid, a special interpolation scheme is required. In order to evaluate the in-vessel coolability of the molten corium for a pressurized water reactor (PWR), the thermo-hydraulic analysis code LILAC was developed. LILAC supports multi-layered conjugate heat transfer with melt solidification, and the solution domain can be 2-dimensional, axisymmetric, or 3-dimensional. LILAC uses unstructured mesh technology to discretize non-rectangular pool geometries. With limited manpower to maintain the code, it has become increasingly difficult to implement new physical and numerical models as the code grows more complex. Recently, the open-source CFD code OpenFOAM has been released and applied in many academic and engineering areas. OpenFOAM is based on numerical schemes very similar to those of LILAC. It provides many physical and numerical models for multi-physics analysis, and because it is object-oriented, new models can be implemented easily and quickly, with a lower likelihood of coding errors. This is a very attractive feature for the development, validation and maintenance of an analysis code.
In contrast to commercial CFD codes, it is possible to modify and add
Ordering schemes for parallel processing of certain mesh problems
International Nuclear Information System (INIS)
O'Leary, D.
1984-01-01
In this work, some ordering schemes for mesh points are presented which enable algorithms such as the Gauss-Seidel or SOR iteration to be performed efficiently for the nine-point operator finite difference method on computers consisting of a two-dimensional grid of processors. Convergence results are presented for the discretization of u_xx + u_yy on a uniform mesh over a square, showing that the spectral radius of the iteration for these orderings is no worse than that for the standard row-by-row ordering of mesh points. Further applications of these mesh point orderings to network problems, more general finite difference operators, and picture processing problems are noted.
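The paper's orderings target the nine-point operator; as a minimal sketch of the general idea, the code below uses the classic red-black ordering for the standard five-point Laplacian (an assumption of this illustration, not the paper's scheme). All points of one colour depend only on points of the other colour, so each half-sweep can run fully in parallel on a grid of processors while keeping Gauss-Seidel convergence.

```python
import numpy as np

def redblack_gauss_seidel(f, h, iters=300):
    """Gauss-Seidel for the 5-point discretisation of -(u_xx + u_yy) = f
    on the unit square with u = 0 on the boundary, using a red-black
    ordering: all 'red' points (i + j even) are updated first, then all
    'black' points, so every point of one colour can update in parallel."""
    n = f.shape[0]
    u = np.zeros_like(f)
    for _ in range(iters):
        for colour in (0, 1):
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    if (i + j) % 2 == colour:
                        u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                          + u[i, j - 1] + u[i, j + 1]
                                          + h * h * f[i, j])
    return u

# manufactured solution u = sin(pi x) sin(pi y)
n = 17
h = 1.0 / (n - 1)
xg = np.linspace(0.0, 1.0, n)
xx, yy = np.meshgrid(xg, xg, indexing="ij")
f = 2 * np.pi**2 * np.sin(np.pi * xx) * np.sin(np.pi * yy)
u = redblack_gauss_seidel(f, h)
err = np.abs(u - np.sin(np.pi * xx) * np.sin(np.pi * yy)).max()
```

The convergence result quoted in the abstract says precisely that reorderings like this cost nothing in spectral radius relative to the row-by-row sweep.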
Ship detection in satellite imagery using rank-order greyscale hit-or-miss transforms
Energy Technology Data Exchange (ETDEWEB)
Harvey, Neal R [Los Alamos National Laboratory; Porter, Reid B [Los Alamos National Laboratory; Theiler, James [Los Alamos National Laboratory
2010-01-01
Ship detection from satellite imagery has great utility in various communities: knowing where ships are, and their types, provides useful intelligence information. However, detecting and recognizing ships is a difficult problem, and existing techniques suffer from too many false alarms. We describe approaches we have taken in trying to build ship detection algorithms with reduced false alarms. Our approach uses a version of the greyscale morphological hit-or-miss transform. While this transform is well known and used in its standard form, we use a version in which a rank-order selection replaces the standard maximum and minimum operators in the dilation and erosion parts of the transform. This provides some slack in the fitting that the algorithm employs and a method for tuning the algorithm's performance for particular detection problems. We describe our algorithms, show the effect of the rank-order parameter on the algorithm's performance, and illustrate the use of this approach on real ship detection problems with panchromatic satellite imagery.
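The core substitution is easy to show in code. The sketch below (an illustration with a flat square window and a naive loop, not the authors' implementation) replaces min/max with a general order statistic: rank 0 is greyscale erosion, the top rank is dilation, and intermediate ranks give the "slack" that tolerates a few outlier pixels inside the structuring element.

```python
import numpy as np

def rank_order_filter(img, size, rank):
    """Rank-order filter: at each pixel, sort the values in a size x size
    window and return the value at position `rank` (0 = min ... size*size-1
    = max). rank = 0 reproduces greyscale erosion, the last rank dilation."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + size, j:j + size].ravel()
            out[i, j] = np.sort(window)[rank]
    return out

# a flat bright patch corrupted by a single dark outlier pixel
img = np.full((5, 5), 10.0)
img[2, 2] = 0.0
eroded = rank_order_filter(img, 3, rank=0)    # strict erosion: outlier propagates
relaxed = rank_order_filter(img, 3, rank=1)   # one pixel of slack: outlier ignored
```

Strict erosion is dragged to 0 wherever the window touches the outlier, while the rank-1 version returns 10 everywhere, which is the robustness that reduces false alarms in the fitting step.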
A strong shock tube problem calculated by different numerical schemes
International Nuclear Information System (INIS)
Lee, W.H.; Clancy, S.P.
1996-01-01
Calculated results are presented for the solution of a very strong shock tube problem on a coarse mesh using (1) the MESA code, (2) the UNICORN code, (3) Schulz hydro, and (4) a modified TVD scheme. The first two codes are written in Eulerian coordinates, whereas methods (3) and (4) are in Lagrangian coordinates. The MESA and UNICORN codes are both second order and use different monotonic advection methods to avoid the Gibbs phenomenon. Code (3) uses typical artificial viscosity for inviscid flow, whereas code (4) uses a modified TVD scheme. The test problem is a strong shock tube problem with a pressure ratio of 10^9 and a density ratio of 10^3 in an ideal gas. In the non-mass-matched case, Schulz hydro is better than the TVD scheme; with mass matching, there is no difference between them. The MESA and UNICORN results are nearly the same; however, computed positions such as that of the contact discontinuity (i.e., the material interface) are not as accurate as those from the Lagrangian methods. copyright 1996 American Institute of Physics
A Novel Cooperation-Based Network Coding Scheme for Walking Scenarios in WBANs
Directory of Open Access Journals (Sweden)
Hongyun Zhang
2017-01-01
Full Text Available In Wireless Body Area Networks (WBANs), the tradeoff between network throughput and energy efficiency remains a key challenge. Most current transmission schemes try to cope with the challenge from the perspective of general Wireless Sensor Networks (WSNs), which may not take the peculiarities of WBAN channels into account. In this paper, we take advantage of the correlation of on-body channels in walking scenarios to achieve a better tradeoff between throughput and energy consumption. We first analyze the characteristics of on-body channels based on realistic channel gain datasets collected by our customized wireless transceivers in walking scenarios. The analytical results confirm the rationale of our newly proposed transmission scheme, A3NC, which explores the combination of the aggregative allocation (AA) mechanism in the MAC layer and the Analog Network Coding (ANC) technique in the PHY layer. Both theoretical analyses and simulation results show that the A3NC scheme achieves significant improvements in upload throughput and energy efficiency compared to conventional approaches.
An overview of J estimation schemes developed for the RSE-M code
International Nuclear Information System (INIS)
Delliou, Patrick Le; Sermage, Jean-Philippe; Barthelet, Bruno; Michel, Bruno; Gilles, Philippe
2003-01-01
The RSE-M Code provides rules and requirements for in-service inspection of French Pressurized Water Reactor power plant components. The RSE-M Code gives non mandatory guidance for analytical evaluation of flaws. To calculate the stress intensity factors in pipes and shells containing semi-elliptical surface defects, influence coefficients are given for a wide range of geometrical parameters. To calculate the J integral for surface cracks in pipes and elbows, simplified methods have been developed for mechanical loads (in-plane bending and torsion moments, pressure), thermal loads as well as for the combination of these loads. This paper presents an overview of the J-estimation schemes presently available: a circumferential surface crack in a straight pipe (already included in the 2000 Addenda of the Code), a circumferential surface crack in a tapered transition, a longitudinal surface crack in a straight pipe, a longitudinal surface crack in the mid-section of an elbow. (author)
Effectiveness of journal ranking schemes as a tool for locating information.
Directory of Open Access Journals (Sweden)
Michael J Stringer
Full Text Available BACKGROUND: The rise of electronic publishing, preprint archives, blogs, and wikis is raising concerns among publishers, editors, and scientists about the present day relevance of academic journals and traditional peer review. These concerns are especially fuelled by the ability of search engines to automatically identify and sort information. It appears that academic journals can only remain relevant if acceptance of research for publication within a journal allows readers to infer immediate, reliable information on the value of that research. METHODOLOGY/PRINCIPAL FINDINGS: Here, we systematically evaluate the effectiveness of journals, through the work of editors and reviewers, at evaluating unpublished research. We find that the distribution of the number of citations to a paper published in a given journal in a specific year converges to a steady state after a journal-specific transient time, and demonstrate that in the steady state the logarithm of the number of citations has a journal-specific typical value. We then develop a model for the asymptotic number of citations accrued by papers published in a journal that closely matches the data. CONCLUSIONS/SIGNIFICANCE: Our model enables us to quantify both the typical impact and the range of impacts of papers published in a journal. Finally, we propose a journal-ranking scheme that maximizes the efficiency of locating high impact research.
Multimodal biometric system using rank-level fusion approach.
Monwar, Md Maruf; Gavrilova, Marina L
2009-08-01
In many real-world applications, unimodal biometric systems often face significant limitations due to sensitivity to noise, intraclass variability, data quality, nonuniversality, and other factors. Attempting to improve the performance of individual matchers in such situations may not prove to be highly effective. Multibiometric systems seek to alleviate some of these problems by providing multiple pieces of evidence of the same identity. These systems help achieve an increase in performance that may not be possible using a single-biometric indicator. This paper presents an effective fusion scheme that combines information presented by multiple domain experts based on the rank-level fusion integration method. The developed multimodal biometric system possesses a number of unique qualities, starting from utilizing principal component analysis and Fisher's linear discriminant methods for individual matchers (face, ear, and signature) identity authentication and utilizing the novel rank-level fusion method in order to consolidate the results obtained from different biometric matchers. The ranks of individual matchers are combined using the highest rank, Borda count, and logistic regression approaches. The results indicate that fusion of individual modalities can improve the overall performance of the biometric system, even in the presence of low quality data. Insights on multibiometric design using rank-level fusion and its performance on a variety of biometric databases are discussed in the concluding section.
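Of the three combination rules mentioned (highest rank, Borda count, logistic regression), the Borda count is the simplest to make concrete. The sketch below is a generic illustration with hypothetical matcher outputs and identity names, not the paper's system: each matcher's ranking awards points in inverse proportion to position, and the totals define the fused ranking.

```python
def borda_fusion(rankings):
    """Combine identity rankings from several matchers by Borda count.

    Each ranking is a list of candidate ids, best first. A candidate
    receives (n - position) points from each matcher; the fused ranking
    sorts candidates by total points, highest first."""
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for pos, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (n - pos)
    return sorted(scores, key=lambda c: -scores[c])

# three hypothetical matchers (face, ear, signature) ranking four identities
face = ["alice", "bob", "carol", "dave"]
ear = ["bob", "alice", "dave", "carol"]
sig = ["alice", "carol", "bob", "dave"]
fused = borda_fusion([face, ear, sig])
```

Here "alice" wins (11 points) even though one matcher ranked "bob" first, which is the intended effect: consensus across modalities outweighs any single noisy matcher.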
Designing synchronization schemes for chaotic fractional-order unified systems
International Nuclear Information System (INIS)
Wang Junwei; Zhang Yanbin
2006-01-01
Synchronization in chaotic fractional-order differential systems is studied both theoretically and numerically. Two schemes are designed to achieve chaos synchronization of so-called unified chaotic systems and the corresponding numerical algorithms are established. Some sufficient conditions on synchronization are also derived based on the Laplace transformation theory. Computer simulations are used for demonstration
Institute of Scientific and Technical Information of China (English)
YUAN Dongfeng; WANG Chengxiang; YAO Qi; CAO Zhigang
2001-01-01
Based on the "capacity rule", the performance of multilevel coding (MLC) schemes with different set partitioning strategies and decoding methods in AWGN and Rayleigh fading channels is investigated, in which BCH codes are chosen as component codes and 8ASK modulation is used. Numerical results indicate that the MLC scheme with the UP strategy can obtain optimal performance in AWGN channels, and BP is the best mapping strategy for Rayleigh fading channels. The BP strategy is robust in both kinds of channels for realizing an optimum MLC system. Multistage decoding (MSD) is a sub-optimal decoding method of MLC for both channels. For the Ungerboeck partitioning (UP) and mixed partitioning (MP) strategies, MSD is strongly recommended for the MLC system, while for the BP strategy, PDL is suggested as a simple decoding method compared with MSD.
PageRank tracker: from ranking to tracking.
Gong, Chen; Fu, Keren; Loza, Artur; Wu, Qiang; Liu, Jia; Yang, Jie
2014-06-01
Video object tracking is widely used in many real-world applications, and it has been extensively studied for over two decades. However, tracking robustness is still an issue in most existing methods, due to the difficulties with adaptation to environmental or target changes. In order to improve adaptability, this paper formulates the tracking process as a ranking problem, and the PageRank algorithm, which is a well-known webpage ranking algorithm used by Google, is applied. Labeled and unlabeled samples in tracking application are analogous to query webpages and the webpages to be ranked, respectively. Therefore, determining the target is equivalent to finding the unlabeled sample that is the most associated with existing labeled set. We modify the conventional PageRank algorithm in three aspects for tracking application, including graph construction, PageRank vector acquisition and target filtering. Our simulations with the use of various challenging public-domain video sequences reveal that the proposed PageRank tracker outperforms mean-shift tracker, co-tracker, semiboosting and beyond semiboosting trackers in terms of accuracy, robustness and stability.
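The tracker builds on the conventional PageRank algorithm, which can be stated in a few lines. The sketch below is the standard power iteration on a toy link graph (an assumption of this illustration; it is not the modified graph construction, vector acquisition, or target filtering the paper introduces), and it assumes a small dense adjacency matrix with no dangling nodes.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """PageRank by power iteration.

    adj[i][j] = 1 if node j links to node i. Columns are normalised so the
    matrix is column-stochastic; the damping term models random jumps."""
    A = np.asarray(adj, dtype=float)
    A = A / A.sum(axis=0)              # column-normalise out-links
    n = A.shape[0]
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - damping) / n + damping * A @ r
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# tiny link graph: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1
adj = [[0, 0, 1],
       [1, 0, 1],
       [0, 1, 0]]
r = pagerank(adj)
```

In the tracking analogy, labeled samples play the role of query pages and candidate samples the pages to be ranked; the highest-ranked unlabeled sample (here, node 1, which receives the most link mass) is declared the target.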
Progress in the Development of J Estimation Schemes for the RSE-M Code
International Nuclear Information System (INIS)
Le Delliou, Patrick; Sermage, Jean-Philippe; Cambefort, Pierre; Barthelet, Bruno; Gilles, Philippe; Michel, Bruno
2002-01-01
The RSE-M Code provides rules and requirements for in-service inspection of French Pressurized Water Reactor power plant components. The Code gives non mandatory guidance for analytical evaluation of flaws. To calculate the stress intensity factors in pipes and shells containing semi-elliptical surface defects, influence coefficients are given for a wide range of geometrical parameters. To calculate the J integral for a circumferential surface crack in a straight pipe, simplified methods are available in the present version of the Code (2000 Addenda) for mechanical loads (in-plane bending and torsion moments, pressure), thermal loads as well as for the combination of these loads. This paper presents the recent advances in the development of J-estimation schemes for two configurations: a longitudinal surface crack in a straight pipe, a longitudinal surface crack in the mid-section of an elbow. (authors)
White, Jeffrey A.; Baurle, Robert A.; Fisher, Travis C.; Quinlan, Jesse R.; Black, William S.
2012-01-01
The 2nd-order upwind inviscid flux scheme implemented in the multi-block, structured grid, cell centered, finite volume, high-speed reacting flow code VULCAN has been modified to reduce numerical dissipation. This modification was motivated by the desire to improve the code's ability to perform large eddy simulations. The reduction in dissipation was accomplished through a hybridization of non-dissipative and dissipative discontinuity-capturing advection schemes that reduces numerical dissipation while maintaining the ability to capture shocks. A methodology for constructing hybrid-advection schemes that blends nondissipative fluxes consisting of linear combinations of divergence and product rule forms discretized using 4th-order symmetric operators, with dissipative, 3rd or 4th-order reconstruction based upwind flux schemes was developed and implemented. A series of benchmark problems with increasing spatial and fluid dynamical complexity were utilized to examine the ability of the candidate schemes to resolve and propagate structures typical of turbulent flow, their discontinuity capturing capability and their robustness. A realistic geometry typical of a high-speed propulsion system flowpath was computed using the most promising of the examined schemes and was compared with available experimental data to demonstrate simulation fidelity.
International Nuclear Information System (INIS)
Delfin L, A.; Alonso V, G.; Valle G, E. del
2003-01-01
In this work two nodal finite element schemes are presented, one of second order and the other of third order of accuracy, which allow the radial power distribution to be determined from the corresponding reactivities. The schemes developed here take as their starting point the equation developed by Driscoll et al., which is based on the 1-1/2 energy group diffusion approximation. This equation relates the power fraction of an assembly to its reactivity and to the power fractions and reactivities of the surrounding assemblies. Driscoll and collaborators solve this equation approximately by assuming that the reactivity of each assembly is a linear function of the fuel burnup. The spatial approximation is carried out with the classical mesh-centered finite difference technique. Although the resulting algebraic system can be solved without further considerations, some additional assumptions and adjustment parameters are introduced that allow results comparable to those of three-dimensional analyses, thereby reducing the error obtained when the results are compared with those of a production code such as CASMO. In the two schemes presented here the same Driscoll approximations were used, yielding errors of 10% and 5% for the second- and third-order schemes, respectively, for a test case built from data of Cycle 1 of Unit 1 of the Laguna Verde nuclear power plant. These errors were obtained by comparison with a computer program based on the response matrix method. The aim is to provide a fast and efficient tool for multicycle analysis in fuel management. However, the model has problems in accurately predicting the average core burnup and the burnup by batch. (Author)
A third-order moving mesh cell-centered scheme for one-dimensional elastic-plastic flows
Cheng, Jun-Bo; Huang, Weizhang; Jiang, Song; Tian, Baolin
2017-11-01
A third-order moving mesh cell-centered scheme without the remapping of physical variables is developed for the numerical solution of one-dimensional elastic-plastic flows with the Mie-Grüneisen equation of state, the Wilkins constitutive model, and the von Mises yielding criterion. The scheme combines the Lagrangian method with the MMPDE moving mesh method and adaptively moves the mesh to better resolve shock and other types of waves while preventing the mesh from crossing and tangling. It can be viewed as a direct arbitrary Lagrangian-Eulerian method but can also be degenerated to a purely Lagrangian scheme. It treats the relative velocity of the fluid with respect to the mesh as constant in time between time steps, which allows high-order approximation of free boundaries. A time dependent scaling is used in the monitor function to avoid possible sudden movement of the mesh points due to the creation or diminishing of shock and rarefaction waves or the steepening of those waves. A two-rarefaction Riemann solver with elastic waves is employed to compute the Godunov values of the density, pressure, velocity, and deviatoric stress at cell interfaces. Numerical results are presented for three examples. The third-order convergence of the scheme and its ability to concentrate mesh points around shock and elastic rarefaction waves are demonstrated. The obtained numerical results are in good agreement with those in the literature. The new scheme is also shown to be more accurate in resolving shock and rarefaction waves than an existing third-order cell-centered Lagrangian scheme.
Universality of rank-ordering distributions in the arts and sciences.
Directory of Open Access Journals (Sweden)
Gustavo Martínez-Mekler
Full Text Available Searching for generic behaviors has been one of the driving forces leading to a deep understanding and classification of diverse phenomena. Usually a starting point is the development of a phenomenology based on observations. Such is the case for power law distributions encountered in a wealth of situations coming from physics, geophysics, biology, lexicography as well as social and financial networks. This finding is however restricted to a range of values outside of which finite size corrections are often invoked. Here we uncover a universal behavior of the way in which elements of a system are distributed according to their rank with respect to a given property, valid for the full range of values, regardless of whether or not a power law has previously been suggested. We propose a two parameter functional form for these rank-ordered distributions that gives excellent fits to an impressive amount of very diverse phenomena, coming from the arts, social and natural sciences. It is a discrete version of a generalized beta distribution, given by f(r) = A(N+1-r)^b / r^a, where r is the rank, N its maximum value, A the normalization constant and (a, b) two fitting exponents. Prompted by our genetic sequence observations we present a growth probabilistic model incorporating mutation-duplication features that generates data complying with this distribution. The competition between permanence and change appears to be a relevant, though not necessary feature. Additionally, our observations mainly of social phenomena suggest that a multifactorial quality resulting from the convergence of several heterogeneous underlying processes is an important feature. We also explore the significance of the distribution parameters and their classifying potential. The ubiquity of our findings suggests that there must be a fundamental underlying explanation, most probably of a statistical nature, such as an appropriate central limit theorem formulation.
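The proposed two-parameter law is simple to evaluate. The sketch below implements f(r) = A(N+1-r)^b / r^a directly from the abstract, with arbitrary example values of N, a and b chosen for illustration, and normalises so the values sum to one.

```python
def discrete_generalized_beta(N, a, b):
    """f(r) = A (N + 1 - r)^b / r^a for r = 1..N, with the constant A
    chosen so the values sum to one (the two-parameter rank-ordering law)."""
    raw = [(N + 1 - r) ** b / r ** a for r in range(1, N + 1)]
    A = 1.0 / sum(raw)
    return [A * v for v in raw]

# example parameters (illustrative only): for b = 0 the law reduces to a
# pure power law (Zipf-like) in the rank r; b > 0 bends the tail downward
f = discrete_generalized_beta(N=100, a=1.0, b=0.5)
```

For positive a and b the density is strictly decreasing in the rank, with the (N+1-r)^b factor supplying the finite-size droop at large r that a pure power law misses.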
Calatroni, Luca; Düring, Bertram; Schönlieb, Carola-Bibiane
2013-08-01
We present directional operator splitting schemes for the numerical solution of a fourth-order, nonlinear partial differential evolution equation which arises in image processing. This equation constitutes the H -1-gradient flow of the total variation and represents a prototype of higher-order equations of similar type which are popular in imaging for denoising, deblurring and inpainting problems. The efficient numerical solution of this equation is very challenging due to the stiffness of most numerical schemes. We show that the combination of directional splitting schemes with implicit time-stepping provides a stable and computationally cheap numerical realisation of the equation.
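The combination of directional splitting with implicit time-stepping can be sketched on a much simpler prototype than the fourth-order total variation flow: the 2D heat equation. The code below (an assumption of this illustration, not the authors' scheme) applies a first-order Lie splitting, treating each direction implicitly in turn, so each step costs only cheap tridiagonal solves instead of a full 2D implicit solve.

```python
import numpy as np

def tridiag_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def split_heat_step(u, dt, h):
    """One directionally split implicit step of u_t = u_xx + u_yy:
    solve (I - dt*Dxx) in x, then (I - dt*Dyy) in y (Dirichlet u = 0)."""
    n = u.shape[0]
    m = n - 2                                   # interior points per line
    lam = dt / (h * h)
    a = np.full(m, -lam); b = np.full(m, 1 + 2 * lam); c = np.full(m, -lam)
    v = u.copy()
    for j in range(1, n - 1):                   # implicit sweep in x
        v[1:-1, j] = tridiag_solve(a, b, c, u[1:-1, j])
    w = v.copy()
    for i in range(1, n - 1):                   # implicit sweep in y
        w[i, 1:-1] = tridiag_solve(a, b, c, v[i, 1:-1])
    return w

# decay of the sin(pi x) sin(pi y) mode against the analytic solution
n = 33
h = 1.0 / (n - 1)
xg = np.linspace(0.0, 1.0, n)
u = np.outer(np.sin(np.pi * xg), np.sin(np.pi * xg))
dt, steps = 1e-3, 100
for _ in range(steps):
    u = split_heat_step(u, dt, h)
exact = np.exp(-2 * np.pi**2 * dt * steps) * np.outer(np.sin(np.pi * xg), np.sin(np.pi * xg))
err = np.abs(u - exact).max()
```

Because each direction is treated implicitly, the step is unconditionally stable, which is exactly the property that makes splitting attractive for the stiff higher-order flows in the paper.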
Energy Technology Data Exchange (ETDEWEB)
Ismagilov, Timur Z., E-mail: ismagilov@academ.org
2015-02-01
This paper presents a second order finite volume scheme for the numerical solution of Maxwell's equations with discontinuous dielectric permittivity and magnetic permeability on unstructured meshes. The scheme is based on the Godunov scheme and employs the approaches of Van Leer and Lax–Wendroff to increase the order of approximation. To keep the second order of approximation near discontinuities in the dielectric permittivity and magnetic permeability, a novel gradient calculation and limitation technique is applied near the discontinuities. Results of test computations for problems with linear and curvilinear discontinuities confirm the second order of approximation. The scheme was applied to modelling the propagation of electromagnetic waves inside photonic crystal waveguides with a bend.
Order information coding in working memory: Review of behavioural studies and cognitive mechanisms
Directory of Open Access Journals (Sweden)
Barbara Dolenc
2014-06-01
Full Text Available Executive processes, such as coding for sequential order, are of extreme importance for higher-order cognitive tasks. One of the significant questions is how order information is coded in working memory and what cognitive mechanisms and processes mediate it. The aim of this review paper is to summarize the results of studies that explore whether order and item memory are two separable processes. Furthermore, we review the evidence for each of the proposed cognitive mechanisms that might mediate order processing. Previous behavioural and neuroimaging data suggest different representation and processing of item and order information in working memory. Both kinds of information are maintained and recalled separately, and this separation seems to hold for recognition as well as for recall. To explain the results of studies of order coding, numerous cognitive mechanisms have been proposed. We focus on four different mechanisms by which order information might be coded and retrieved, namely inter-item associations, direct coding, hierarchical coding and magnitude coding. Each of these mechanisms can explain some aspects of order information coding, but none of them is able to explain all of the empirical findings. Given its complex nature, it is not surprising that a single mechanism has difficulties accounting for all the behavioural data, and memory for order may be more accurately characterized as the result of a set of mechanisms rather than a single one. Moreover, the findings raise the question of whether different types of memory for order information might exist.
Theoretical scheme of thermal-light many-ghost imaging by Nth-order intensity correlation
International Nuclear Information System (INIS)
Liu Yingchuan; Kuang Leman
2011-01-01
In this paper, we propose a theoretical scheme of many-ghost imaging in terms of Nth-order correlated thermal light. We obtain the Gaussian thin lens equations in the many-ghost imaging protocol. We show that it is possible to produce N-1 ghost images of an object at different places in a nonlocal fashion by means of a higher order correlated imaging process with an Nth-order correlated thermal source and correlation measurements. We investigate the visibility of the ghost images in the scheme and obtain the upper bounds of the visibility for the Nth-order correlated thermal-light ghost imaging. It is found that the visibility of the ghost images can be dramatically enhanced when the order of correlation becomes larger. It is pointed out that the many-ghost imaging phenomenon is an observable physical effect induced by higher order coherence or higher order correlations of optical fields.
Periphony-Lattice Mixed-Order Ambisonic Scheme for Spherical Microphone Arrays
DEFF Research Database (Denmark)
Chang, Jiho; Marschall, Marton
2018-01-01
Most methods for sound field reconstruction and spherical beamforming with spherical microphone arrays are mathematically based on the spherical harmonics expansion. In many cases, this expansion is truncated at a certain order, as in higher order ambisonics (HOA). This truncation leads to performance that is independent of the incident direction of the sound waves. On the other hand, mixed-order ambisonic (MOA) schemes that select an appropriate subset of spherical harmonics can improve the performance for horizontal directions at the expense of other directions. This paper proposes an MOA...
El Gharamti, Mohamad; Hoteit, Ibrahim
2014-01-01
The accuracy of groundwater flow and transport model predictions depends highly on our knowledge of subsurface physical parameters. Assimilation of contaminant concentration data from shallow dug wells could help improve model behavior, eventually resulting in better forecasts. In this paper, we propose a joint state-parameter estimation scheme which efficiently integrates a low-rank extended Kalman filtering technique, namely the Singular Evolutive Extended Kalman (SEEK) filter, with the prominent complex-step method (CSM). The SEEK filter avoids the prohibitive computational burden of the extended Kalman filter by updating the forecast only along the directions of error growth, called filter correction directions. CSM is used within the SEEK filter to efficiently compute model derivatives with respect to the state and parameters along the filter correction directions. CSM is derived using a complex Taylor expansion and is second order accurate. It is proven to guarantee accurate gradient computations with zero numerical round-off errors, but requires complexifying the numerical code. We perform twin experiments to test the performance of the CSM-based SEEK for estimating the state and parameters of a subsurface contaminant transport model. We compare the efficiency and the accuracy of the proposed scheme with two standard finite difference-based SEEK filters as well as with the ensemble Kalman filter (EnKF). Assimilation results suggest that the use of the CSM in the context of the SEEK filter may provide up to 80% more accurate solutions when compared to standard finite difference schemes and is competitive with the EnKF, even providing more accurate results in certain situations. We analyze the results based on two different observation strategies. We also discuss the complexification of the numerical code and show that this could be efficiently implemented in the context of subsurface flow models. © 2013 Elsevier B.V.
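The complex-step method at the heart of this scheme fits in one line: f'(x) ≈ Im(f(x + ih))/h. Because no difference of nearby function values is ever formed, there is no subtractive cancellation, so the step h can be made tiny. The sketch below demonstrates this on a commonly used scalar test function (an illustration, not the subsurface model of the paper), comparing against a central finite difference.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """Complex-step derivative: f'(x) ~ Im(f(x + ih)) / h.

    No subtraction of nearby values occurs, so h can be taken extremely
    small and the result is accurate to machine precision, provided f is
    implemented with complex-friendly operations."""
    return np.imag(f(x + 1j * h)) / h

# scalar test function often used to demonstrate the complex-step method
f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

d_csm = complex_step_derivative(f, 1.5)
d_fd = (f(1.5 + 1e-8) - f(1.5 - 1e-8)) / 2e-8   # central difference, for comparison
```

The "complexifying the numerical code" caveat in the abstract refers to exactly the requirement noted in the docstring: every operation along the derivative path must accept complex arguments.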
Rice, R. F.; Lee, J. J.
1986-01-01
A scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth of the current level. The coding scheme paves the way for true electronic mail in which handwritten, typed, or printed messages or diagrams are sent virtually instantaneously, between buildings or between continents. The scheme, called the Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. The image quality of the resulting delivered messages is improved over messages transmitted by conventional coding. The coding scheme is compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme is automatically translated to word-processor form.
VaRank: a simple and powerful tool for ranking genetic variants
Directory of Open Access Journals (Sweden)
Véronique Geoffroy
2015-03-01
Background. Most genetic disorders are caused by single nucleotide variations (SNVs) or small insertions/deletions (indels). High-throughput sequencing has broadened the catalogue of human variation, including common polymorphisms, rare variations and disease-causing mutations. However, identifying one variation among hundreds or thousands of others is still a complex task for biologists, geneticists and clinicians. Results. We have developed VaRank, a command-line tool for the ranking of genetic variants detected by high-throughput sequencing. VaRank scores and prioritizes variants annotated either by Alamut Batch or SnpEff. A barcode allows users to quickly view the presence/absence of variants (with homozygote/heterozygote status) in analyzed samples. VaRank supports the commonly used VCF input format for variant analysis, allowing it to be easily integrated into NGS bioinformatics analysis pipelines. VaRank has been successfully applied to disease-gene identification as well as to molecular diagnostics setup for several hundred patients. Conclusions. VaRank is implemented in Tcl/Tk, a scripting language which is platform-independent but has been tested only in a Unix environment. The source code is available under the GNU GPL, and together with sample data and detailed documentation can be downloaded from http://www.lbgi.fr/VaRank/.
Reweighted Low-Rank Tensor Completion and its Applications in Video Recovery
M., Baburaj; George, Sudhish N.
2016-01-01
This paper focuses on recovering multi-dimensional data, called tensors, from randomly corrupted incomplete observations. Inspired by reweighted $l_1$ norm minimization for sparsity enhancement, this paper proposes a reweighted singular value enhancement scheme to improve tensor low tubal rank in the tensor completion process. An efficient iterative decomposition scheme based on t-SVD is proposed which improves low-rank signal recovery significantly. The effectiveness of the proposed method is es...
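As an illustration of the low-rank completion idea behind the abstract above, here is a minimal matrix-case sketch using alternating projections with hard rank truncation, a simpler relative of the paper's reweighted soft-thresholding t-SVD scheme (the data, rank, and iteration count are illustrative, not from the paper):

```python
import numpy as np

def complete_low_rank(M_obs, mask, rank=1, n_iter=100):
    """Matrix completion by alternating projections: project onto the
    set of rank-`rank` matrices (truncated SVD), then restore the
    observed entries. A hard-thresholding relative of reweighted
    singular-value shrinkage schemes."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                      # keep only the top singular values
        X = (U * s) @ Vt
        X = np.where(mask, M_obs, X)        # enforce the observed entries
    return X

rng = np.random.default_rng(0)
truth = np.outer(rng.standard_normal(20), rng.standard_normal(15))  # rank 1
mask = rng.random(truth.shape) < 0.6        # observe ~60% of the entries
est = complete_low_rank(truth * mask, mask, rank=1)
rel_err = np.linalg.norm(est - truth) / np.linalg.norm(truth)
```

With a rank-1 target and 60% sampling, the relative error drops well below the error of the zero-filled initialization.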
Development of a domestically-made system code
Energy Technology Data Exchange (ETDEWEB)
NONE
2013-08-15
According to lessons learned from the Fukushima-Daiichi NPP accidents, a new safety standard based on state-of-the-art findings has been established by the Japanese Nuclear Regulation Authority (NRA) and will soon come into force in Japan. In order to ensure a precise response to this movement from a technological point of view, safety regulation requires the development of a new system code with much smaller uncertainty and reinforced simulation capability, even in application to beyond-DBAs (BDBAs), as well as with the capability of close coupling to a newly developed severe accident code. Accordingly, development of a new domestically-made system code that incorporates 3-dimensional, three-or-more-fluid thermal-hydraulics in tandem with 3-dimensional neutronics was started in 2012. In 2012, two branches of development activities, the development of the 'main body' and of advanced features, were started in parallel for development efficiency. The main body has been started from scratch, and the following activities have therefore been performed: 1) development and determination of key principles and methodologies to realize a flexible, extensible and robust platform, 2) determination of a requirements definition, 3) start of basic program design and coding and 4) start of development of a prototypical GUI-based pre/post-processor. As for the advanced features, the following activities have been performed: 1) development of Phenomena Identification and Ranking Tables (PIRTs) and a model capability matrix, from normal operations to BDBAs, in order to address the requirements definition for advanced modeling, 2) development of a detailed action plan for modification of field equations, numerical schemes and solvers and 3) start of the program development of field equations with an interfacial area concentration transport equation, a robust solver for condensation-induced water hammer phenomena and a versatile Newton-Raphson solver. (author)
Multichannel Filtered-X Error Coded Affine Projection-Like Algorithm with Evolving Order
Directory of Open Access Journals (Sweden)
J. G. Avalos
2017-01-01
Affine projection (AP) algorithms are commonly used to implement active noise control (ANC) systems because they provide fast convergence. However, their high computational complexity can restrict their use in certain practical applications. The Error Coded Affine Projection-Like (ECAP-L) algorithm has been proposed to reduce the computational burden while maintaining the speed of AP, but no version of this algorithm has been derived for active noise control, for which the adaptive structures are very different from those of other configurations. In this paper, we introduce a version of the ECAP-L for single-channel and multichannel ANC systems. The proposed algorithm is implemented using the conventional filtered-x scheme, which incurs a lower computational cost than the modified filtered-x structure, especially for multichannel systems. Furthermore, we present an evolutionary method that dynamically decreases the projection order in order to reduce the dimensions of the matrix used in the algorithm's computations. Experimental results demonstrate that the proposed algorithm yields a convergence speed and a final residual error similar to those of AP algorithms. Moreover, it achieves meaningful computational savings, leading to simpler hardware implementation of real-time ANC applications.
Comparing classical and quantum PageRanks
Loke, T.; Tang, J. W.; Rodriguez, J.; Small, M.; Wang, J. B.
2017-01-01
Following recent developments in quantum PageRanking, we present a comparative analysis of discrete-time and continuous-time quantum-walk-based PageRank algorithms. Relative to classical PageRank and to different extents, the quantum measures better highlight secondary hubs and resolve ranking degeneracy among peripheral nodes for all networks we studied in this paper. For the discrete-time case, we investigated the periodic nature of the walker's probability distribution for a wide range of networks and found that the dominant period does not grow with the size of these networks. Based on this observation, we introduce a new quantum measure using the maximum probabilities of the associated walker during the first couple of periods. This is particularly important, since it leads to a quantum PageRanking scheme that is scalable with respect to network size.
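For contrast with the quantum walkers discussed above, classical PageRank is a power iteration on the damped link matrix; a minimal sketch (the toy graph and parameter values are illustrative, not from the paper):

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-12):
    """Classical PageRank by power iteration.
    adj[i, j] = 1 encodes a link from page i to page j."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # dangling pages (no out-links) are treated as linking to all pages
    T = np.where(out_deg > 0, adj / np.maximum(out_deg, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = damping * (r @ T) + (1 - damping) / n
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# toy 4-page web: pages 1-3 all link to page 0, which links back to each
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
ranks = pagerank(A)  # page 0 is the hub and receives the top rank
```

The ranking degeneracy among the three peripheral pages (identical scores) is exactly the kind of tie the quantum measures above are reported to resolve.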
Discriminative Multi-View Interactive Image Re-Ranking.
Li, Jun; Xu, Chang; Yang, Wankou; Sun, Changyin; Tao, Dacheng
2017-07-01
Given unreliable visual patterns and insufficient query information, content-based image retrieval is often suboptimal and requires image re-ranking using auxiliary information. In this paper, we propose a discriminative multi-view interactive image re-ranking (DMINTIR), which integrates user relevance feedback capturing users' intentions and multiple features that sufficiently describe the images. In DMINTIR, heterogeneous property features are incorporated in the multi-view learning scheme to exploit their complementarities. In addition, a discriminatively learned weight vector is obtained to reassign updated scores and target images for re-ranking. Compared with other multi-view learning techniques, our scheme not only generates a compact representation in the latent space from the redundant multi-view features but also maximally preserves the discriminative information in feature encoding by the large-margin principle. Furthermore, the generalization error bound of the proposed algorithm is theoretically analyzed and shown to be improved by the interactions between the latent space and discriminant function learning. Experimental results on two benchmark data sets demonstrate that our approach boosts baseline retrieval quality and is competitive with the other state-of-the-art re-ranking strategies.
Ensemble Manifold Rank Preserving for Acceleration-Based Human Activity Recognition.
Tao, Dapeng; Jin, Lianwen; Yuan, Yuan; Xue, Yang
2016-06-01
With the rapid development of mobile devices and pervasive computing technologies, acceleration-based human activity recognition, a difficult yet essential problem in mobile apps, has received intensive attention recently. Different acceleration signals representing different activities, or even the same activity, have different attributes, which causes trouble in normalizing the signals. We thus cannot directly compare these signals with each other, because they are embedded in a nonmetric space. Therefore, we present a nonmetric scheme that retains discriminative and robust frequency domain information by developing a novel ensemble manifold rank preserving (EMRP) algorithm. EMRP simultaneously considers three aspects: 1) it encodes the local geometry using the ranking order information of intraclass samples distributed on local patches; 2) it keeps the discriminative information by maximizing the margin between samples of different classes; and 3) it finds the optimal linear combination of the alignment matrices to approximate the intrinsic manifold lying in the data. Experiments are conducted on the South China University of Technology naturalistic 3-D acceleration-based activity dataset and the naturalistic mobile-device-based human activity dataset to demonstrate the robustness and effectiveness of the new nonmetric scheme for acceleration-based human activity recognition.
FBC: a flat binary code scheme for fast Manhattan hash retrieval
Kong, Yan; Wu, Fuzhang; Gao, Lifa; Wu, Yanjun
2018-04-01
Hash coding is a widely used technique in approximate nearest neighbor (ANN) search, especially in document search and multimedia (such as image and video) retrieval. Based on the difference of distance measurement, hash methods are generally classified into two categories: Hamming hashing and Manhattan hashing. Benefiting from better neighborhood structure preservation, Manhattan hashing methods outperform earlier methods in search effectiveness. However, because they use decimal arithmetic operations instead of bit operations, Manhattan hashing becomes a more time-consuming process, which significantly decreases overall search efficiency. To solve this problem, we present an intuitive hash scheme which uses a Flat Binary Code (FBC) to encode the data points. As a result, the decimal arithmetic used in previous Manhattan hashing can be replaced by the more efficient XOR operator. The final experiments show that, with a reasonable growth in memory space, our FBC achieves an average speedup of more than 80% without any loss of search accuracy when compared to state-of-the-art Manhattan hashing methods.
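The core trick, replacing decimal Manhattan arithmetic with bit operations, can be illustrated with unary ("thermometer") codes, for which Hamming distance equals Manhattan distance. This sketch shows the general idea, not necessarily the paper's exact FBC construction:

```python
def thermometer(v):
    """Unary code: integer v -> v one-bits followed by zeros.
    The Hamming distance between two such codes equals |v1 - v2|."""
    return (1 << v) - 1

def encode(point, bits):
    """Concatenate per-dimension unary codes into one integer
    (bits = code width per dimension, >= the maximum value)."""
    code = 0
    for v in point:
        code = (code << bits) | thermometer(v)
    return code

def hamming(a, b):
    # XOR + popcount replaces per-dimension decimal arithmetic
    return bin(a ^ b).count("1")

def manhattan(p, q):
    return sum(abs(x - y) for x, y in zip(p, q))

p, q = (3, 0, 7), (5, 2, 7)
assert hamming(encode(p, 7), encode(q, 7)) == manhattan(p, q)  # == 4
```

The memory cost is visible here too: each dimension needs one bit per quantization level, which matches the "reasonable memory space growth" the abstract mentions.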
Rokhzadi, Arman; Mohammadian, Abdolmajid; Charron, Martin
2018-01-01
The objective of this paper is to develop an optimized implicit-explicit (IMEX) Runge-Kutta scheme for atmospheric applications, focusing on stability and accuracy. Following the common terminology, the proposed method is called IMEX-SSP2(2,3,2), as it has second-order accuracy and is composed of a diagonally implicit two-stage part and an explicit three-stage part. This scheme enjoys the Strong Stability Preserving (SSP) property for both parts. The new scheme is applied to the nonhydrostatic compressible Boussinesq equations in two different arrangements: (i) semi-implicit and (ii) Horizontally Explicit-Vertically Implicit (HEVI). The new scheme preserves the SSP property for larger regions of absolute monotonicity compared to the well-studied scheme in the same class. In addition, numerical tests confirm that IMEX-SSP2(2,3,2) improves the maximum stable time step as well as the accuracy and computational cost compared to other schemes in the same class. It is demonstrated that the A-stability property, together with the "second-stage order" and stiffly accurate conditions, leads the proposed scheme to better performance than existing schemes for the applications examined herein.
Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi
2018-06-01
We develop a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a reasonable resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.
Coding with partially hidden Markov models
DEFF Research Database (Denmark)
Forchhammer, Søren; Rissanen, J.
1995-01-01
Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given. The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt...
Solution of Euler unsteady equations using a second order numerical scheme
International Nuclear Information System (INIS)
Devos, J.P.
1992-08-01
In thermal power plants, the steam circuits experience incidents due to the noise and vibration induced by transonic flow. In these configurations, the compressible fluid can be considered perfect, so the Euler equations constitute a good model. However, the treatment of the discontinuities induced by the shockwaves is a particular problem. We give a bibliographical synthesis of the work done on this subject. The research by Roe and Harten leads to TVD (Total Variation Diminishing) type schemes. These second-order schemes generate no oscillations and converge towards physically acceptable weak solutions. (author). 12 refs
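The TVD mechanism referenced above can be illustrated with the classic minmod slope limiter from the MUSCL framework, a standard textbook construction rather than the specific schemes surveyed in the paper:

```python
def minmod(a, b):
    """Return the smaller-magnitude slope when the signs agree, else 0.
    Zeroing the slope at extrema is what prevents new oscillations
    (the Total Variation Diminishing property)."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Cell-wise limited slopes from left and right one-sided differences."""
    return [0.0] + [minmod(u[i] - u[i - 1], u[i + 1] - u[i])
                    for i in range(1, len(u) - 1)] + [0.0]

# smooth region keeps its slope; the limiter switches off at the plateau
u = [0.0, 1.0, 2.0, 5.0, 5.0]
slopes = limited_slopes(u)  # [0.0, 1.0, 1.0, 0.0, 0.0]
```

Near the steep gradient the limiter picks the smaller one-sided difference, which is exactly how such schemes achieve second-order accuracy in smooth regions without oscillating at shocks.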
Relaxation approximations to second-order traffic flow models by high-resolution schemes
International Nuclear Information System (INIS)
Nikolos, I.K.; Delis, A.I.; Papageorgiou, M.
2015-01-01
A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed to a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given to a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers
LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources
Directory of Open Access Journals (Sweden)
Javier Garcia-Frias
2005-05-01
We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes) for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
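Systematic LDGM encoding is simply a sparse GF(2) matrix product appended to the message; a minimal sketch (the parity block P, its density, and the code dimensions are illustrative):

```python
import numpy as np

def ldgm_encode(u, P):
    """Systematic LDGM encoding over GF(2): codeword = [u | u P mod 2],
    i.e. generator matrix G = [I_k | P] with a low-density block P."""
    return np.concatenate([u, (u @ P) % 2])

rng = np.random.default_rng(1)
k, m = 8, 4
P = (rng.random((k, m)) < 0.25).astype(int)   # sparse parity block
u1 = rng.integers(0, 2, size=k)
u2 = rng.integers(0, 2, size=k)
c1, c2 = ldgm_encode(u1, P), ldgm_encode(u2, P)
# linearity: the code of the XOR of two messages is the XOR of the codes
assert np.array_equal(ldgm_encode(u1 ^ u2, P), (c1 + c2) % 2)
```

Because the generator (rather than the parity-check) matrix is sparse, encoding cost grows with the number of ones in P, which is the practical attraction of LDGM codes.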
DEFF Research Database (Denmark)
Misaridis, Thanasis; Jensen, Jørgen Arendt
1999-01-01
This paper presents a coded excitation imaging system based on a predistorted FM excitation and a digital compression filter designed for medical ultrasonic applications, in order to preserve both axial resolution and contrast. In radars, optimal Chebyshev windows efficiently weight a nearly...... as with pulse excitation (about 1.5 lambda), depending on the filter design criteria. The axial sidelobes are below -40 dB, which is the noise level of the measuring imaging system. The proposed excitation/compression scheme shows good overall performance and stability to the frequency shift due to attenuation...... be removed by weighting. We show that by using a predistorted chirp with amplitude or phase shaping for amplitude ripple reduction and a correlation filter that accounts for the transducer's natural frequency weighting, output sidelobe levels of -35 to -40 dB are directly obtained. When an optimized filter...
An Efficient Homomorphic Aggregate Signature Scheme Based on Lattice
Directory of Open Access Journals (Sweden)
Zhengjun Jing
2014-01-01
Homomorphic aggregate signature (HAS) is a linearly homomorphic signature (LHS) for multiple users, which can be applied for a variety of purposes, such as multi-source network coding and sensor data aggregation. In order to design an efficient post-quantum secure HAS scheme, we borrow the idea of the lattice-based LHS scheme over a binary field in the single-user case and develop it into a new lattice-based HAS scheme in this paper. The security of the proposed scheme is proved by showing a reduction to the single-user case, and the signature length remains invariant. Compared with the existing lattice-based homomorphic aggregate signature scheme, our new scheme enjoys shorter signature length and higher efficiency.
Adiabatic quantum algorithm for search engine ranking.
Garnerone, Silvano; Zanardi, Paolo; Lidar, Daniel A
2012-06-08
We propose an adiabatic quantum algorithm for generating a quantum pure state encoding of the PageRank vector, the most widely used tool in ranking the relative importance of internet pages. We present extensive numerical simulations which provide evidence that this algorithm can prepare the quantum PageRank state in a time which, on average, scales polylogarithmically in the number of web pages. We argue that the main topological feature of the underlying web graph allowing for such a scaling is the out-degree distribution. The top-ranked log(n) entries of the quantum PageRank state can then be estimated with a polynomial quantum speed-up. Moreover, the quantum PageRank state can be used in "q-sampling" protocols for testing properties of distributions, which require exponentially fewer measurements than all classical schemes designed for the same task. This can be used to decide whether to run a classical update of the PageRank.
Preliminary investigation study of code of developed country for developing Korean fuel cycle code
International Nuclear Information System (INIS)
Jeong, Chang Joon; Ko, Won Il; Lee, Ho Hee; Cho, Dong Keun; Park, Chang Je
2012-01-01
In order to develop a Korean fuel cycle code, analyses have been performed with the fuel cycle codes used in advanced countries, and recommendations were proposed for future development. The fuel cycle codes are as follows: VISTA, developed by the IAEA; DANESS, developed by ANL and LISTO; and VISION, developed by INL for the Advanced Fuel Cycle Initiative (AFCI) system analysis. Recommendations were proposed for the software, program scheme, material flow model, isotope decay model, environmental impact analysis model, and economics analysis model. These recommendations will be used for the development of a Korean nuclear fuel cycle code in the future
Joint opportunistic scheduling and network coding for bidirectional relay channel
Shaqfeh, Mohammad
2013-07-01
In this paper, we consider a two-way communication system in which two users communicate with each other through an intermediate relay over block-fading channels. We investigate the optimal opportunistic scheduling scheme in order to maximize the long-term average transmission rate in the system assuming symmetric information flow between the two users. Based on the channel state information, the scheduler decides that either one of the users transmits to the relay, or the relay transmits to a single user or broadcasts to both users a combined version of the two users' transmitted information by using linear network coding. We obtain the optimal scheduling scheme by using the Lagrangian dual problem. Furthermore, in order to characterize the gains of network coding and opportunistic scheduling, we compare the achievable rate of the system versus suboptimal schemes in which the gains of network coding and opportunistic scheduling are partially exploited. © 2013 IEEE.
A simple smoothness indicator for the WENO scheme with adaptive order
Huang, Cong; Chen, Li Li
2018-01-01
The fifth-order WENO scheme with adaptive order is well suited to solving hyperbolic conservation laws; its reconstruction is a convex combination of a fifth-order linear reconstruction and three third-order linear reconstructions. Note that, on a uniform mesh, the computational cost of the smoothness indicator for the fifth-order linear reconstruction is comparable to the combined cost of the indicators for the three third-order linear reconstructions, and is therefore too heavy; on a non-uniform mesh, the explicit form of the smoothness indicator for the fifth-order linear reconstruction is difficult to obtain, and its computational cost is much heavier than on a uniform mesh. To overcome these problems, a simple smoothness indicator for the fifth-order linear reconstruction is proposed in this paper.
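For reference, the standard Jiang-Shu smoothness indicators of the three third-order substencils in WENO5, the quantities whose combined cost the paper aims to reduce, can be written down directly (textbook formulas for a uniform mesh, not the paper's new indicator):

```python
def beta_substencils(v):
    """Jiang-Shu smoothness indicators for the three 3-point substencils
    of WENO5, given five cell averages v = (v0, ..., v4)."""
    v0, v1, v2, v3, v4 = v
    b0 = 13/12 * (v0 - 2*v1 + v2)**2 + 1/4 * (v0 - 4*v1 + 3*v2)**2
    b1 = 13/12 * (v1 - 2*v2 + v3)**2 + 1/4 * (v1 - v3)**2
    b2 = 13/12 * (v2 - 2*v3 + v4)**2 + 1/4 * (3*v2 - 4*v3 + v4)**2
    return b0, b1, b2

smooth = beta_substencils([0, 1, 2, 3, 4])   # equal on linear data: (1, 1, 1)
jump = beta_substencils([10, 10, 2, 3, 4])   # only the rightmost substencil
                                             # (b2) stays small near a left jump
```

Equal indicators on smooth data let the nonlinear weights revert to the optimal linear combination, while a large indicator suppresses any substencil that crosses a discontinuity.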
MIMO transmit scheme based on morphological perceptron with competitive learning.
Valente, Raul Ambrozio; Abrão, Taufik
2016-08-01
This paper proposes a new multi-input multi-output (MIMO) transmit scheme aided by an artificial neural network (ANN). The morphological perceptron with competitive learning (MP/CL) concept is deployed as a decision rule in the MIMO detection stage. The proposed MIMO transmission scheme is able to achieve double spectral efficiency; hence, in each time-slot the receiver decodes two symbols at a time instead of one as in the Alamouti scheme. Another advantage of the proposed transmit scheme with the MP/CL-aided detector is its polynomial complexity in the modulation order, which becomes linear when the data stream length is greater than the modulation order. The performance of the proposed scheme is compared to traditional MIMO schemes, namely the Alamouti scheme and the maximum-likelihood MIMO (ML-MIMO) detector. Also, the proposed scheme is evaluated in a scenario with variable channel information along the frame. Numerical results have shown that the diversity gain under the space-time coding Alamouti scheme is partially lost, which slightly degrades the bit-error-rate (BER) performance of the proposed MP/CL-NN MIMO scheme. Copyright © 2016 Elsevier Ltd. All rights reserved.
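The baseline Alamouti scheme the paper compares against encodes two symbols over two time slots on two antennas and recovers them with simple linear combining; a minimal noiseless sketch (channel gains drawn at random, noise omitted for clarity):

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Slot 1: antennas send (s1, s2); slot 2: (-conj(s2), conj(s1))."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining at one receive antenna, flat gains h1, h2 known."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    g = abs(h1) ** 2 + abs(h2) ** 2   # diversity gain from both paths
    return s1_hat / g, s2_hat / g

rng = np.random.default_rng(2)
s1, s2 = 1 + 1j, -1 + 1j                              # QPSK symbols
h1, h2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)
X = alamouti_encode(s1, s2)
r1 = h1 * X[0, 0] + h2 * X[0, 1]   # received sample, slot 1
r2 = h1 * X[1, 0] + h2 * X[1, 1]   # received sample, slot 2
d1, d2 = alamouti_decode(r1, r2, h1, h2)               # recovers s1, s2
```

The orthogonal code structure is what makes the cross-terms cancel exactly in the combiner, giving full diversity with only linear processing.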
Joint Network Coding and Opportunistic Scheduling for the Bidirectional Relay Channel
Shaqfeh, Mohammad
2013-05-27
In this paper, we consider a two-way communication system in which two users communicate with each other through an intermediate relay over block-fading channels. We investigate the optimal opportunistic scheduling scheme in order to maximize the long-term average transmission rate in the system assuming symmetric information flow between the two users. Based on the channel state information, the scheduler decides that either one of the users transmits to the relay, or the relay transmits to a single user or broadcasts to both users a combined version of the two users’ transmitted information by using linear network coding. We obtain the optimal scheduling scheme by using the Lagrangian dual problem. Furthermore, in order to characterize the gains of network coding and opportunistic scheduling, we compare the achievable rate of the system versus suboptimal schemes in which the gains of network coding and opportunistic scheduling are partially exploited.
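The linear network coding step described above reduces, in its simplest binary form, to an XOR at the relay: one broadcast replaces two separate downlink transmissions, and each user strips out what it already knows. A toy sketch (equal-length packets assumed):

```python
def relay_broadcast(msg_a: bytes, msg_b: bytes) -> bytes:
    """Relay combines both users' packets with XOR (the simplest
    linear network code) and broadcasts a single packet."""
    return bytes(x ^ y for x, y in zip(msg_a, msg_b))

def decode_at_user(broadcast: bytes, own_msg: bytes) -> bytes:
    """Each user XORs out its own packet to recover the other's."""
    return bytes(x ^ y for x, y in zip(broadcast, own_msg))

a, b = b"hello", b"world"
combined = relay_broadcast(a, b)
assert decode_at_user(combined, a) == b
assert decode_at_user(combined, b) == a
```

This is where the network coding gain in the abstract comes from: the broadcast phase uses one channel use instead of two.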
Glomerular latency coding in artificial olfaction.
Yamani, Jaber Al; Boussaid, Farid; Bermak, Amine; Martinez, Dominique
2011-01-01
Sensory perception results from the way sensory information is subsequently transformed in the brain. Olfaction is a typical example in which odor representations undergo considerable changes as they pass from olfactory receptor neurons (ORNs) to second-order neurons. First, many ORNs expressing the same receptor protein yet presenting heterogeneous dose-response properties converge onto individually identifiable glomeruli. Second, the onset latency of glomerular activation is believed to play a role in encoding odor quality and quantity in the context of fast information processing. Taking inspiration from the olfactory pathway, we designed a simple yet robust glomerular latency coding scheme for processing gas sensor data. The proposed bio-inspired approach was evaluated using an in-house SnO(2) sensor array. Glomerular convergence was achieved by noting the possible analogy between the receptor proteins expressed in ORNs and the metal catalysts used across the fabricated gas sensor array. Ion implantation was another technique used to account for both sensor heterogeneity and enhanced sensitivity. The response of the gas sensor array was mapped into glomerular latency patterns, whose rank order is concentration-invariant. Gas recognition was achieved by simply looking for a "match" within a library of spatio-temporal spike fingerprints. Because of its simplicity, this approach enables the integration of sensing and processing onto a single chip.
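The concentration invariance of the latency rank order is easy to see in code; a toy sketch (the sensor response values are made up, and a monotone response-to-latency mapping is assumed):

```python
def rank_order(responses):
    """Rank-order code: sensor indices sorted by response strength
    (stronger response -> earlier spike -> lower rank)."""
    return tuple(sorted(range(len(responses)),
                        key=lambda i: -responses[i]))

# a gas 'fingerprint' across a 4-sensor array
gas_a = [0.9, 0.2, 0.5, 0.1]
# doubling the concentration scales all responses but keeps the order
assert rank_order(gas_a) == rank_order([2 * r for r in gas_a])
# a different gas gives a different rank order
gas_b = [0.1, 0.8, 0.3, 0.6]
assert rank_order(gas_a) != rank_order(gas_b)
```

Matching a measured rank order against a library of such tuples is the "spike fingerprint" lookup the abstract describes.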
International Nuclear Information System (INIS)
Bouard, Anne de; Debussche, Arnaud
2006-01-01
In this article we analyze the error of a semidiscrete scheme for the stochastic nonlinear Schrödinger equation with power nonlinearity. We consider supercritical or subcritical nonlinearity, and the equation can be either focusing or defocusing. Assuming sufficient spatial regularity, we prove that the numerical scheme has strong order 1/2 in general, and order 1 if the noise is additive. Furthermore, we also prove that the weak order is always 1
ComboCoding: Combined intra-/inter-flow network coding for TCP over disruptive MANETs
Directory of Open Access Journals (Sweden)
Chien-Chia Chen
2011-07-01
TCP over wireless networks is challenging due to random losses and ACK interference. Although network coding schemes have been proposed to improve TCP robustness against extreme random losses, a critical problem of DATA-ACK interference still remains. To address this issue, we use inter-flow coding between DATA and ACK to reduce the number of transmissions among nodes. In addition, we also utilize a "pipeline" random linear coding scheme with adaptive redundancy to overcome high packet loss over unreliable links. The resulting coding scheme, ComboCoding, combines intra-flow and inter-flow coding to provide robust TCP transmission in disruptive wireless networks. The main contributions of our scheme are twofold: the efficient combination of random linear coding and XOR coding on bi-directional streams (DATA and ACK), and a novel redundancy control scheme that adapts to time-varying and space-varying link loss. The adaptive ComboCoding was tested on a variable-hop string topology with unstable links and on a multipath MANET with dynamic topology. Simulation results show that TCP with ComboCoding delivers higher throughput than with other coding options in high-loss and mobile scenarios, while introducing minimal overhead in normal operation.
Schematic limits of rank 4 Azumaya bundles are the locally-Witt algebras
International Nuclear Information System (INIS)
Venkata Balaji, T.E.
2002-07-01
It is shown that the schematic image of the scheme of Azumaya algebra structures on a vector bundle of rank 4 over any base scheme is separated, of finite type, smooth of relative dimension 13 and geometrically irreducible over that base and that this construction base-changes well. This generalises the main theorem of Part I of an earlier work and clarifies it by showing that the algebraic operation of forming the even Clifford algebra (=Witt algebra) of a rank 3 quadratic module essentially translates to performing the geometric operation of taking the schematic image of the scheme of Azumaya algebra structures. (author)
A stable higher order space time Galerkin marching-on-in-time scheme
Pray, Andrew J.; Shanker, Balasubramaniam; Bagci, Hakan
2013-07-01
We present a method for the stable solution of time-domain integral equations. The method uses a technique developed in [1] to accurately evaluate matrix elements. As opposed to existing stabilization schemes, the method presented uses higher order basis functions in time to improve the accuracy of the solver. The method is validated by showing convergence in temporal basis function order, time step size, and geometric discretization order. © 2013 IEEE.
Hamdi, Mazda; Kenari, Masoumeh Nasiri
2013-06-01
We consider a time-hopping based multiple access scheme introduced in [1] for communication over dispersive infrared links, and evaluate its performance for correlator and matched filter receivers. In the investigated time-hopping code division multiple access (TH-CDMA) method, the transmitter employs a low-rate convolutional encoder. The bit interval is divided into Nc chips, and the output of the encoder, along with a PN sequence assigned to the user, determines the position of the chip in which the optical pulse is transmitted. We evaluate the multiple access performance of the system for the correlation receiver, considering background noise, which is modeled as white Gaussian noise due to its large intensity. For the correlation receiver, the results show that for a fixed processing gain, at high transmit power, where the multiple access interference has the dominant effect, the performance improves with the coding gain. But at low transmit power, where an increase in coding gain leads to a decrease in the chip time, and consequently to more corruption due to channel dispersion, there exists an optimum value for the coding gain. For the matched filter, however, the performance always improves with the coding gain. The results show that the matched filter receiver outperforms the correlation receiver in the considered cases. Our results also show that, for the same bandwidth and bit rate, the proposed system outperforms other multiple access techniques, such as conventional CDMA and the time-hopping scheme.
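The chip-selection rule described above can be sketched as follows. This is a toy illustration under stated assumptions, not the paper's exact mapping: the value of Nc, the PN value, and the reading of encoder-output bits as a binary offset are all invented for the example.

```python
Nc = 8          # chips per bit interval (hypothetical value)

def chip_position(code_bits, pn_value, nc=Nc):
    """Pick the chip carrying the optical pulse for this bit interval.

    The convolutional-encoder output bits are read as a binary offset and
    hopped by the user's PN value modulo the chip count (invented mapping).
    """
    offset = int("".join(map(str, code_bits)), 2)
    return (offset + pn_value) % nc

# two users with the same encoder output pulse in different chips
assert chip_position([1, 0], pn_value=3) != chip_position([1, 0], pn_value=6)
```

The point of the sketch is separation: users sharing the medium are distinguished by their PN-driven hop, while the encoder output adds coding gain on top.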
A fully distributed geo-routing scheme for wireless sensor networks
Bader, Ahmed; Abed-Meraim, Karim; Alouini, Mohamed-Slim
2013-12-01
When marrying randomized distributed space-time coding (RDSTC) to beaconless geo-routing, new performance horizons can be created. In order to reach those horizons, however, beaconless geo-routing protocols must evolve to operate in a fully distributed fashion. In this letter, we expose a technique to construct a fully distributed geo-routing scheme in conjunction with RDSTC. We then demonstrate the performance gains of this novel scheme by comparing it to one of the prominent classical schemes. © 2013 IEEE.
Chu, Chunlei; Stoffa, Paul L.; Seif, Roustam
2009-01-01
We present two Lax‐Wendroff type high‐order time stepping schemes and apply them to solving the 3D elastic wave equation. The proposed schemes have the same format as the Taylor series expansion based schemes, only with modified temporal extrapolation coefficients. We demonstrate by both theoretical analysis and numerical examples that the modified schemes significantly improve the stability conditions.
Ordered particles versus ordered pointers in the hybrid ordered plasma simulation (HOPS) code
International Nuclear Information System (INIS)
Anderson, D.V.; Shumaker, D.E.
1993-01-01
From a computational standpoint, particle simulation calculations for plasmas have not adapted well to the transitions from scalar to vector processing nor from serial to parallel environments. They have suffered from inordinate and excessive accessing of computer memory and have been hobbled by relatively inefficient gather-scatter constructs resulting from the use of indirect indexing. Lastly, the many-to-one mapping characteristic of the deposition phase has made it difficult to perform this step in parallel. The authors' code sorts and reorders the particles in a spatial order. This allows them to greatly reduce the memory references, to run in directly indexed vector mode, and to employ domain decomposition to achieve parallelization. The field model solves the pre-Maxwell equations by iterative implicit methods. The OSOP (Ordered Storage Ordered Processing) version of HOPS keeps the particle tables ordered by rebuilding them after each particle pushing phase. Alternatively, the RSOP (Random Storage Ordered Processing) version keeps a table of pointers ordered by rebuilding it. Although OSOP is somewhat faster than RSOP in tests on vector-parallel machines, it is not clear this advantage will carry over to massively parallel computers.
Generalized Sudan's List Decoding for Order Domain Codes
DEFF Research Database (Denmark)
Geil, Hans Olav; Matsumoto, Ryutaroh
2007-01-01
We generalize Sudan's list decoding algorithm without multiplicity to evaluation codes coming from arbitrary order domains. The number of correctable errors by the proposed method is larger than that of the original list decoding without multiplicity.
Higher-order schemes for the Laplace transformation method for parabolic problems
Douglas, C.
2011-01-01
In this paper we solve linear parabolic problems using a three-stage algorithm. First, the time discretization is approximated using the Laplace transformation method, which is both parallel in time (and can be in space, too) and extremely high order convergent. Second, higher-order compact schemes of order four and six are used for the spatial discretization. Finally, the discretized linear algebraic systems are solved using multigrid to show the actual convergence rate for numerical examples, which are compared to other numerical solution methods. © 2011 Springer-Verlag.
Practical Design of Delta-Sigma Multiple Description Audio Coding
DEFF Research Database (Denmark)
Leegaard, Jack Højholt; Østergaard, Jan; Jensen, Søren Holdt
2014-01-01
It was recently shown that delta-sigma quantization (DSQ) can be used for optimal multiple description (MD) coding of Gaussian sources. The DSQ scheme combined oversampling, prediction, and noise-shaping in order to trade off side distortion for central distortion in MD coding. It was shown that ...
Further Generalisations of Twisted Gabidulin Codes
DEFF Research Database (Denmark)
Puchinger, Sven; Rosenkilde, Johan Sebastian Heesemann; Sheekey, John
2017-01-01
We present a new family of maximum rank distance (MRD) codes. The new class contains codes that are neither equivalent to a generalised Gabidulin nor to a twisted Gabidulin code, the only two known general constructions of linear MRD codes.
Computing the Feng-Rao distances for codes from order domains
DEFF Research Database (Denmark)
Ruano Benito, Diego
2007-01-01
We compute the Feng–Rao distance of a code coming from an order domain with a simplicial value semigroup. The main tool is the Apéry set of a semigroup that can be computed using a Gröbner basis.
International Nuclear Information System (INIS)
Surya Mohan, P.; Tarvainen, Tanja; Schweiger, Martin; Pulkkinen, Aki; Arridge, Simon R.
2011-01-01
Highlights: → We developed a variable order global basis scheme to solve light transport in 3D. → Based on finite elements, the method can be applied to a wide class of geometries. → It is computationally cheap when compared to the fixed order scheme. → Comparisons with the local basis method and other models demonstrate its accuracy. → Addresses problems encountered in modeling of light transport in the human brain. - Abstract: We propose the PN approximation based on a finite element framework for solving the radiative transport equation with optical tomography as the primary application area. The key idea is to employ a variable order spherical harmonic expansion for angular discretization based on the proximity to the source and the local scattering coefficient. The proposed scheme is shown to be computationally efficient compared to employing homogeneously high orders of expansion everywhere in the domain. In addition the numerical method is shown to accurately describe the void regions encountered in the forward modeling of real-life specimens such as infant brains. The accuracy of the method is demonstrated over three model problems where the PN approximation is compared against Monte Carlo simulations and other state-of-the-art methods.
Caplan, R. M.
2013-04-01
We present a simple to use, yet powerful code package called NLSEmagic to numerically integrate the nonlinear Schrödinger equation in one, two, and three dimensions. NLSEmagic is a high-order finite-difference code package which utilizes graphic processing unit (GPU) parallel architectures. The codes running on the GPU are many times faster than their serial counterparts, and are much cheaper to run than on standard parallel clusters. The codes are developed with usability and portability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with the MEX-compiler interface. The packages are freely distributed, including user manuals and set-up files. Catalogue identifier: AEOJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOJ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 124453 No. of bytes in distributed program, including test data, etc.: 4728604 Distribution format: tar.gz Programming language: C, CUDA, MATLAB. Computer: PC, MAC. Operating system: Windows, MacOS, Linux. Has the code been vectorized or parallelized?: Yes. Number of processors used: Single CPU, number of GPU processors dependent on chosen GPU card (max is currently 3072 cores on GeForce GTX 690). Supplementary material: Setup guide, Installation guide. RAM: Highly dependent on dimensionality and grid size. For typical medium-large problem size in three dimensions, 4GB is sufficient. Keywords: Nonlinear Schrödinger Equation, GPU, high-order finite difference, Bose-Einstein condensates. Classification: 4.3, 7.7. Nature of problem: Integrate solutions of the time-dependent one-, two-, and three-dimensional cubic nonlinear Schrödinger equation. Solution method: The integrators utilize a fully-explicit fourth-order Runge-Kutta scheme in time
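In the same spirit as the package (though far simpler: no GPU, only second-order central differences in space), a minimal explicit RK4 integrator for the 1D cubic NLSE might look as follows. The grid, step sizes, and initial data are illustrative choices, not NLSEmagic defaults.

```python
import numpy as np

def nlse_rhs(psi, dx, s=1.0):
    """psi_t from i*psi_t + psi_xx + s*|psi|^2 psi = 0, periodic boundary."""
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    return 1j * (lap + s * np.abs(psi) ** 2 * psi)

def rk4_step(psi, dt, dx):
    """Classical fourth-order Runge-Kutta step (the scheme named above)."""
    k1 = nlse_rhs(psi, dx)
    k2 = nlse_rhs(psi + 0.5 * dt * k1, dx)
    k3 = nlse_rhs(psi + 0.5 * dt * k2, dx)
    k4 = nlse_rhs(psi + dt * k3, dx)
    return psi + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.linspace(-20.0, 20.0, 256, endpoint=False)
dx = x[1] - x[0]
psi = (1.0 / np.cosh(x)).astype(complex)   # bright-soliton-like initial data
norm0 = np.sum(np.abs(psi) ** 2) * dx      # conserved "particle number"
for _ in range(200):
    psi = rk4_step(psi, 1e-3, dx)
# explicit RK4 is not exactly conservative, but the drift is tiny here
assert abs(np.sum(np.abs(psi) ** 2) * dx - norm0) < 1e-6
```

Monitoring the L2 norm is a standard sanity check for NLSE integrators; the step size here is well inside the RK4 stability region for this grid.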
A simulation of driven reconnection by a high precision MHD code
International Nuclear Information System (INIS)
Kusano, Kanya; Ouchi, Yasuo; Hayashi, Takaya; Horiuchi, Ritoku; Watanabe, Kunihiko; Sato, Tetsuya.
1988-01-01
A high precision MHD code, which has fourth-order accuracy in both space and time, is developed and applied to simulation studies of two-dimensional driven reconnection. It is confirmed that the numerical dissipation of this new scheme is much less than that of the two-step Lax-Wendroff scheme. The effect of plasma compressibility on the reconnection dynamics is investigated by means of this high precision code. (author)
Boscheri, Walter; Dumbser, Michael; Loubère, Raphaël; Maire, Pierre-Henri
2018-04-01
In this paper we develop a conservative cell-centered Lagrangian finite volume scheme for the solution of the hydrodynamics equations on unstructured multidimensional grids. The method is derived from the Eucclhyd scheme discussed in [47,43,45]. It is second-order accurate in space and is combined with the a posteriori Multidimensional Optimal Order Detection (MOOD) limiting strategy to ensure robustness and stability at shock waves. Second-order of accuracy in time is achieved via the ADER (Arbitrary high order schemes using DERivatives) approach. A large set of numerical test cases is proposed to assess the ability of the method to achieve effective second order of accuracy on smooth flows, maintaining an essentially non-oscillatory behavior on discontinuous profiles, general robustness ensuring physical admissibility of the numerical solution, and precision where appropriate.
Research of Subgraph Estimation Page Rank Algorithm for Web Page Rank
Directory of Open Access Journals (Sweden)
LI Lan-yin
2017-04-01
Full Text Available The traditional PageRank algorithm cannot efficiently handle large-scale Web page ranking problems. This paper proposes an accelerated algorithm named topK-Rank, which is based on PageRank on the MapReduce platform. It can find the top k nodes efficiently for a given graph without sacrificing accuracy. In order to identify the top k nodes, the topK-Rank algorithm prunes unnecessary nodes and edges in each iteration to dynamically construct subgraphs, and iteratively estimates lower/upper bounds of PageRank scores through the subgraphs. Theoretical analysis shows that this method guarantees result exactness. Experiments show that the topK-Rank algorithm can find the top k nodes much faster than existing approaches.
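As a baseline for what topK-Rank accelerates, a plain power-iteration PageRank followed by a top-k selection can be sketched as below; the paper's subgraph construction and lower/upper-bound pruning are not reproduced here, and the toy graph is hypothetical.

```python
def pagerank(graph, d=0.85, iters=50):
    """Power iteration on {node: [out-links]}; dangling mass spread uniformly."""
    nodes = list(graph)
    r = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1.0 - d) / len(nodes) for n in nodes}
        for n, outs in graph.items():
            if outs:
                for m in outs:
                    nxt[m] += d * r[n] / len(outs)
            else:                      # dangling node: no out-links
                for m in nodes:
                    nxt[m] += d * r[n] / len(nodes)
        r = nxt
    return r

def top_k(graph, k):
    scores = pagerank(graph)
    return sorted(scores, key=scores.get, reverse=True)[:k]

graph = {"a": ["b"], "b": ["c"], "c": ["a", "b"], "d": ["c"]}
print(top_k(graph, 2))   # the well-linked nodes surface first
```

topK-Rank's contribution is avoiding the full iteration over all nodes: it bounds each score from above and below and discards nodes whose upper bound falls under the k-th lower bound.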
LOO: a low-order nonlinear transport scheme for acceleration of method of characteristics
International Nuclear Information System (INIS)
Li, Lulu; Smith, Kord; Forget, Benoit; Ferrer, Rodolfo
2015-01-01
This paper presents a new physics-based multi-grid nonlinear acceleration method: the low-order operator method, or LOO. LOO uses a coarse space-angle multi-group method of characteristics (MOC) neutron transport calculation to accelerate the fine space-angle MOC calculation. LOO is designed to capture more angular effects than diffusion-based acceleration methods through a transport-based low-order solver. LOO differs from existing transport-based acceleration schemes in that it emphasizes simplified coarse space-angle characteristics and preserves physics in quadrant phase-space. The details of the method, including the restriction step, the low-order iterative solver and the prolongation step are discussed in this work. LOO shows comparable convergence behavior to coarse mesh finite difference on several two-dimensional benchmark problems while not requiring any under-relaxation, making it a robust acceleration scheme. (author)
Pont, Grégoire; Brenner, Pierre; Cinnella, Paola; Maugars, Bruno; Robinet, Jean-Christophe
2017-12-01
A Godunov's type unstructured finite volume method suitable for highly compressible turbulent scale-resolving simulations around complex geometries is constructed by using a successive correction technique. First, a family of k-exact Godunov schemes is developed by recursively correcting the truncation error of the piecewise polynomial representation of the primitive variables. The keystone of the proposed approach is a quasi-Green gradient operator which ensures consistency on general meshes. In addition, a high-order single-point quadrature formula, based on high-order approximations of the successive derivatives of the solution, is developed for flux integration along cell faces. The proposed family of schemes is compact in the algorithmic sense, since it only involves communications between direct neighbors of the mesh cells. The numerical properties of the schemes up to fifth-order are investigated, with focus on their resolvability in terms of number of mesh points required to resolve a given wavelength accurately. Afterwards, in the aim of achieving the best possible trade-off between accuracy, computational cost and robustness in view of industrial flow computations, we focus more specifically on the third-order accurate scheme of the family, and modify locally its numerical flux in order to reduce the amount of numerical dissipation in vortex-dominated regions. This is achieved by switching from the upwind scheme, mostly applied in highly compressible regions, to a fourth-order centered one in vortex-dominated regions. An analytical switch function based on the local grid Reynolds number is adopted in order to warrant numerical stability of the recentering process. Numerical applications demonstrate the accuracy and robustness of the proposed methodology for compressible scale-resolving computations. In particular, supersonic RANS/LES computations of the flow over a cavity are presented to show the capability of the scheme to predict flows with shocks
Analysis and Improvement of the Generic Higher-Order Masking Scheme of FSE 2012
Roy, Arnab; Venkatesh, Srinivas Vivek
2013-01-01
Masking is a well-known technique used to prevent block cipher implementations from side-channel attacks. Higher-order side channel attacks (e.g. higher-order DPA attack) on widely used block cipher like AES have motivated the design of efficient higher-order masking schemes. Indeed, it is known that as the masking order increases, the difficulty of side-channel attack increases exponentially. However, the main problem in higher-order masking is to design an efficient and secure technique for...
Peng, Qiujin; Qiao, Zhonghua; Sun, Shuyu
2017-01-01
In this paper, we present two second-order numerical schemes to solve the fourth order parabolic equation derived from a diffuse interface model with Peng-Robinson Equation of state (EOS) for pure substance. The mass conservation, energy decay property, unique solvability and L-infinity convergence of these two schemes are proved. Numerical results demonstrate the good approximation of the fourth order equation and confirm reliability of these two schemes.
NLL order contributions for exclusive processes in jet-calculus scheme
International Nuclear Information System (INIS)
Tanaka, Hidekazu
2011-01-01
We investigate the next-to-leading logarithmic (NLL) order contributions of the quantum chromodynamics (QCD) for exclusive processes evaluated by Monte Carlo methods. Ambiguities of the Monte Carlo calculation based on the leading-logarithmic (LL) order approximations are pointed out. To remove these ambiguities, we take into account the NLL order terms. In a model presented in this paper, interference contributions due to the NLL order terms are included for the generation of the transverse momenta in initial-state parton radiations. Furthermore, a kinematical constraint due to parton radiation, which is also a part of the NLL order contributions, is taken into account. This method guarantees a proper phase space boundary for hard scattering cross sections as well as parton radiations. As an example, cross sections for lepton pair production mediated by a virtual photon in hadron-hadron collisions are calculated, using the jet-calculus scheme for flavor nonsinglet quarks. (author)
Concurrent Codes: A Holographic-Type Encoding Robust against Noise and Loss.
Directory of Open Access Journals (Sweden)
David M Benton
Full Text Available Concurrent coding is an encoding scheme with 'holographic' type properties that are shown here to be robust against a significant amount of noise and signal loss. This single encoding scheme is able to correct for random errors and burst errors simultaneously, but does not rely on cyclic codes. A simple and practical scheme has been tested that displays perfect decoding when the signal-to-noise ratio is of order -18 dB. The same scheme also displays perfect reconstruction when a contiguous block of 40% of the transmission is missing. In addition, this scheme is 50% more efficient in terms of transmitted power requirements than equivalent cyclic codes. A simple model is presented that describes the process of decoding and can determine the computational load that would be expected, as well as describing the critical levels of noise and missing data at which false messages begin to be generated.
Limits of Rank 4 Azumaya Algebras and Applications to ...
Indian Academy of Sciences (India)
It is shown that the schematic image of the scheme of Azumaya algebra structures on a vector bundle of rank 4 over any base scheme is separated, of finite type, smooth of relative dimension 13 and geometrically irreducible over that base and that this construction base-changes well. This fully generalizes Seshadri's ...
Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad
2017-07-01
This paper introduces a fractional order total variation (FOTV) based model with three different weights in the fractional order derivative definition for multiplicative noise removal. The fractional-order Euler-Lagrange equation, a highly non-linear partial differential equation (PDE), is obtained by minimization of the energy functional for image restoration. Two numerical schemes, namely an iterative scheme based on the dual theory and a majorization-minimization algorithm (MMA), are used. To improve the restoration results, we opt for an adaptive parameter selection procedure for the proposed model by applying the trial and error method. We report numerical simulations which show the validity and state-of-the-art performance of the fractional-order model in visual improvement as well as an increase in the peak signal to noise ratio compared to corresponding methods. Numerical experiments also demonstrate that the MMA-based methodology is slightly better than the iterative scheme.
24 CFR 599.401 - Ranking of applications.
2010-04-01
... 24 Housing and Urban Development 3 2010-04-01 2010-04-01 false Ranking of applications. 599.401... Communities § 599.401 Ranking of applications. (a) Ranking order. Rural and urban applications will be ranked... applications ranked first. (b) Separate ranking categories. After initial ranking, both rural and urban...
Ranking and Mapping the Contributions by Overseas Chinese Strategy Scholars
DEFF Research Database (Denmark)
Li, Weiwen; Li, Peter Ping; Shu, Cheng
2015-01-01
The authors comment on an article by H. Jiao and colleagues regarding the development of a ranking of overseas Chinese strategy scholars in terms of their contributions to strategy research. Topics include the selection of 24 business journals ranked by the University of Texas at Dallas for their research; identifying authors who had published articles in periodicals such as "Management and Organization Review"; and the development of a coding protocol and discussion of the coding procedure.
Ranking Specific Sets of Objects.
Maly, Jan; Woltran, Stefan
2017-01-01
Ranking sets of objects based on an order between the single elements has been thoroughly studied in the literature. In particular, it has been shown that it is in general impossible to find a total ranking on the whole power set of objects that jointly satisfies properties such as dominance and independence. However, in many applications certain elements from the entire power set might not be required and can be neglected in the ranking process. For instance, certain sets might be ruled out due to hard constraints or because they do not satisfy some background theory. In this paper, we treat the computational problem of whether an order on a given subset of the power set of elements satisfying different variants of dominance and independence can be found, given a ranking on the elements. We show that this problem is tractable for partial rankings and NP-complete for total rankings.
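One variant of the dominance property can be checked mechanically, as in this sketch: if a set A is obtained from B by adding an element better (in the element order) than everything in B, then A should be ranked strictly above B. The exact property definitions in the paper may differ, and the element order and set family here are invented.

```python
def better(x, y, elem_rank):
    """True if x precedes y in the element ranking (best first)."""
    return elem_rank.index(x) < elem_rank.index(y)

def violates_dominance(set_order, elem_rank):
    """set_order: list of frozensets, best first. Return a violating pair
    (A, B) where A = B + {better element} but A is not ranked above B."""
    pos = {s: i for i, s in enumerate(set_order)}
    for b in set_order:
        for x in elem_rank:
            if x in b:
                continue
            a = b | {x}
            if a in pos and all(better(x, y, elem_rank) for y in b):
                if pos[a] >= pos[b]:    # A must come strictly before B
                    return a, b
    return None

elems = ["gold", "silver", "bronze"]          # element ranking, best to worst
order = [frozenset({"gold", "silver"}), frozenset({"silver"}),
         frozenset({"bronze"})]
assert violates_dominance(order, elems) is None
```

The paper's question is the converse and harder one: given only the element ranking and a family of sets, does any consistent order exist at all.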
Secure-Network-Coding-Based File Sharing via Device-to-Device Communication
Directory of Open Access Journals (Sweden)
Lei Wang
2017-01-01
Full Text Available In order to increase the efficiency and security of file sharing in next-generation networks, this paper proposes a large-scale file sharing scheme based on secure network coding via device-to-device (D2D) communication. In our scheme, when a user needs to share data with others in the same area, the source node and all the intermediate nodes need to perform a secure network coding operation before forwarding the received data. This process continues until all the mobile devices in the network successfully recover the original file. The experimental results show that secure network coding is very feasible and suitable for such file sharing. Moreover, the sharing efficiency and security outperform those of a traditional replication-based sharing scheme.
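The plain (non-secure) network-coding core that such schemes build on can be sketched over GF(2): nodes forward random XOR combinations of file blocks, and a receiver decodes by Gaussian elimination once the received coefficient vectors reach full rank. The security layer of the paper's scheme is not modeled; block contents and sizes are hypothetical.

```python
import random

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks, rng):
    """One coded packet: (GF(2) coefficient vector, XOR of the chosen blocks)."""
    coeffs = [0] * len(blocks)
    while not any(coeffs):
        coeffs = [rng.randint(0, 1) for _ in blocks]
    payload = bytes(len(blocks[0]))
    for c, blk in zip(coeffs, blocks):
        if c:
            payload = xor(payload, blk)
    return coeffs, payload

def decode(packets, n):
    """Gauss-Jordan elimination over GF(2); None if packets lack full rank."""
    rows = [(list(c), p) for c, p in packets]
    for col in range(n):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           xor(rows[r][1], rows[col][1]))
    return [rows[i][1] for i in range(n)]

rng = random.Random(7)
blocks = [b"AAAA", b"BBBB", b"CCCC"]
packets, recovered = [], None
while recovered is None:        # collect coded packets until rank is full
    packets.append(encode(blocks, rng))
    recovered = decode(packets, 3)
assert recovered == blocks
```

Production schemes work over larger fields such as GF(256) for better rank behavior; GF(2) keeps the arithmetic down to XOR for illustration.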
Explicit TE/TM scheme for particle beam simulations
International Nuclear Information System (INIS)
Dohlus, M.; Zagorodnov, I.
2008-10-01
In this paper we propose an explicit two-level conservative scheme based on a TE/TM-like splitting of the field components in time. Its dispersion properties are adjusted to accelerator problems. It is simpler and faster than the implicit version. It does not have dispersion in the longitudinal direction, and the dispersion properties in the transversal plane are improved. The explicit character of the new scheme allows a uniformly stable conformal method without iterations, and the scheme can be parallelized easily. It assures energy and charge conservation. A version of this explicit scheme for rotationally symmetric structures is free from the progressive time-step reduction for higher-order azimuthal modes that takes place for Yee's explicit method used in the most popular electrodynamics codes. (orig.)
Selecting registration schemes in case of interstitial lung disease follow-up in CT
International Nuclear Information System (INIS)
Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros; Kazantzi, Alexandra; Kalogeropoulou, Christina; Pratikakis, Ioannis; Costaridou, Lena
2015-01-01
Purpose: Primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, evaluation methodology is based on distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the
Code cases for implementing risk-based inservice testing in the ASME OM code
International Nuclear Information System (INIS)
Rowley, C.W.
1996-01-01
Historically inservice testing has been reasonably effective, but quite costly. Recent applications of plant PRAs to the scope of the IST program have demonstrated that of the 30 pumps and 500 valves in the typical plant IST program, less than half of the pumps and ten percent of the valves are risk significant. The way the ASME plans to tackle this overly-conservative scope for IST components is to use the PRA and plant expert panels to create a two tier IST component categorization scheme. The PRA provides the quantitative risk information and the plant expert panel blends the quantitative and deterministic information to place the IST component into one of two categories: More Safety Significant Component (MSSC) or Less Safety Significant Component (LSSC). With all the pumps and valves in the IST program placed in MSSC or LSSC categories, two different testing strategies will be applied. The testing strategies will be unique for the type of component, such as centrifugal pump, positive displacement pump, MOV, AOV, SOV, SRV, PORV, HOV, CV, and MV. A series of OM Code Cases are being developed to capture this process for a plant to use. One Code Case will be for Component Importance Ranking. The remaining Code Cases will develop the MSSC and LSSC testing strategy for type of component. These Code Cases are planned for publication in early 1997. Later, after some industry application of the Code Cases, the alternative Code Case requirements will gravitate to the ASME OM Code as appendices
Bootstrap Determination of the Co-Integration Rank in Heteroskedastic VAR Models
DEFF Research Database (Denmark)
Cavaliere, G.; Rahbek, Anders; Taylor, A.M.R.
2014-01-01
In a recent paper Cavaliere et al. (2012) develop bootstrap implementations of the (pseudo-) likelihood ratio (PLR) co-integration rank test and associated sequential rank determination procedure of Johansen (1996). The bootstrap samples are constructed using the restricted parameter estimates...... of the underlying vector autoregressive (VAR) model which obtain under the reduced rank null hypothesis. They propose methods based on an independent and identically distributed (i.i.d.) bootstrap resampling scheme and establish the validity of their proposed bootstrap procedures in the context of a co......-integrated VAR model with i.i.d. innovations. In this paper we investigate the properties of their bootstrap procedures, together with analogous procedures based on a wild bootstrap resampling scheme, when time-varying behavior is present in either the conditional or unconditional variance of the innovations. We...
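The wild bootstrap resampling scheme mentioned above can be sketched minimally: each residual is multiplied by an external i.i.d. Rademacher draw, so signs are randomized but magnitudes, and hence any heteroskedastic pattern, are preserved. A hedged illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def wild_bootstrap(residuals, n_samples):
    """Wild bootstrap: multiply each residual by an external i.i.d.
    Rademacher draw. Signs are randomized but magnitudes are kept, so
    any (conditional) heteroskedasticity pattern survives resampling."""
    w = rng.choice([-1.0, 1.0], size=(n_samples, len(residuals)))
    return residuals[None, :] * w

res = np.array([0.5, -1.2, 0.3, 2.0])   # hypothetical VAR residuals
boot = wild_bootstrap(res, 1000)
```

Contrast with the i.i.d. bootstrap, which resamples residuals with replacement and therefore destroys any time-varying variance profile.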
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin
2015-09-01
In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency in the variable rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at the bit error rate of 1×10-3, the experimental results show that the short block length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for a code rate of 62.5%, 75% and 87.5%, respectively.
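The synchronization method exploits the defining property of a Golay complementary pair: the two aperiodic autocorrelations sum to a delta at lag zero. A minimal sketch using the standard recursive construction (illustrative, not the paper's training-sequence design):

```python
import numpy as np

def golay_pair(n):
    """Standard recursive construction of a binary Golay complementary
    pair of length 2**n: (a, b) -> (a|b, a|-b)."""
    a = b = np.array([1.0])
    for _ in range(n):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def aperiodic_acf(x):
    """Aperiodic autocorrelation at non-negative lags."""
    return np.correlate(x, x, mode="full")[len(x) - 1:]

a, b = golay_pair(5)                      # length-32 pair
s = aperiodic_acf(a) + aperiodic_acf(b)   # complementary: 2N at lag 0, else 0
```

The zero sidelobes of the summed autocorrelation are what allow the start point of the OFDM symbol to be located sharply by correlating against the received signal.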
Does Kaniso activate CASINO?: input coding schemes and phonology in visual-word recognition.
Acha, Joana; Perea, Manuel
2010-01-01
Most recent input coding schemes in visual-word recognition assume that letter position coding is orthographic rather than phonological in nature (e.g., SOLAR, open-bigram, SERIOL, and overlap). This assumption has been drawn - in part - by the fact that the transposed-letter effect (e.g., caniso activates CASINO) seems to be (mostly) insensitive to phonological manipulations (e.g., Perea & Carreiras, 2006, 2008; Perea & Pérez, 2009). However, one could argue that the lack of a phonological effect in prior research was due to the fact that the manipulation always occurred in internal letter positions - note that phonological effects tend to be stronger for the initial syllable (Carreiras, Ferrand, Grainger, & Perea, 2005). To reexamine this issue, we conducted a masked priming lexical decision experiment in which we compared the priming effect for transposed-letter pairs (e.g., caniso-CASINO vs. caviro-CASINO) and for pseudohomophone transposed-letter pairs (kaniso-CASINO vs. kaviro-CASINO). Results showed a transposed-letter priming effect for the correctly spelled pairs, but not for the pseudohomophone pairs. This is consistent with the view that letter position coding is (primarily) orthographic in nature.
Ranking species in mutualistic networks
Domínguez-García, Virginia; Muñoz, Miguel A.
2015-02-01
Understanding the architectural subtleties of ecological networks, believed to confer them enhanced stability and robustness, is a subject of utmost relevance. Mutualistic interactions have been profusely studied and their corresponding bipartite networks, such as plant-pollinator networks, have been reported to exhibit a characteristic "nested" structure. Assessing the importance of any given species in mutualistic networks is a key task when evaluating extinction risks and possible cascade effects. Inspired by a recently introduced algorithm - similar in spirit to Google's PageRank but with a built-in non-linearity - here we propose a method which, by exploiting their nested architecture, allows us to derive a sound ranking of species importance in mutualistic networks. This method clearly outperforms other existing ranking schemes and can become very useful for ecosystem management and biodiversity preservation, where decisions on what aspects of ecosystems to explicitly protect need to be made.
Multiple LDPC decoding for distributed source coding and video coding
DEFF Research Database (Denmark)
Forchhammer, Søren; Luong, Huynh Van; Huang, Xin
2011-01-01
Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate...... (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...
Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding
DEFF Research Database (Denmark)
Hansen, Johan Peder
We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes....
A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage scheme.
Pongpirul, Krit; Walker, Damian G; Winch, Peter J; Robinson, Courtland
2011-04-08
In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which the quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and how the diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff and other related internal dynamics in Thai hospitals that affect the quality of data submitted for inpatient care reimbursement. The research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), locations (urban/rural), and types (public/private). Hospital coding practice has structural and process components. While the structural component includes human resources, hospital committees, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process, which comprises seven major steps, each of which involves different hospital staff: 1) Discharge Summarization, 2) Completeness Checking, 3) Diagnosis and Procedure Coding, 4) Code Checking, 5) Relative Weight Challenging, 6) Coding Report, and 7) Internal Audit. The hospital coding practice can be affected by at least five main factors: 1) Internal Dynamics, 2) Management Context, 3) Financial Dependency, 4) Resource and Capacity, and 5) External Factors. Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and varies greatly across hospitals as a result of five main factors.
Polar Coding for the Large Hadron Collider: Challenges in Code Concatenation
AUTHOR|(CDS)2238544; Podzorny, Tomasz; Uythoven, Jan
2018-01-01
In this work, we present a concatenated repetition-polar coding scheme that is aimed at applications requiring highly unbalanced unequal bit-error protection, such as the Beam Interlock System of the Large Hadron Collider at CERN. Even though this concatenation scheme is simple, it reveals significant challenges that may be encountered when designing a concatenated scheme that uses a polar code as an inner code, such as error correlation and unusual decision log-likelihood ratio distributions. We explain and analyze these challenges and we propose two ways to overcome them.
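The inner polar code is built on Arıkan's transform x = uF^⊗n with F = [[1,0],[1,1]] over GF(2); a hedged sketch of the transform itself (not the Beam Interlock System implementation):

```python
import numpy as np

def polar_transform(u):
    """x = u * F^(tensor n) over GF(2), F = [[1, 0], [1, 1]] (row-vector
    convention); len(u) must be a power of two. Over GF(2) the transform
    is an involution: applying it twice returns the input."""
    x = np.array(u, dtype=int) % 2
    n, step = len(x), 1
    while step < n:
        for i in range(0, n, 2 * step):
            # butterfly stage: XOR the second half of each block into the first
            x[i:i + step] = (x[i:i + step] + x[i + step:i + 2 * step]) % 2
        step *= 2
    return x

u = np.array([1, 0, 1, 1, 0, 0, 1, 0])
codeword = polar_transform(u)
```

In a real polar code the low-reliability positions of u are frozen to zero; the outer repetition code would then protect the handful of bits requiring the strongest error protection.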
Second-order splitting schemes for a class of reactive systems
International Nuclear Information System (INIS)
Ren Zhuyin; Pope, Stephen B.
2008-01-01
We consider the numerical time integration of a class of reaction-transport systems that are described by a set of ordinary differential equations for primary variables. In the governing equations, the terms involved may require the knowledge of secondary variables, which are functions of the primary variables. Specifically, we consider the case where, given the primary variables, the evaluation of the secondary variables is computationally expensive. To solve this class of reaction-transport equations, we develop and demonstrate several computationally efficient splitting schemes, wherein the portions of the governing equations containing chemical reaction terms are separated from those parts containing the transport terms. A computationally efficient solution to the transport sub-step is achieved through the use of linearization or predictor-corrector methods. The splitting schemes are applied to the reactive flow in a continuously stirred tank reactor (CSTR) with the Davis-Skodjie reaction model, to CO + H2 oxidation in a CSTR with detailed chemical kinetics, and to a reaction-diffusion system with an extension of the Oregonator model of the Belousov-Zhabotinsky reaction. As demonstrated in the test problems, the proposed splitting schemes, which yield efficient solutions to the transport sub-step, achieve second-order accuracy in time.
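The splitting idea can be illustrated with the classic second-order Strang scheme: a half reaction step, a full transport step, then another half reaction step. The toy scalar problem below assumes exact sub-solves and is not taken from the paper:

```python
import numpy as np

def strang_step(y, dt, reaction_step, transport_step):
    """One Strang-splitting step: half reaction, full transport, half
    reaction. Second order in dt when each sub-step is at least second
    order."""
    y = reaction_step(y, 0.5 * dt)
    y = transport_step(y, dt)
    return reaction_step(y, 0.5 * dt)

# Toy scalar problem dy/dt = -(1 + 2) y, split into a "reaction" part
# (rate -1) and a "transport" part (rate -2), each solved exactly.
react = lambda y, dt: y * np.exp(-1.0 * dt)
trans = lambda y, dt: y * np.exp(-2.0 * dt)

y, dt = 1.0, 0.1
for _ in range(10):
    y = strang_step(y, dt, react, trans)
# for commuting scalar operators the splitting is exact: y(1) = exp(-3)
```

For non-commuting operators the symmetric ordering is what lifts the accuracy from first order (Lie splitting) to second order.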
Energy Technology Data Exchange (ETDEWEB)
Muramatsu, Toshiharu [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center
1998-08-01
This report explains the numerical methods and the set-up of input data for DINUS-3 (Direct Numerical Simulation using a 3rd-order upwind scheme), a single-phase multi-dimensional thermohydraulics direct numerical simulation code developed at the Power Reactor and Nuclear Fuel Development Corporation (PNC) to simulate non-stationary temperature fluctuation phenomena related to thermal striping. The DINUS-3 code is characterized by the use of a third-order upwind scheme for the convection terms in the instantaneous Navier-Stokes and energy equations, and an adaptive control system based on fuzzy theory to control time step sizes. The author expects this report to be useful for the evaluation of various non-stationary thermohydraulic phenomena in reactor applications. (author)
Seenivasagam, V; Velumani, R
2013-01-01
Healthcare institutions adapt cloud based archiving of medical images and patient records to share them efficiently. Controlled access to these records and authentication of images must be enforced to mitigate fraudulent activities and medical errors. This paper presents a zero-watermarking scheme implemented in the composite Contourlet Transform (CT)-Singular Value Decomposition (SVD) domain for unambiguous authentication of medical images. Further, a framework is proposed for accessing patient records based on the watermarking scheme. The patient identification details and a link to patient data encoded into a Quick Response (QR) code serves as the watermark. In the proposed scheme, the medical image is not subjected to degradations due to watermarking. Patient authentication and authorized access to patient data are realized on combining a Secret Share with the Master Share constructed from invariant features of the medical image. The Hu's invariant image moments are exploited in creating the Master Share. The proposed system is evaluated with Checkmark software and is found to be robust to both geometric and non geometric attacks.
Coding for Two Dimensional Constrained Fields
DEFF Research Database (Denmark)
Laursen, Torben Vaarbye
2006-01-01
a first order model to model higher order constraints by the use of an alphabet extension. We present an iterative method that based on a set of conditional probabilities can help in choosing the large numbers of parameters of the model in order to obtain a stationary model. Explicit results are given...... for the No Isolated Bits constraint. Finally we present a variation of the encoding scheme of bit-stuffing that is applicable to the class of checkerboard constrained fields. It is possible to calculate the entropy of the coding scheme thus obtaining lower bounds on the entropy of the fields considered. These lower...... bounds are very tight for the Run-Length limited fields. Explicit bounds are given for the diamond constrained field as well....
Tang, Xin; Feng, Guo-can; Li, Xiao-xin; Cai, Jia-xin
2015-01-01
Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contribute to explain the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of the corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our method achieves the
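The low-rank recovery step rests on singular-value thresholding, the proximal operator of the nuclear norm. A hedged sketch of that operator on synthetic data (an illustration, not the LRSE+SC implementation):

```python
import numpy as np

def svt(D, tau):
    """Singular-value thresholding: shrink the singular values of D by
    tau and reconstruct. This is the proximal operator of the nuclear
    norm, the basic step in low-rank matrix recovery."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

rng = np.random.default_rng(1)
L0 = np.outer(rng.normal(size=20), rng.normal(size=15))  # rank-1 "class dictionary"
D = L0 + 0.01 * rng.normal(size=L0.shape)                # plus small perturbations
L = svt(D, tau=1.0)
# thresholding suppresses the small noise singular values, leaving rank 1
```

In the full method this operator alternates with entrywise soft-thresholding, which isolates the sparse error matrix (illumination and occlusion effects) from the low-rank class dictionary.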
Linear Subspace Ranking Hashing for Cross-Modal Retrieval.
Li, Kai; Qi, Guo-Jun; Ye, Jun; Hua, Kien A
2017-09-01
Hashing has attracted a great deal of research in recent years due to its effectiveness for the retrieval and indexing of large-scale high-dimensional multimedia data. In this paper, we propose a novel ranking-based hashing framework that maps data from different modalities into a common Hamming space where the cross-modal similarity can be measured using Hamming distance. Unlike existing cross-modal hashing algorithms where the learned hash functions are binary space partitioning functions, such as the sign and threshold function, the proposed hashing scheme takes advantage of a new class of hash functions closely related to rank correlation measures which are known to be scale-invariant, numerically stable, and highly nonlinear. Specifically, we jointly learn two groups of linear subspaces, one for each modality, so that features' ranking orders in different linear subspaces maximally preserve the cross-modal similarities. We show that the ranking-based hash function has a natural probabilistic approximation which transforms the original highly discontinuous optimization problem into one that can be efficiently solved using simple gradient descent algorithms. The proposed hashing framework is also flexible in the sense that the optimization procedures are not tied to any specific form of loss function, which is typical for existing cross-modal hashing methods; rather, we can flexibly accommodate different loss functions with minimal changes to the learning steps. We demonstrate through extensive experiments on four widely-used real-world multimodal datasets that the proposed cross-modal hashing method can achieve competitive performance against several state-of-the-art methods with only moderate training and testing time.
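The scale invariance of ranking-based hash functions can be illustrated with a toy code that records, for each subspace, the index of the maximal projection; the random subspaces below stand in for the learned ones:

```python
import numpy as np

rng = np.random.default_rng(42)

def ranking_hash(x, subspaces):
    """For each linear subspace (rows of W = projection directions),
    emit the index of the largest projection. The code depends only on
    the ranking order of projections, hence it is scale-invariant."""
    return tuple(int(np.argmax(W @ x)) for W in subspaces)

subspaces = [rng.normal(size=(4, 16)) for _ in range(8)]  # 8 random 4-direction subspaces
x = rng.normal(size=16)                                   # a feature vector
```

Scaling the input by any positive constant scales every projection equally, so the argmax per subspace, and therefore the hash code, is unchanged.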
A Reconstruction Approach to High-Order Schemes Including Discontinuous Galerkin for Diffusion
Huynh, H. T.
2009-01-01
We introduce a new approach to high-order accuracy for the numerical solution of diffusion problems by solving the equations in differential form using a reconstruction technique. The approach has the advantages of simplicity and economy. It results in several new high-order methods including a simplified version of discontinuous Galerkin (DG). It also leads to new definitions of common value and common gradient quantities at each interface shared by the two adjacent cells. In addition, the new approach clarifies the relations among the various choices of new and existing common quantities. Fourier stability and accuracy analyses are carried out for the resulting schemes. Extensions to the case of quadrilateral meshes are obtained via tensor products. For the two-point boundary value problem (steady state), it is shown that these schemes, which include most popular DG methods, yield exact common interface quantities as well as exact cell average solutions for nearly all cases.
Coding Transparency in Object-Based Video
DEFF Research Database (Denmark)
Aghito, Shankar Manuel; Forchhammer, Søren
2006-01-01
A novel algorithm for coding gray level alpha planes in object-based video is presented. The scheme is based on segmentation in multiple layers. Different coders are specifically designed for each layer. In order to reduce the bit rate, cross-layer redundancies as well as temporal correlation are...
Phenomena identification and ranking tables (PIRT) for LBLOCA
International Nuclear Information System (INIS)
Shaw, R.A.; Dimenna, R.A.; Larson, T.K.; Wilson, G.E.
1988-01-01
The US Nuclear Regulatory Commission is sponsoring a program to provide validated reactor safety computer codes with quantified uncertainties. The intent is to quantify the accuracy of the codes for use in best estimate licensing applications. One of the tasks required to complete this program involves the identification and ranking of thermal-hydraulic phenomena that occur during particular accidents. This paper provides detailed tables of phenomena and importance ranks for a PWR LBLOCA. The phenomena were identified and ranked according to perceived impact on peak cladding temperature. Two approaches were used to complete this task. First, a panel of experts identified the physical processes considered to be most important during LBLOCA. A second team of experienced analysts then, in parallel, assembled complete tables of all plausible LBLOCA phenomena, regardless of perceived importance. Each phenomenon was then ranked in importance against every other phenomenon associated with a given component. The results were placed in matrix format and solved for the principal eigenvector. The results as determined by each method are presented in this report
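The principal-eigenvector solve can be sketched with power iteration on a pairwise-comparison matrix; the matrix below is hypothetical, purely for illustration:

```python
import numpy as np

def principal_eigenvector(A, n_iter=200):
    """Power iteration on a positive matrix: converges to the Perron
    (principal) eigenvector, normalized here to sum to one."""
    v = np.ones(A.shape[0])
    for _ in range(n_iter):
        v = A @ v
        v /= v.sum()
    return v

# hypothetical pairwise-comparison matrix for three phenomena:
# A[i, j] = judged importance of phenomenon i relative to phenomenon j
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
w = principal_eigenvector(A)   # ranking weights for the three phenomena
```

The resulting weight vector orders the phenomena by perceived impact, which is how a matrix of pairwise judgments is collapsed into a single importance ranking.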
Bootstrap Determination of the Co-integration Rank in Heteroskedastic VAR Models
DEFF Research Database (Denmark)
Cavaliere, Giuseppe; Rahbek, Anders; Taylor, A.M.Robert
In a recent paper Cavaliere et al. (2012) develop bootstrap implementations of the (pseudo-) likelihood ratio [PLR] co-integration rank test and associated sequential rank determination procedure of Johansen (1996). The bootstrap samples are constructed using the restricted parameter estimates...... of the underlying VAR model which obtain under the reduced rank null hypothesis. They propose methods based on an i.i.d. bootstrap re-sampling scheme and establish the validity of their proposed bootstrap procedures in the context of a co-integrated VAR model with i.i.d. innovations. In this paper we investigate...... the properties of their bootstrap procedures, together with analogous procedures based on a wild bootstrap re-sampling scheme, when time-varying behaviour is present in either the conditional or unconditional variance of the innovations. We show that the bootstrap PLR tests are asymptotically correctly sized and...
Sailaukhanuly, Yerbolat; Zhakupbekova, Arai; Amutova, Farida; Carlsen, Lars
2013-01-01
Knowledge of the environmental behavior of chemicals is a fundamental part of the risk assessment process. The present paper discusses various methods of ranking a series of persistent organic pollutants (POPs) according to their persistence, bioaccumulation and toxicity (PBT) characteristics. Traditionally, ranking has been done as an absolute (total) ranking applying various multicriteria data analysis methods such as simple additive ranking (SAR) or rankings based on various utility functions (UFs). An attractive alternative to these ranking methodologies appears to be partial order ranking (POR). The present paper compares different ranking methods, i.e., SAR, UF and POR. Significant discrepancies between the rankings are noted, and it is concluded that partial order ranking, as a method without any pre-assumptions concerning possible relations between the single parameters, appears to be the most attractive ranking methodology. In addition to the initial ranking, partial order methodology offers a wide variety of analytical tools to elucidate the interplay between the objects to be ranked and the ranking parameters. The present study includes an analysis of the relative importance of the single P, B and T parameters. Copyright © 2012 Elsevier Ltd. All rights reserved.
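Partial order ranking needs no aggregation of criteria: object x is placed above y only when x scores at least as high on every criterion and strictly higher on at least one. A minimal sketch with hypothetical P, B, T scores:

```python
def dominates(a, b):
    """True if a >= b in every criterion and a > b in at least one
    (the product partial order; here higher scores mean more hazardous)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# hypothetical (P, B, T) scores for three chemicals
chems = {"X": (3, 2, 5), "Y": (2, 2, 4), "Z": (4, 1, 1)}
pairs = [(u, v) for u in chems for v in chems
         if u != v and dominates(chems[u], chems[v])]
# X is ranked above Y; Z is incomparable with both, so no total order exists
```

The incomparable pairs are exactly the information that SAR or utility-function rankings discard by forcing a total order through weighting assumptions.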
A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme
Ghoman, Satyajit S.
The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3 DOE), in context of aircraft wing optimization. M3 DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) combination of a series of structural and aerodynamic analyses. The modularity of M3 DOE allows it to be a part of other inclusive optimization frameworks. The M3 DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3 DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3 DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using evolutionary algorithm technique of
Secure-Network-Coding-Based File Sharing via Device-to-Device Communication
Wang, Lei; Wang, Qing
2017-01-01
In order to increase the efficiency and security of file sharing in the next-generation networks, this paper proposes a large scale file sharing scheme based on secure network coding via device-to-device (D2D) communication. In our scheme, when a user needs to share data with others in the same area, the source node and all the intermediate nodes need to perform secure network coding operation before forwarding the received data. This process continues until all the mobile devices in the netw...
A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage Scheme
Directory of Open Access Journals (Sweden)
Winch Peter J
2011-04-01
Full Text Available Abstract Background In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which the quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and how the diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff and other related internal dynamics in Thai hospitals that affect the quality of data submitted for inpatient care reimbursement. Methods Research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), locations (urban/rural), and types (public/private). Results Hospital coding practice has structural and process components. While the structural component includes human resources, hospital committees, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process, which comprises seven major steps, each of which involves different hospital staff: (1) Discharge Summarization, (2) Completeness Checking, (3) Diagnosis and Procedure Coding, (4) Code Checking, (5) Relative Weight Challenging, (6) Coding Report, and (7) Internal Audit. The hospital coding practice can be affected by at least five main factors: (1) Internal Dynamics, (2) Management Context, (3) Financial Dependency, (4) Resource and Capacity, and (5) External Factors. Conclusions Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and varies greatly across hospitals as a result of five main factors.
Who's bigger? where historical figures really rank
Skiena, Steven
2014-01-01
Is Hitler bigger than Napoleon? Washington bigger than Lincoln? Picasso bigger than Einstein? Quantitative analysts are rapidly finding homes in social and cultural domains, from finance to politics. What about history? In this fascinating book, Steve Skiena and Charles Ward bring quantitative analysis to bear on ranking and comparing historical reputations. They evaluate each person by aggregating the traces of millions of opinions, just as Google ranks webpages. The book includes a technical discussion for readers interested in the details of the methods, but no mathematical or computational background is necessary to understand the rankings or conclusions. Along the way, the authors present the rankings of more than one thousand of history's most significant people in science, politics, entertainment, and all areas of human endeavor. Anyone interested in history or biography can see where their favorite figures place in the grand scheme of things.
DEFF Research Database (Denmark)
Olesen, Frede; Vedsted, Peter; Nielsen, Jørgen Nørskov
1996-01-01
OBJECTIVE: To demonstrate whether standardization of practice populations by age and sex changes the internal prescription ranking order of a group of practices. DESIGN: Data on the prescribing of cardiovascular drugs in a group of practices were obtained from a county-based database. Information...... on the age, sex, and numbers of patients per practice was also obtained. The direct standardization method was used to adjust practice populations for age and sex. SETTING: The town of Randers, Aarhus County, Denmark. SUBJECTS: 35 practices, 41 GPs. MAIN OUTCOME MEASURES: Ranking of the 35 practices...... of the practices. Only four practices did not change ranking position, while four moved more than ten places. The slope between highest and lowest ranked practice did not diminish after standardization. CONCLUSION: Care should be taken when comparing peer prescribing patterns from crude utilization data, and we...
Code portability and data management considerations in the SAS3D LMFBR accident-analysis code
International Nuclear Information System (INIS)
Dunn, F.E.
1981-01-01
The SAS3D code was produced from a predecessor in order to reduce or eliminate interrelated problems in the areas of code portability, the large size of the code, inflexibility in the use of memory and the size of cases that can be run, code maintenance, and running speed. Many conventional solutions, such as variable dimensioning, disk storage, virtual memory, and existing code-maintenance utilities were not feasible or did not help in this case. A new data management scheme was developed, coding standards and procedures were adopted, special machine-dependent routines were written, and a portable source code processing code was written. The resulting code is quite portable, quite flexible in the use of memory and the size of cases that can be run, much easier to maintain, and faster running. SAS3D is still a large, long running code that only runs well if sufficient main memory is available
Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei
2009-03-01
Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.
FISH: A THREE-DIMENSIONAL PARALLEL MAGNETOHYDRODYNAMICS CODE FOR ASTROPHYSICAL APPLICATIONS
International Nuclear Information System (INIS)
Kaeppeli, R.; Whitehouse, S. C.; Scheidegger, S.; Liebendoerfer, M.; Pen, U.-L.
2011-01-01
FISH is a fast and simple ideal magnetohydrodynamics code that scales to ∼10,000 processes for a Cartesian computational domain of ∼1000³ cells. The simplicity of FISH has been achieved by the rigorous application of the operator splitting technique, while second-order accuracy is maintained by the symmetric ordering of the operators. Between directional sweeps, the three-dimensional data are rotated in memory so that the sweep is always performed in a cache-efficient way along the direction of contiguous memory. Hence, the code only requires a one-dimensional description of the conservation equations to be solved. This approach also enables an elegant novel parallelization of the code that is based on persistent communications with MPI for cubic domain decomposition on machines with distributed memory. This scheme is then combined with an additional OpenMP parallelization of different sweeps that can take advantage of clusters of shared memory. We document the detailed implementation of a second-order total variation diminishing advection scheme based on flux reconstruction. The magnetic fields are evolved by a constrained transport scheme. We show that the subtraction of a simple estimate of the hydrostatic gradient from the total gradients can significantly reduce the dissipation of the advection scheme in simulations of gravitationally bound hydrostatic objects. Through its simplicity and efficiency, FISH is as well suited for hydrodynamics classes as for large-scale astrophysical simulations on high-performance computer clusters. In preparation for the release of a public version, we demonstrate the performance of FISH in a suite of astrophysically orientated test cases.
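The cache-friendly sweep rotation described above can be sketched with NumPy; the 1-D update here is a toy first-order upwind advection step standing in for the actual TVD/MHD sweep, and the array rotation mimics FISH's in-memory data rotation between directional sweeps.

```python
import numpy as np

# Sketch of the FISH sweep-rotation idea: apply a 1-D update along the
# contiguous (last) axis, then rotate the 3-D array in memory so the next
# directional sweep is also contiguous. The solver only ever needs the
# one-dimensional update.

def sweep_last_axis(u, c=0.5):
    # toy periodic first-order upwind advection step along the last axis
    return u - c * (u - np.roll(u, 1, axis=-1))

def xyz_sweeps(u):
    for _ in range(3):                                   # x, y, z sweeps
        u = sweep_last_axis(u)
        # rotate axes and re-pack contiguously in memory
        u = np.ascontiguousarray(np.moveaxis(u, 0, -1))
    return u                                             # axes back in x,y,z order

u = np.random.rand(8, 8, 8)
v = xyz_sweeps(u)
```

Three successive axis rotations return the data to its original orientation, so the caller never sees the rearrangement; the periodic upwind step also conserves the total of `u`, which the test below checks.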
D'Alessandro, Valerio; Binci, Lorenzo; Montelpare, Sergio; Ricci, Renato
2018-01-01
Open-source CFD codes provide suitable environments for implementing and testing low-dissipative algorithms typically used to simulate turbulence. In this research work we developed CFD solvers for incompressible flows based on high-order explicit and diagonally implicit Runge-Kutta (RK) schemes for time integration. In particular, an iterated PISO-like procedure based on Rhie-Chow correction was used to handle pressure-velocity coupling within each implicit RK stage. For the explicit approach, a projected scheme was used to avoid the "checker-board" effect. The above-mentioned approaches were also extended to flow problems involving heat transfer. It is worth noting that the numerical technology available in the OpenFOAM library was used for space discretization. In this work, we additionally explore the reliability and effectiveness of the proposed implementations by computing several unsteady flow benchmarks; we also show that the numerical diffusion due to the time integration approach is completely canceled using the solution techniques proposed here.
Kim, Daehee; Kim, Dongwan; An, Sunshin
2016-07-09
Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption.
Block models and personalized PageRank.
Kloumann, Isabel M; Ugander, Johan; Kleinberg, Jon
2017-01-03
Methods for ranking the importance of nodes in a network have a rich history in machine learning and across domains that analyze structured data. Recent work has evaluated these methods through the "seed set expansion problem": given a subset [Formula: see text] of nodes from a community of interest in an underlying graph, can we reliably identify the rest of the community? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate in the space of "landing probabilities" of a random walk rooted at the seed set, ranking nodes according to weighted sums of landing probabilities of different length walks. Both schemes, however, lack an a priori relationship to the seed set objective. In this work, we develop a principled framework for evaluating ranking methods by studying seed set expansion applied to the stochastic block model. We derive the optimal gradient for separating the landing probabilities of two classes in a stochastic block model and find, surprisingly, that under reasonable assumptions the gradient is asymptotically equivalent to personalized PageRank for a specific choice of the PageRank parameter [Formula: see text] that depends on the block model parameters. This connection provides a formal motivation for the success of personalized PageRank in seed set expansion and node ranking generally. We use this connection to propose more advanced techniques incorporating higher moments of landing probabilities; our advanced methods exhibit greatly improved performance, despite being simple linear classification rules, and are even competitive with belief propagation.
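The relationship between landing probabilities and personalized PageRank can be sketched numerically: PPR is a geometric weighting of the landing probabilities of walks of increasing length. The graph, seed set, and teleport parameter below are illustrative, not taken from the paper.

```python
import numpy as np

# Sketch: landing probabilities of a random walk rooted at a seed set,
# and personalized PageRank (PPR) as their geometric sum.

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy undirected graph
P = A / A.sum(axis=1, keepdims=True)        # row-stochastic walk matrix

seed = np.array([1.0, 0, 0, 0])             # walk rooted at node 0

# landing probabilities: distribution of a length-k walk from the seed
landing = [seed @ np.linalg.matrix_power(P, k) for k in range(6)]

# PPR with teleport parameter alpha weights length-k landing
# probabilities by alpha**k (truncated here at 6 terms; the exact PPR
# solves ppr = (1 - alpha) * seed + alpha * ppr @ P)
alpha = 0.85
ppr = (1 - alpha) * sum(alpha**k * lp for k, lp in enumerate(landing))
```

Ranking nodes by `ppr` is thus one particular linear rule over landing probabilities; the paper's point is that the block-model-optimal rule is asymptotically of this form for a specific alpha.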
High Order Modulation Protograph Codes
Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)
2014-01-01
Digital communication coding methods for designing protograph-based bit-interleaved code modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
Directory of Open Access Journals (Sweden)
Vipin Balyan
2014-08-01
Full Text Available Orthogonal variable spreading factor codes are used in the downlink to maintain the orthogonality between different channels and are used to handle new calls arriving in the system. A period of operation leads to fragmentation of vacant codes. This leads to the code blocking problem. The assignment scheme proposed in this paper is not affected by fragmentation, as the fragmentation is generated by the scheme itself. In this scheme, the code tree is divided into groups whose capacity is fixed and whose numbers of members (codes) are variable. A group with the maximum number of busy members is used for assignment; this leads to fragmentation of busy groups around the code tree and compactness within a group. The proposed scheme is well evaluated and compared with other schemes using parameters like code blocking probability and call establishment delay. Through simulations it has been demonstrated that the proposed scheme not only adequately reduces code blocking probability, but also requires significantly less time before assignment to locate a vacant code for assignment, which makes it suitable for real-time calls.
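The code-blocking constraint that motivates such assignment schemes can be sketched on a small OVSF code tree: a code is assignable only if neither an ancestor nor a descendant is busy, so scattered busy leaves can block short codes even when enough total capacity is free. The tree depth and the busy codes below are hypothetical.

```python
# Sketch of OVSF code blocking in a binary code tree. Codes are labeled
# (spreading_factor, index); a code is orthogonal to all codes except its
# ancestors and descendants, so it is assignable only if those are free.

MAX_SF = 8  # leaves have spreading factor 8 (illustrative depth)

def ancestors(sf, idx):
    while sf > 1:
        sf, idx = sf // 2, idx // 2
        yield (sf, idx)

def descendants(sf, idx):
    if sf < MAX_SF:
        for child in ((2 * sf, 2 * idx), (2 * sf, 2 * idx + 1)):
            yield child
            yield from descendants(*child)

def assignable(code, busy):
    return (code not in busy
            and not any(a in busy for a in ancestors(*code))
            and not any(d in busy for d in descendants(*code)))

busy = {(8, 0), (8, 5)}           # two leaf codes already in use
print(assignable((4, 0), busy))   # False: descendant (8, 0) is busy
print(assignable((4, 1), busy))   # True: subtree (8, 2), (8, 3) is free
```

Here only two of eight leaves are busy, yet their placement already blocks half of the SF-4 codes; grouping busy codes compactly, as the proposed scheme does, keeps more of the tree assignable.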
Javan, Nastooh Taheri; Sabaei, Masoud; Dehghan, Mehdi
2018-01-01
Any properly designed network coding technique can result in increased throughput and reliability of multi-hop wireless networks by taking advantage of the broadcast nature of the wireless medium. In many inter-flow network coding schemes, nodes are encouraged to overhear neighbours' traffic in order to improve coding opportunities at the transmitter nodes. A study of these schemes reveals that some of the overheard packets are not useful for the coding operation and thus this forced overhearing increas...
Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions
Directory of Open Access Journals (Sweden)
Burr Alister
2009-01-01
Full Text Available Abstract This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and turbo code rates considered are and . The performances of both systems with high ( and low ( BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.
On fuzzy semantic similarity measure for DNA coding.
Ahmad, Muneer; Jung, Low Tang; Bhuiyan, Md Al-Amin
2016-02-01
A coding measure scheme numerically translates the DNA sequence to a time domain signal for protein coding regions identification. A number of coding measure schemes based on numerology, geometry, fixed mapping, statistical characteristics and chemical attributes of nucleotides have been proposed in recent decades. Such coding measure schemes lack the biologically meaningful aspects of nucleotide data and hence do not significantly discriminate coding regions from non-coding regions. This paper presents a novel fuzzy semantic similarity measure (FSSM) coding scheme centering on FSSM codons' clustering and genetic code context of nucleotides. Certain natural characteristics of nucleotides i.e. appearance as a unique combination of triplets, preserving special structure and occurrence, and ability to own and share density distributions in codons have been exploited in FSSM. The nucleotides' fuzzy behaviors, semantic similarities and defuzzification based on the center of gravity of nucleotides revealed a strong correlation between nucleotides in codons. The proposed FSSM coding scheme attains a significant enhancement in coding regions identification i.e. 36-133% as compared to other existing coding measure schemes tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
M. A. Zavodchikov
2012-01-01
Full Text Available In this paper we consider the Gieseker-Maruyama moduli scheme M := MP3(2; −1, 2, 0) of stable coherent torsion-free sheaves of rank 2 with Chern classes c1 = −1, c2 = 2, c3 = 0 on the 3-dimensional projective space P3. We define two sets of sheaves M1 and M2 in M and we prove that the closures of M1 and M2 in M are irreducible components of dimensions 15 and 19, respectively.
Zaghi, S.
2014-07-01
OFF, an open source (free software) code for performing fluid dynamics simulations, is presented. The aim of OFF is to solve, numerically, the unsteady (and steady) compressible Navier-Stokes equations of fluid dynamics by means of finite volume techniques: the research background is mainly focused on high-order (WENO) schemes for multi-fluids, multi-phase flows over complex geometries. To this purpose a highly modular, object-oriented application program interface (API) has been developed. In particular, the concepts of data encapsulation and inheritance available within Fortran language (from standard 2003) have been stressed in order to represent each fluid dynamics "entity" (e.g. the conservative variables of a finite volume, its geometry, etc…) by a single object so that a large variety of computational libraries can be easily (and efficiently) developed upon these objects. The main features of OFF can be summarized as follows: Programming Language: OFF is written in standard (compliant) Fortran 2003; its design is highly modular in order to enhance simplicity of use and maintenance without compromising the efficiency. Parallel Frameworks Supported: the development of OFF has also been targeted to maximize the computational efficiency: the code is designed to run on shared-memory multi-core workstations and distributed-memory clusters of shared-memory nodes (supercomputers); the code's parallelization is based on Open Multiprocessing (OpenMP) and Message Passing Interface (MPI) paradigms. Usability, Maintenance and Enhancement: in order to improve the usability, maintenance and enhancement of the code, the documentation has also been carefully taken into account; the documentation is built upon comprehensive comments placed directly into the source files (no external documentation files needed): these comments are parsed by means of the doxygen free software, producing high-quality html and latex documentation pages; the distributed versioning system referred to as git
Class of unconditionally stable second-order implicit schemes for hyperbolic and parabolic equations
International Nuclear Information System (INIS)
Lui, H.C.
The linearized Burgers equation is considered as a model u_t + a u_x = b u_xx, where the subscripts t and x denote the derivatives of the function u with respect to time t and space x; a and b are constants (b greater than or equal to 0). Numerical schemes for solving the equation are described that are second-order accurate, unconditionally stable, and dissipative of higher order. (U.S.)
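For the model equation u_t + a u_x = b u_xx, one standard second-order, unconditionally stable implicit scheme is Crank-Nicolson with periodic central differences; the sketch below illustrates this class of schemes, not necessarily the paper's exact scheme, and the grid and coefficients are illustrative.

```python
import numpy as np

# Crank-Nicolson sketch for u_t + a u_x = b u_xx on a periodic grid:
# (I - dt/2 L) u^{n+1} = (I + dt/2 L) u^n with L = -a D1 + b D2,
# second-order in time and space and unconditionally stable.

def crank_nicolson_step(u, a, b, dx, dt):
    n = len(u)
    # periodic central differences: (D1 u)[i] = (u[i+1]-u[i-1])/(2 dx)
    shift_p = np.roll(np.eye(n), 1, axis=1)    # picks u[i+1]
    shift_m = np.roll(np.eye(n), -1, axis=1)   # picks u[i-1]
    D1 = (shift_p - shift_m) / (2 * dx)
    D2 = (shift_p - 2 * np.eye(n) + shift_m) / dx**2
    L = -a * D1 + b * D2
    I = np.eye(n)
    return np.linalg.solve(I - 0.5 * dt * L, (I + 0.5 * dt * L) @ u)

x = np.linspace(0, 1, 64, endpoint=False)
u0 = np.sin(2 * np.pi * x)
u = u0.copy()
for _ in range(100):
    u = crank_nicolson_step(u, a=1.0, b=0.01, dx=x[1] - x[0], dt=0.05)
```

With dt = 0.05 this time step far exceeds any explicit stability limit for this grid, yet the solution remains bounded: every Fourier amplification factor of the scheme has modulus at most one.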
Zhuang, Xiahai; Bai, Wenjia; Song, Jingjing; Zhan, Songhua; Qian, Xiaohua; Shi, Wenzhe; Lian, Yanyun; Rueckert, Daniel
2015-07-01
Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance are limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases were selected for label fusion, according to the authors' proposed atlas ranking criterion which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276). In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve with higher WHS Dice scores compared to the conventional schemes. With the proposed atlas ranking algorithm and joint label fusion, the MAS scheme is able to generate accurate segmentation
International Nuclear Information System (INIS)
Abgrall, Remi; Mezine, Mohamed
2003-01-01
The aim of this paper is to construct upwind residual distribution schemes for the time accurate solution of hyperbolic conservation laws. To do so, we evaluate a space-time fluctuation based on a space-time approximation of the solution and develop new residual distribution schemes which are extensions of classical steady upwind residual distribution schemes. This method has been applied to the solution of scalar advection equation and to the solution of the compressible Euler equations both in two space dimensions. The first version of the scheme is shown to be, at least in its first order version, unconditionally energy stable and possibly conditionally monotonicity preserving. Using an idea of Csik et al. [Space-time residual distribution schemes for hyperbolic conservation laws, 15th AIAA Computational Fluid Dynamics Conference, Anaheim, CA, USA, AIAA 2001-2617, June 2001], we modify the formulation to end up with a scheme that is unconditionally energy stable and unconditionally monotonicity preserving. Several numerical examples are shown to demonstrate the stability and accuracy of the method
Capacity-achieving CPM schemes
Perotti, Alberto; Tarable, Alberto; Benedetto, Sergio; Montorsi, Guido
2008-01-01
The pragmatic approach to coded continuous-phase modulation (CPM) is proposed as a capacity-achieving low-complexity alternative to the serially-concatenated CPM (SC-CPM) coding scheme. In this paper, we first perform a selection of the best spectrally-efficient CPM modulations to be embedded into SC-CPM schemes. Then, we consider the pragmatic capacity (a.k.a. BICM capacity) of CPM modulations and optimize it through a careful design of the mapping between input bits and CPM waveforms. The s...
Balsara, Dinshaw S.; Garain, Sudip; Taflove, Allen; Montecinos, Gino
2018-02-01
The Finite Difference Time Domain (FDTD) scheme has served the computational electrodynamics community very well and part of its success stems from its ability to satisfy the constraints in Maxwell's equations. Even so, in the previous paper of this series we were able to present a second order accurate Godunov scheme for computational electrodynamics (CED) which satisfied all the same constraints and simultaneously retained all the traditional advantages of Godunov schemes. In this paper we extend the Finite Volume Time Domain (FVTD) schemes for CED in material media to better than second order of accuracy. From the FDTD method, we retain a somewhat modified staggering strategy of primal variables which enables a very beneficial constraint-preservation for the electric displacement and magnetic induction vector fields. This is accomplished with constraint-preserving reconstruction methods which are extended in this paper to third and fourth orders of accuracy. The idea of one-dimensional upwinding from Godunov schemes has to be significantly modified to use the multidimensionally upwinded Riemann solvers developed by the first author. In this paper, we show how they can be used within the context of a higher order scheme for CED. We also report on advances in timestepping. We show how Runge-Kutta IMEX schemes can be adapted to CED even in the presence of stiff source terms brought on by large conductivities as well as strong spatial variations in permittivity and permeability. We also formulate very efficient ADER timestepping strategies to endow our method with sub-cell resolving capabilities. As a result, our method can be stiffly-stable and resolve significant sub-cell variation in the material properties within a zone. Moreover, we present ADER schemes that are applicable to all hyperbolic PDEs with stiff source terms and at all orders of accuracy. Our new ADER formulation offers a treatment of stiff source terms that is much more efficient than previous ADER
Kim, Daehee; Kim, Dongwan; An, Sunshin
2016-01-01
Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption. PMID:27409616
Low rank magnetic resonance fingerprinting.
Mazor, Gal; Weizman, Lior; Tal, Assaf; Eldar, Yonina C
2016-08-01
Magnetic Resonance Fingerprinting (MRF) is a relatively new approach that provides quantitative MRI using randomized acquisition. Extraction of physical quantitative tissue values is performed off-line, based on acquisition with varying parameters and a dictionary generated according to the Bloch equations. MRF uses hundreds of radio frequency (RF) excitation pulses for acquisition, and therefore a high under-sampling ratio in the sampling domain (k-space) is required. This under-sampling causes spatial artifacts that hamper the ability to accurately estimate the quantitative tissue values. In this work, we introduce a new approach for quantitative MRI using MRF, called Low Rank MRF. We exploit the low rank property of the temporal domain, on top of the well-known sparsity of the MRF signal in the generated dictionary domain. We present an iterative scheme that consists of a gradient step followed by a low rank projection using the singular value decomposition. Experiments on real MRI data demonstrate superior results compared to conventional implementation of compressed sensing for MRF at 15% sampling ratio.
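The gradient-step-plus-low-rank-projection iteration described above can be sketched on a synthetic matrix-recovery problem; the matrix sizes, sampling ratio, and rank below are illustrative, and this is a generic projected-gradient sketch, not the authors' implementation.

```python
import numpy as np

# Sketch: recover a low-rank matrix from ~15% sampled entries by
# alternating a gradient step on the data-fit term with a rank
# projection via the truncated SVD.

def low_rank_project(X, rank):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[rank:] = 0                        # keep only leading singular values
    return (U * s) @ Vt

rng = np.random.default_rng(0)
truth = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank 3
mask = rng.random(truth.shape) < 0.15   # ~15% of entries observed
y = mask * truth                        # zero-filled observations

X, step = np.zeros_like(truth), 1.0
for _ in range(200):
    grad = mask * (X - y)               # gradient of 0.5*||mask*(X - y)||^2
    X = low_rank_project(X - step * grad, rank=3)
```

In MRF the columns would be temporal image frames and the mask the k-space sampling pattern; the same two-step iteration applies, with the gradient step replaced by one involving the Fourier sampling operator.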
QC-LDPC code-based cryptography
Baldi, Marco
2014-01-01
This book describes the fundamentals of cryptographic primitives based on quasi-cyclic low-density parity-check (QC-LDPC) codes, with a special focus on the use of these codes in public-key cryptosystems derived from the McEliece and Niederreiter schemes. In the first part of the book, the main characteristics of QC-LDPC codes are reviewed, and several techniques for their design are presented, while tools for assessing the error correction performance of these codes are also described. Some families of QC-LDPC codes that are best suited for use in cryptography are also presented. The second part of the book focuses on the McEliece and Niederreiter cryptosystems, both in their original forms and in some subsequent variants. The applicability of QC-LDPC codes in these frameworks is investigated by means of theoretical analyses and numerical tools, in order to assess their benefits and drawbacks in terms of system efficiency and security. Several examples of QC-LDPC code-based public key cryptosystems are prese...
He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin
2017-07-01
In this paper, an effective bit-loading combined with adaptive LDPC code rate (ALCR) algorithm is proposed and investigated in a software-reconfigurable multiband UWB over fiber system. To compensate the power fading and chromatic dispersion for the high-frequency multiband OFDM UWB signal transmitted over standard single mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with negative chirp parameter is utilized. In addition, a negative power penalty of -1 dB for the 128 QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10-3 after 50 km SSMF transmission. The experimental results show that, compared to the fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128 QAM multiband OFDM UWB system after 100 km SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further improved. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).
Additive operator-difference schemes: splitting schemes
Vabishchevich, Petr N
2013-01-01
Applied mathematical modeling is concerned with solving unsteady problems. This book shows how to construct additive difference schemes to solve approximately unsteady multi-dimensional problems for PDEs. Two classes of schemes are highlighted: methods of splitting with respect to spatial variables (alternating direction methods) and schemes of splitting into physical processes. Also regionally additive schemes (domain decomposition methods) and unconditionally stable additive schemes of multi-component splitting are considered for evolutionary equations of first and second order as well as for sy...
Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions
Directory of Open Access Journals (Sweden)
Lei Ye
2009-01-01
Full Text Available This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10−2) and low (10−4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.
Hasnain, Shahid; Saqib, Muhammad; Mashat, Daoud Suleiman
2017-07-01
This research paper presents a numerical approximation to the non-linear three-dimensional reaction-diffusion equation with a non-linear source term from population genetics. Since various initial and boundary value problems exist for three-dimensional reaction-diffusion phenomena, which are studied numerically by different numerical methods, here we use finite difference schemes (Alternating Direction Implicit and Fourth Order Douglas Implicit) to approximate the solution. Accuracy is studied in terms of L2, L∞ and relative error norms on randomly selected grids along time levels for comparison with analytical results. The test example demonstrates the accuracy, efficiency and versatility of the proposed schemes. Numerical results showed that the Fourth Order Douglas Implicit scheme is very efficient and reliable for solving the 3-D non-linear reaction-diffusion equation.
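The accuracy measures named in the abstract (L2, L∞ and relative error norms against an analytical solution) can be computed in a few lines; this helper is a generic sketch, not taken from the paper:

```python
import numpy as np

def error_norms(u_num, u_exact):
    """L2 (RMS), L-infinity and relative error norms of a numerical
    solution against an analytical reference on the same grid."""
    diff = np.asarray(u_num, float) - np.asarray(u_exact, float)
    l2 = np.sqrt(np.mean(diff ** 2))            # root-mean-square error
    linf = np.max(np.abs(diff))                 # worst-case pointwise error
    rel = np.linalg.norm(diff) / np.linalg.norm(u_exact)
    return l2, linf, rel
```

Such norms are typically tabulated per time level and grid refinement to verify the formal order of accuracy of schemes like ADI.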
Development of a high-order finite volume method with multiblock partition techniques
Directory of Open Access Journals (Sweden)
E. M. Lemos
2012-03-01
Full Text Available This work deals with a new numerical methodology to solve the Navier-Stokes equations based on a finite volume method applied to structured meshes with co-located grids. High-order schemes used to approximate advective, diffusive and non-linear terms, connected with multiblock partition techniques, are the main contributions of this paper. The combination of these two techniques resulted in a computer code that achieves high accuracy due to the high-order schemes and great flexibility to generate locally refined meshes based on the multiblock approach. This computer code has been able to obtain results with higher or equal accuracy in comparison with results obtained using classical procedures, with considerably less computational effort.
Collaborative multi-layer network coding for cellular cognitive radio networks
Sorour, Sameh
2013-06-01
In this paper, we propose a prioritized multi-layer network coding scheme for collaborative packet recovery in underlay cellular cognitive radio networks. This scheme allows the collocated primary and cognitive radio base-stations to collaborate with each other, in order to minimize their own and each other's packet recovery overheads, and thus improve their throughput, without any coordination between them. This non-coordinated collaboration is done using a novel multi-layer instantly decodable network coding scheme, which guarantees that each network's help to the other network does not result in any degradation in its own performance. It also does not cause any violation of the primary network's interference thresholds in the same and adjacent cells. Yet, our proposed scheme both guarantees the reduction of the recovery overhead in collocated primary and cognitive radio networks, and allows early recovery of their packets compared to non-collaborative schemes. Simulation results show that a recovery overhead reduction of 15% and 40% can be achieved by our proposed scheme in the primary and cognitive radio networks, respectively, compared to the corresponding non-collaborative scheme. © 2013 IEEE.
Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging.
Directory of Open Access Journals (Sweden)
Xingjian Yu
Full Text Available In dynamic Positron Emission Tomography (PET), an estimate of the radioactivity concentration is obtained from a series of frames of sinogram data, taken at intervals ranging in duration from 10 seconds to minutes according to some criteria. So far, all the well-known reconstruction algorithms require known data statistical properties. This limits the speed of data acquisition; besides, they are unable to provide separate information about the structure and about the variation of shape and rate of metabolism, which play a major role in improving the visualization of contrast for some diagnostic requirements. This paper presents a novel low rank-based activity map reconstruction scheme from emission sinograms of dynamic PET, termed SLCR, for Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging. In this method, the stationary background is formulated as a low rank component, while variations between successive frames are abstracted into the sparse component. The resulting nuclear norm and l1 norm related minimization problem can be efficiently solved by many recently developed numerical methods; in this paper, the linearized alternating direction method is applied. The effectiveness of the proposed scheme is illustrated on three data sets.
International Nuclear Information System (INIS)
Wang Haifeng; Popov, Pavel P.; Pope, Stephen B.
2010-01-01
We study a class of methods for the numerical solution of the system of stochastic differential equations (SDEs) that arises in the modeling of turbulent combustion, specifically in the Monte Carlo particle method for the solution of the model equations for the composition probability density function (PDF) and the filtered density function (FDF). This system consists of an SDE for particle position and a random differential equation for particle composition. The numerical methods considered advance the solution in time with (weak) second-order accuracy with respect to the time step size. The four primary contributions of the paper are: (i) establishing that the coefficients in the particle equations can be frozen at the mid-time (while preserving second-order accuracy), (ii) examining the performance of three existing schemes for integrating the SDEs, (iii) developing and evaluating different splitting schemes (which treat particle motion, reaction and mixing on different sub-steps), and (iv) developing the method of manufactured solutions (MMS) to assess the convergence of Monte Carlo particle methods. Tests using MMS confirm the second-order accuracy of the schemes. In general, the use of frozen coefficients reduces the numerical errors. Otherwise no significant differences are observed in the performance of the different SDE schemes and splitting schemes.
Trinary signed-digit arithmetic using an efficient encoding scheme
Salim, W. Y.; Alam, M. S.; Fyath, R. S.; Ali, S. A.
2000-09-01
The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operations. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for the recoded TSD arithmetic technique.
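A minimal sketch of the TSD representation itself, here the standard balanced-ternary encoding with digits in {-1, 0, 1}; the paper's 5-combination coding table and one-step arithmetic rules are not reproduced:

```python
def to_tsd(n):
    """Encode an integer as trinary signed digits in {-1, 0, 1},
    least-significant digit first (balanced-ternary form)."""
    digits = []
    while n != 0:
        r = ((n + 1) % 3) - 1   # remainder mapped into {-1, 0, 1}
        digits.append(r)
        n = (n - r) // 3
    return digits

def from_tsd(digits):
    """Decode trinary signed digits back to an integer."""
    return sum(d * 3 ** i for i, d in enumerate(digits))
```

Because every digit carries its own sign, negation is digitwise, which is the property the carry-free arithmetic builds on.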
O2-GIDNC: Beyond instantly decodable network coding
Aboutorab, Neda
2013-06-01
In this paper, we are concerned with extending the graph representation of generalized instantly decodable network coding (GIDNC) to a more general opportunistic network coding (ONC) scenario, referred to as order-2 GIDNC (O2-GIDNC). In the O2-GIDNC scheme, receivers can store non-instantly decodable packets (NIDPs) comprising two of their missing packets, and use them in a systematic way for later decoding. Once this graph representation is found, it can be used to extend the GIDNC graph-based analyses to the proposed O2-GIDNC scheme with a limited increase in complexity. In the proposed O2-GIDNC scheme, the information of the stored NIDPs at the receivers and the decoding opportunities they create can be exploited to improve the broadcast completion time and decoding delay compared to the traditional GIDNC scheme. The completion time and decoding delay minimizing algorithms that can operate on the new O2-GIDNC graph are further described. The simulation results show that our proposed O2-GIDNC scheme improves the completion time and decoding delay performance of the traditional GIDNC. © 2013 IEEE.
DEFF Research Database (Denmark)
Fasano, Andrea; Rasmussen, Henrik K.
2017-01-01
A third order accurate, in time and space, finite element scheme for the numerical simulation of three-dimensional time-dependent flow of the molecular stress function type of fluids in a generalized formulation is presented. The scheme is an extension of the K-BKZ Lagrangian finite element me...
Fragment separator momentum compression schemes
Energy Technology Data Exchange (ETDEWEB)
Bandura, Laura, E-mail: bandura@anl.gov [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); National Superconducting Cyclotron Lab, Michigan State University, 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Erdelyi, Bela [Argonne National Laboratory, Argonne, IL 60439 (United States); Northern Illinois University, DeKalb, IL 60115 (United States); Hausmann, Marc [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Kubo, Toshiyuki [RIKEN Nishina Center, RIKEN, Wako (Japan); Nolen, Jerry [Argonne National Laboratory, Argonne, IL 60439 (United States); Portillo, Mauricio [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Sherrill, Bradley M. [National Superconducting Cyclotron Lab, Michigan State University, 1 Cyclotron, East Lansing, MI 48824-1321 (United States)
2011-07-21
We present a scheme to use a fragment separator and profiled energy degraders to transfer longitudinal phase space into transverse phase space while maintaining achromatic beam transport. The first order beam optics theory of the method is presented and the consequent enlargement of the transverse phase space is discussed. An interesting consequence of the technique is that the first order mass resolving power of the system is determined by the first dispersive section up to the energy degrader, independent of whether or not momentum compression is used. The fragment separator at the Facility for Rare Isotope Beams is a specific application of this technique and is described along with simulations by the code COSY INFINITY.
Bilayer expurgated LDPC codes with uncoded relaying
Directory of Open Access Journals (Sweden)
Md. Noor-A-Rahim
2017-08-01
Full Text Available Bilayer low-density parity-check (LDPC) codes are an effective coding technique for decode-and-forward relaying, where the relay forwards extra parity bits to help the destination to decode the source bits correctly. In the existing bilayer coding scheme, these parity bits are protected by an error correcting code and assumed to be reliably available at the receiver. We propose an uncoded relaying scheme, where the extra parity bits are forwarded to the destination without any protection. Through density evolution analysis and simulation results, we show that our proposed scheme achieves better performance in terms of bit erasure probability than the existing relaying scheme. In addition, our proposed scheme results in lower complexity at the relay.
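The density evolution analysis mentioned in the abstract can be sketched for a regular LDPC ensemble on the binary erasure channel; the recursion and bisection below are the textbook form, not the authors' bilayer analysis (function name and tolerances are illustrative):

```python
def bec_de_threshold(dv, dc, tol=1e-4):
    """Density-evolution erasure threshold of a regular (dv, dc) LDPC
    ensemble on the binary erasure channel, located by bisection on
    the channel erasure probability epsilon."""
    def erasure_dies_out(eps, iters=5000):
        x = eps  # fraction of erased variable-to-check messages
        for _ in range(iters):
            x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        return x < 1e-7
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if erasure_dies_out(mid) else (lo, mid)
    return lo
```

For the classic (3, 6) ensemble this bisection lands near the known threshold of about 0.429.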
Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao
2018-02-01
A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on a fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coding and modulation technology with a binary linear block code at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information between the different layers, which enhances performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduces the number of decoders by 72% and realizes 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER = 1E-3.
MPDATA: Third-order accuracy for variable flows
Waruszewski, Maciej; Kühnlein, Christian; Pawlowska, Hanna; Smolarkiewicz, Piotr K.
2018-04-01
This paper extends the multidimensional positive definite advection transport algorithm (MPDATA) to third-order accuracy for temporally and spatially varying flows. This is accomplished by identifying the leading truncation error of the standard second-order MPDATA, performing the Cauchy-Kowalevski procedure to express it in a spatial form and compensating its discrete representation, much in the same way as the standard MPDATA corrects the first-order accurate upwind scheme. The procedure of deriving the spatial form of the truncation error was automated using a computer algebra system. This enables various options in MPDATA to be included straightforwardly in the third-order scheme, thereby minimising the implementation effort in existing code bases. Following the spirit of MPDATA, the error is compensated using the upwind scheme resulting in a sign-preserving algorithm, and the entire scheme can be formulated using only two upwind passes. Established MPDATA enhancements, such as formulation in generalised curvilinear coordinates, the nonoscillatory option or the infinite-gauge variant, carry over to the fully third-order accurate scheme. A manufactured 3D analytic solution is used to verify the theoretical development and its numerical implementation, whereas global tracer-transport benchmarks demonstrate benefits for chemistry-transport models fundamental to air quality monitoring, forecasting and control. A series of explicitly-inviscid implicit large-eddy simulations of a convective boundary layer and explicitly-viscid simulations of a double shear layer illustrate advantages of the fully third-order-accurate MPDATA for fluid dynamics applications.
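A minimal 1-D, constant-coefficient sketch of the idea MPDATA builds on: a first-order upwind (donor-cell) pass corrected by a second upwind pass driven by an antidiffusive Courant number. This is the standard second-order scheme that the paper extends to third order, not the third-order variant itself:

```python
import numpy as np

def donor_cell(psi, c):
    """First-order upwind pass on a periodic 1-D grid; c[i] is the
    Courant number at the interface between cells i and i+1."""
    flux = np.maximum(c, 0.0) * psi + np.minimum(c, 0.0) * np.roll(psi, -1)
    return psi - flux + np.roll(flux, 1)

def mpdata_step(psi, c, eps=1e-15):
    """Second-order MPDATA for a positive field: upwind pass, then a
    second upwind pass with the antidiffusive Courant number."""
    psi1 = donor_cell(psi, c)
    num = np.roll(psi1, -1) - psi1
    den = np.roll(psi1, -1) + psi1 + eps
    c_anti = (np.abs(c) - c ** 2) * num / den   # antidiffusive velocity
    return donor_cell(psi1, c_anti)
```

Both passes are in flux form, so mass is conserved to machine precision, and the correction keeps the scheme sign-preserving for positive fields.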
Code Generation for a Simple First-Order Prover
DEFF Research Database (Denmark)
Villadsen, Jørgen; Schlichtkrull, Anders; Halkjær From, Andreas
2016-01-01
We present Standard ML code generation in Isabelle/HOL of a sound and complete prover for first-order logic, taking formalizations by Tom Ridge and others as the starting point. We also define a set of so-called unfolding rules and show how to use these as a simple prover, with the aim of using the approach for teaching logic and verification to computer science students at the bachelor level.
Implementation of an approximate zero-variance scheme in the TRIPOLI Monte Carlo code
Energy Technology Data Exchange (ETDEWEB)
Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Dumonteil, E.; Petit, O.; Diop, C. [Commissariat a l' Energie Atomique CEA, Gif-sur-Yvette (France)
2006-07-01
In an accompanying paper it is shown that theoretically a zero-variance Monte Carlo scheme can be devised for criticality calculations if the space, energy and direction dependent adjoint function is exactly known. This requires biasing of the transition and collision kernels with the appropriate adjoint function. In this paper it is discussed how an existing general purpose Monte Carlo code like TRIPOLI can be modified to approach the zero-variance scheme. This requires modifications for reading in the adjoint function obtained from a separate deterministic calculation for a number of space intervals, energy groups and discrete directions. Furthermore, a function has to be added to supply the direction dependent and the averaged adjoint function at a specific position in the system by interpolation. The initial particle weights of a certain batch must be set inversely proportional to the averaged adjoint function and proper normalization of the initial weights must be secured. The sampling of the biased transition kernel requires cumulative integrals of the biased kernel along the flight path until a certain value, depending on a selected random number is reached to determine a new collision site. The weight of the particle must be adapted accordingly. The sampling of the biased collision kernel (in a multigroup treatment) is much more like the normal sampling procedure. A numerical example is given for a 3-group calculation with a simplified transport model (two-direction model), demonstrating that the zero-variance scheme can be approximated quite well for this simplified case. (authors)
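The zero-variance principle, biasing the sampling density in proportion to the (adjoint-weighted) contribution so that all particle weights become constant, can be illustrated on a toy 1-D integral; this sketch is generic and not TRIPOLI-specific:

```python
import numpy as np

def importance_estimate(f, sampler, pdf, n, rng):
    """Monte Carlo estimate of the integral of f over [0, 1] by
    importance sampling: draw x ~ pdf, weight by f(x) / pdf(x)."""
    x = sampler(rng, n)
    w = f(x) / pdf(x)
    return w.mean(), w.std()
```

For f(x) = 2x the zero-variance density is p*(x) = 2x (the normalized integrand): every sample then carries the constant weight 1, so the estimator's variance vanishes, in analogy with weighting particles inversely proportional to the adjoint function.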
International Nuclear Information System (INIS)
Frahm, K M; Shepelyansky, D L; Chepelianskii, A D
2012-01-01
We set up a directed network tracing links from a given integer to its divisors and analyze the properties of the Google matrix of this network. The PageRank vector of this matrix is computed numerically, and it is shown that its probability is approximately inversely proportional to the PageRank index, thus being similar to the Zipf law and to the dependence established for the World Wide Web. The spectrum of the Google matrix of integers is characterized by a large gap and a relatively small number of nonzero eigenvalues. A simple semi-analytical expression for the PageRank of integers is derived that allows us to find this vector for matrices of billion size. This network provides a new PageRank order of integers. (paper)
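A small-scale sketch of the construction: link each integer to its proper divisors and power-iterate the Google matrix (integer 1 is the only dangling node); the network size, damping factor and iteration count below are illustrative:

```python
import numpy as np

def divisor_pagerank(N, alpha=0.85, iters=100):
    """PageRank of the directed network linking each integer 1..N to
    its proper divisors, via power iteration on the Google matrix."""
    nodes = range(1, N + 1)
    out = {n: [d for d in range(1, n) if n % d == 0] for n in nodes}
    p = np.full(N, 1.0 / N)
    for _ in range(iters):
        q = np.full(N, (1.0 - alpha) / N)      # teleportation term
        for n in nodes:
            if out[n]:
                share = alpha * p[n - 1] / len(out[n])
                for d in out[n]:
                    q[d - 1] += share
            else:                               # dangling node (n = 1)
                q += alpha * p[n - 1] / N
        p = q
    return p
```

Since every integer above 1 links to 1, the integer 1 tops the resulting PageRank order, and the probabilities fall off roughly inversely with rank, as the abstract describes.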
Vectorial Resilient PC(l) of Order k Boolean Functions from AG-Codes
Institute of Scientific and Technical Information of China (English)
Hao CHEN; Liang MA; Jianhua LI
2011-01-01
Propagation criteria and resiliency of vectorial Boolean functions are important for cryptographic purposes (see [1-4, 7, 8, 10, 11, 16]). Kurosawa and Satoh [8] and Carlet [1] gave a construction of Boolean functions satisfying PC(l) of order k from binary linear or nonlinear codes. In this paper, algebraic-geometric codes over GF(2m) are used to modify the Carlet and Kurosawa-Satoh construction, giving vectorial resilient Boolean functions satisfying the PC(l) of order k criterion. This new construction is compared with previously known results.
A New Grünwald-Letnikov Derivative Derived from a Second-Order Scheme
Directory of Open Access Journals (Sweden)
B. A. Jacobs
2015-01-01
Full Text Available A novel derivation of a second-order accurate Grünwald-Letnikov-type approximation to the fractional derivative of a function is presented. This scheme is shown to be second-order accurate under certain modifications to account for poor accuracy in approximating the asymptotic behavior near the lower limit of differentiation. Some example functions are chosen and numerical results are presented to illustrate the efficacy of this new method over some other popular choices for discretizing fractional derivatives.
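The classical first-order Grünwald-Letnikov approximation that the paper's second-order scheme improves upon can be sketched directly from the recursive coefficient formula; this is the standard form, not the paper's modified scheme:

```python
import numpy as np

def gl_fractional_derivative(f_vals, alpha, h):
    """First-order Grünwald-Letnikov approximation of the order-alpha
    fractional derivative at the last grid point. The coefficients are
    w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k), i.e. the signed
    binomial coefficients (-1)^k C(alpha, k)."""
    n = len(f_vals)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    # sum_k w_k * f(t - k h), scaled by h^(-alpha)
    return np.dot(w, f_vals[::-1]) / h ** alpha
```

For alpha = 1 the coefficients reduce to (1, -1, 0, 0, ...), recovering the backward difference; for alpha = 1/2 applied to f(t) = t the result approaches the known half-derivative 2*sqrt(t/pi).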
DEFF Research Database (Denmark)
Olesen, Frede; Vedsted, Peter; Nielsen, Jørgen Nørskov
1996-01-01
OBJECTIVE: To demonstrate whether standardization of practice populations by age and sex changes the internal prescription ranking order of a group of practices. DESIGN: Data on the prescribing of cardiovascular drugs in a group of practices were obtained from a county-based database. Information on the age, sex, and numbers of patients per practice was also obtained. The direct standardization method was used to adjust practice populations for age and sex. SETTING: The town of Randers, Aarhus County, Denmark. SUBJECTS: 35 practices, 41 GPs. MAIN OUTCOME MEASURES: Ranking of the 35 practices...
Coding In-depth Semistructured Interviews
DEFF Research Database (Denmark)
Campbell, John L.; Quincy, Charles; Osserman, Jordan
2013-01-01
Many social science studies are based on coded in-depth semistructured interview transcripts. But researchers rarely report or discuss coding reliability in this work. Nor is there much literature on the subject for this type of data. This article presents a procedure for developing coding schemes useful for situations where a single knowledgeable coder will code all the transcripts once the coding scheme has been established. This approach can also be used with other types of qualitative data and in other circumstances.
Quantum computation with Turaev-Viro codes
International Nuclear Information System (INIS)
Koenig, Robert; Kuperberg, Greg; Reichardt, Ben W.
2010-01-01
For a 3-manifold with triangulated boundary, the Turaev-Viro topological invariant can be interpreted as a quantum error-correcting code. The code has local stabilizers, identified by Levin and Wen, on a qudit lattice. Kitaev's toric code arises as a special case. The toric code corresponds to an abelian anyon model, and therefore requires out-of-code operations to obtain universal quantum computation. In contrast, for many categories, such as the Fibonacci category, the Turaev-Viro code realizes a non-abelian anyon model. A universal set of fault-tolerant operations can be implemented by deforming the code with local gates, in order to implement anyon braiding. We identify the anyons in the code space, and present schemes for initialization, computation and measurement. This provides a family of constructions for fault-tolerant quantum computation that are closely related to topological quantum computation, but for which the fault tolerance is implemented in software rather than coming from a physical medium.
A comparison of resampling schemes for estimating model observer performance with small ensembles
Elshahaby, Fatma E. A.; Jha, Abhinav K.; Ghaly, Michael; Frey, Eric C.
2017-09-01
In objective assessment of image quality, an ensemble of images is used to compute the 1st and 2nd order statistics of the data. Often, only a finite number of images is available, leading to the issue of statistical variability in numerical observer performance. Resampling-based strategies can help overcome this issue. In this paper, we compared different combinations of resampling schemes (the leave-one-out (LOO) and the half-train/half-test (HT/HT)) and model observers (the conventional channelized Hotelling observer (CHO), channelized linear discriminant (CLD) and channelized quadratic discriminant). Observer performance was quantified by the area under the ROC curve (AUC). For a binary classification task and for each observer, the AUC value for an ensemble size of 2000 samples per class served as a gold standard for that observer. Results indicated that each observer yielded a different performance depending on the ensemble size and the resampling scheme. For a small ensemble size, the combination [CHO, HT/HT] had more accurate rankings than the combination [CHO, LOO]. Using the LOO scheme, the CLD and CHO had similar performance for large ensembles. However, the CLD outperformed the CHO and gave more accurate rankings for smaller ensembles. As the ensemble size decreased, the performance of the [CHO, LOO] combination seriously deteriorated as opposed to the [CLD, LOO] combination. Thus, it might be desirable to use the CLD with the LOO scheme when a smaller ensemble size is available.
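A toy sketch of the half-train/half-test (HT/HT) scheme with a Hotelling-type linear observer on synthetic bivariate Gaussian data; the paper's channelized observers and imaging data are not modeled here:

```python
import numpy as np

def hotelling_observer(x0, x1):
    """Linear (Hotelling-type) observer template: w = S^-1 (mu1 - mu0)."""
    s = 0.5 * (np.cov(x0.T) + np.cov(x1.T))
    return np.linalg.solve(s, x1.mean(axis=0) - x0.mean(axis=0))

def auc(t0, t1):
    """Area under the ROC curve from the two classes' test statistics
    (Mann-Whitney form)."""
    return float(np.mean(t0[:, None] < t1[None, :]))

def half_train_half_test_auc(x0, x1, rng):
    """HT/HT resampling: train the observer on a random half of each
    class and estimate the AUC on the held-out half."""
    n0, n1 = len(x0) // 2, len(x1) // 2
    i0, i1 = rng.permutation(len(x0)), rng.permutation(len(x1))
    w = hotelling_observer(x0[i0[:n0]], x1[i1[:n1]])
    return auc(x0[i0[n0:]] @ w, x1[i1[n1:]] @ w)
```

Repeating the split (or replacing it with a leave-one-out loop) and averaging the AUC estimates is how the variability compared in the paper would be quantified.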
LABAN-PEL: a two-dimensional, multigroup diffusion, high-order response matrix code
International Nuclear Information System (INIS)
Mueller, E.Z.
1991-06-01
The capabilities of LABAN-PEL are described. LABAN-PEL is a modified version of the two-dimensional, high-order response matrix code, LABAN, written by Lindahl. The new version extends the capabilities of the original code with regard to the treatment of neutron migration by including an option to utilize full group-to-group diffusion coefficient matrices. In addition, the code has been converted from single to double precision and the necessary routines added to activate its multigroup capability. The coding has also been converted to standard FORTRAN-77 to enhance the portability of the code. Details regarding the input data requirements and calculational options of LABAN-PEL are provided. 13 refs
da Silva Figueiredo Celestino Gomes, Priscila; Da Silva, Franck; Bret, Guillaume; Rognan, Didier
2018-01-01
A novel docking challenge has been set by the Drug Design Data Resource (D3R) in order to predict the pose and affinity ranking of a set of Farnesoid X receptor (FXR) agonists, prior to the public release of their bound X-ray structures and potencies. In a first phase, 36 agonists were docked to 26 Protein Data Bank (PDB) structures of the FXR receptor, and then rescored using the in-house developed GRIM method. GRIM aligns protein-ligand interaction patterns of docked poses to those of available PDB templates for the target protein, and rescores poses by a graph matching method. In agreement with results obtained during the previous 2015 docking challenge, we clearly show that GRIM rescoring improves the overall quality of top-ranked poses by prioritizing interaction patterns already visited in the PDB. Importantly, this challenge enables us to refine the applicability domain of the method by better defining the conditions of its success. We notably show that rescoring apolar ligands in hydrophobic pockets leads to frequent GRIM failures. In the second phase, 102 FXR agonists were ranked by decreasing affinity according to the Gibbs free energy of the corresponding GRIM-selected poses, computed by the HYDE scoring function. Interestingly, this fast and simple rescoring scheme provided the third most accurate ranking method among 57 contributions. Although the obtained ranking is still unsuitable for hit-to-lead optimization, the GRIM-HYDE scoring scheme is accurate and fast enough to post-process virtual screening data.
KEWPIE: a dynamical cascade code for decaying excited compound nuclei
Bouriquet, Bertrand; Abe, Yasuhisa; Boilley, David
2003-01-01
A new dynamical cascade code for decaying hot nuclei is proposed and specially adapted to the synthesis of super-heavy nuclei. For such a case, the interesting channel is the tiny fraction that decays through particle emission, so the code avoids classical Monte Carlo methods and proposes a new numerical scheme. The time dependence is explicitly taken into account in order to cope with the fact that the fission decay rate might not be constant. The code allows one to evaluate both statistical...
AptRank: an adaptive PageRank model for protein function prediction on bi-relational graphs.
Jiang, Biaobin; Kloster, Kyle; Gleich, David F; Gribskov, Michael
2017-06-15
Diffusion-based network models are widely used for protein function prediction using protein network data and have been shown to outperform neighborhood-based and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually either used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernel. No study has taken the GO hierarchy into account together with the protein network as a two-layer network model. We first construct a Bi-relational graph (Birg) model comprised of both protein-protein association and function-function hierarchical networks. We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information on this two-layer graph model. BirgRank is a direct application of traditional PageRank with fixed decay parameters. In contrast, AptRank utilizes an adaptive diffusion mechanism to improve the performance of BirgRank. We evaluate the ability of both methods to predict protein function on yeast, fly and human protein datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. We design four different validation strategies: missing function prediction, de novo function prediction, guided function prediction and newly discovered function prediction to comprehensively evaluate predictability of all six methods. We find that both BirgRank and AptRank outperform the previous methods, especially in missing function prediction when using only 10% of the data for training. The MATLAB code is available at https://github.rcac.purdue.edu/mgribsko/aptrank . gribskov@purdue.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. 
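The two-layer diffusion idea behind BirgRank can be sketched with a standard personalized PageRank power iteration. The sketch below is illustrative only: the block-matrix construction, node ordering, and parameter values are assumptions for this example, not the paper's actual formulation.

```python
import numpy as np

def personalized_pagerank(A, seed, alpha=0.85, tol=1e-10, max_iter=1000):
    """Personalized PageRank by power iteration.

    A     : (n, n) nonnegative, symmetric adjacency matrix
    seed  : (n,) restart distribution (sums to 1)
    alpha : probability of following an edge rather than restarting
    """
    out = A.sum(axis=0)                                      # (weighted) degrees
    P = np.where(out > 0, A / np.maximum(out, 1e-300), 0.0)  # column-stochastic
    x = seed.astype(float).copy()
    for _ in range(max_iter):
        x_new = alpha * (P @ x) + (1.0 - alpha) * seed
        if np.abs(x_new - x).sum() < tol:
            break
        x = x_new
    return x_new
```

On a two-layer graph one would take A as the block matrix [[G, R], [R^T, H]] (protein network G, GO hierarchy H, annotation links R), seed at the query protein, and read function scores off the function-layer entries of the result; these block names are illustrative assumptions.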
Special issue on network coding
Monteiro, Francisco A.; Burr, Alister; Chatzigeorgiou, Ioannis; Hollanti, Camilla; Krikidis, Ioannis; Seferoglu, Hulya; Skachek, Vitaly
2017-12-01
Future networks are expected to depart from traditional routing schemes in order to embrace network coding (NC)-based schemes. These have created a lot of interest both in academia and industry in recent years. Under the NC paradigm, symbols are transported through the network by combining several information streams originating from the same or different sources. This special issue contains thirteen papers, some dealing with design aspects of NC and related concepts (e.g., fountain codes) and some showcasing the application of NC to new services and technologies, such as multi-view video streaming or underwater sensor networks. One can find papers that show how NC makes data transmission more robust to packet losses, faster to decode, and more resilient to network changes, such as dynamic topologies and different user options, and how NC can improve the overall throughput. This issue also includes papers showing that NC principles can be used at different layers of the networks (including the physical layer) and how the same fundamental principles can lead to new distributed storage systems. Some of the papers in this issue have a theoretical nature, including code design, while others describe hardware testbeds and prototypes.
Sensitivity Analysis of FEAST-Metal Fuel Performance Code: Initial Results
International Nuclear Information System (INIS)
Edelmann, Paul Guy; Williams, Brian J.; Unal, Cetin; Yacout, Abdellatif
2012-01-01
This memo documents the completion of the LANL milestone M3FT-12LA0202041, describing methodologies and initial results using FEAST-Metal. The FEAST-Metal code calculations for this work were conducted at LANL in support of ongoing activities related to the sensitivity analysis of fuel performance codes. The objective is to identify macroscopic parameters important to the modeling and simulation of metallic fuel performance. This report summarizes our preliminary results for the sensitivity analysis using six calibration datasets for metallic fuel developed at ANL for EBR-II experiments. A sensitivity ranking methodology was deployed to narrow down the parameters selected for the current study. There are approximately 84 calibration parameters in the FEAST-Metal code, of which 32 were ultimately used in Phase II of this study. Preliminary results of this sensitivity analysis led to the following ranking of FEAST models for future calibration and improvement: fuel conductivity, fission gas transport/release, fuel creep, and precipitation kinetics. More data are needed to validate calibrated parameter distributions for future uncertainty quantification studies with FEAST-Metal. Results of this study also served to point out some code deficiencies and possible errors, and these are being investigated in order to determine root causes and to improve the existing code models.
Linear source approximation scheme for method of characteristics
International Nuclear Information System (INIS)
Tang Chuntao
2011-01-01
Method of characteristics (MOC) solvers for the neutron transport equation based on unstructured mesh have already become one of the fundamental methods for lattice calculation in nuclear design code systems. However, most MOC codes are developed with the flat source approximation, called the step characteristics (SC) scheme, which is another basic assumption of MOC. A linear source (LS) characteristics scheme and a corresponding modification for negative source distributions are proposed. The OECD/NEA C5G7-MOX 2D benchmark and a self-defined BWR mini-core problem were employed to validate the new LS module of the PEACH code. Numerical results indicate that the proposed LS scheme requires less memory and computational time than the SC scheme at the same accuracy. (authors)
Evaluation of world's largest social welfare scheme: An assessment using non-parametric approach.
Singh, Sanjeet
2016-08-01
The Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA) is the world's largest social welfare scheme, aimed at poverty alleviation in India through rural employment generation. This paper evaluates and ranks the performance of the Indian states under the MGNREGA scheme. A non-parametric approach, Data Envelopment Analysis (DEA), is used to calculate the overall technical, pure technical, and scale efficiencies of the states. The sample data are drawn from the annual official reports published by the Ministry of Rural Development, Government of India. Based on three selected input parameters (expenditure indicators) and five output parameters (employment generation indicators), I apply both input- and output-oriented DEA models to estimate how well the states utilized their resources and generated outputs during the financial year 2013-14. The relative performance evaluation is made under the assumption of constant returns to scale and also under variable returns to scale, to assess the impact of scale on performance. The results indicate that inefficiency arises from both the technical and the managerial practices adopted. Eleven states are overall technically efficient and operate at the optimum scale, whereas 18 states are pure technically or managerially efficient. For some states it is necessary to alter the scheme size to perform at par with the best-performing states. For inefficient states, optimal input and output targets, along with the resource savings and output gains, are calculated. The analysis shows that if all inefficient states operated at optimal input and output levels, on average 17.89% of total expenditure, a total of $780 million, could have been saved in a single year. Most of the inefficient states perform poorly in the participation of women and disadvantaged sections (SC&ST) in the scheme. In order to catch up with the performance of the best-performing states, inefficient states on average need to enhance
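As a toy illustration of the CRS (CCR) efficiency idea, and not the multi-input, multi-output linear programs actually solved in DEA studies like this one, the single-input, single-output special case reduces to normalized output/input ratios:

```python
def ccr_efficiency(inputs, outputs):
    """CRS (CCR) technical efficiency in the single-input, single-output
    special case: each unit's output/input ratio relative to the best
    observed ratio. Efficient units score 1.0."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

For example, three hypothetical units with expenditures [1, 2, 4] and employment outputs [1, 2, 2] score [1.0, 1.0, 0.5]: the third unit could, under constant returns to scale, produce its output with half its input.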
A Generalized Weight-Based Particle-In-Cell Simulation Scheme
International Nuclear Information System (INIS)
Lee, W.W.; Jenkins, T.G.; Ethier, S.
2010-01-01
A generalized weight-based particle simulation scheme suitable for simulating magnetized plasmas, where the zeroth-order inhomogeneity is important, is presented. The scheme is an extension of the perturbative simulation schemes developed earlier for particle-in-cell (PIC) simulations. The new scheme is designed to simulate both the perturbed distribution (δf) and the full distribution (full-F) within the same code. The development is based on the concept of multiscale expansion, which separates the scale lengths of the background inhomogeneity from those associated with the perturbed distributions. The potential advantage of such an arrangement is to minimize the particle noise by using δf in the linear stage of the simulation, while retaining the flexibility of a full-F capability in the fully nonlinear stage of development, when signals associated with plasma turbulence are at a much higher level than those from the intrinsic particle noise.
Blind Reduced-Rank MMSE Detector for DS-CDMA Systems
Directory of Open Access Journals (Sweden)
Xiaodong Cai
2003-01-01
We first develop a reduced-rank minimum mean squared error (MMSE) detector for direct-sequence (DS) code division multiple access (CDMA) by forcing the linear MMSE detector to lie in a signal subspace of reduced dimension. While a reduced-rank MMSE detector has lower complexity, it cannot outperform the full-rank MMSE detector. We then concentrate on the blind reduced-rank MMSE detector, which is obtained from an estimated covariance matrix. Our analysis and simulation results show that when the desired user's signal lies in a low-dimensional subspace, there exists an optimal subspace such that the blind reduced-rank MMSE detector lying in this subspace has the best performance. By properly choosing a subspace, we guarantee that the optimal blind reduced-rank MMSE detector is obtained. An adaptive blind reduced-rank MMSE detector, based on a subspace tracking algorithm, is developed. The adaptive blind reduced-rank MMSE detector exhibits superior steady-state performance and fast convergence speed.
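A generic (non-blind) reduced-rank MMSE detector can be sketched as follows, taking a known covariance R and signature s as inputs; the eigendecomposition-based subspace choice here is an assumption for illustration, not the paper's blind, sample-covariance-based estimator.

```python
import numpy as np

def reduced_rank_mmse(R, s, D):
    """MMSE detector constrained to the top-D eigenvector subspace of R.

    Full-rank MMSE:     w = R^{-1} s
    Reduced-rank MMSE:  w = U_D (U_D^H R U_D)^{-1} U_D^H s
    where the columns of U_D span the estimated signal subspace.
    """
    _, vecs = np.linalg.eigh(R)        # eigenvalues in ascending order
    U = vecs[:, ::-1][:, :D]           # top-D eigenvectors
    return U @ np.linalg.solve(U.conj().T @ R @ U, U.conj().T @ s)
```

When D equals the full dimension the subspace constraint is vacuous and the full-rank detector is recovered; the regime of interest in the paper is D well below the full dimension, with R replaced by an estimate.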
Optimization of the two-sample rank Neyman-Pearson detector
Akimov, P. S.; Barashkov, V. M.
1984-10-01
The development of optimal rank-based algorithms for finite sample sizes involves considerable mathematical difficulty. The present investigation provides results related to the design and analysis of an optimal rank detector based on the Neyman-Pearson criterion. The detection of a signal in the presence of background noise is considered, using n observations (readings) x1, x2, ..., xn from the experimental communications channel. The rank of an observation is computed from the relations between x and the interference variable y. Attention is given to conditions in the absence of a signal, the probability of detecting an arriving signal, details of applying the Neyman-Pearson criterion, the scheme of an optimal rank multichannel incoherent detector, and an analysis of the detector.
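A two-sample rank statistic of the kind used by such detectors can be sketched as follows (a Mann-Whitney-type count; the paper's actual detector structure and threshold computation may differ):

```python
def rank_statistic(x, y):
    """Mann-Whitney-type rank statistic: for each observation x_i, count
    the reference (noise-only) samples y_j it exceeds, summed over i."""
    return sum(1 for xi in x for yj in y if xi > yj)

def rank_detector(x, y, threshold):
    """Declare 'signal present' when the rank statistic exceeds a
    threshold; under a Neyman-Pearson design the threshold is chosen to
    meet a prescribed false-alarm probability."""
    return rank_statistic(x, y) > threshold
```

For observations x = [5, 6] against noise references y = [1, 2, 3], every pair is exceeded, giving the maximal statistic 6; a large statistic indicates the observations are shifted above the noise.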
ULTRA-SHARP nonoscillatory convection schemes for high-speed steady multidimensional flow
Leonard, B. P.; Mokhtari, Simin
1990-01-01
For convection-dominated flows, classical second-order methods are notoriously oscillatory and often unstable. For this reason, many computational fluid dynamicists have adopted various forms of (inherently stable) first-order upwinding over the past few decades. Although it is now well known that first-order convection schemes suffer from serious inaccuracies attributable to artificial viscosity or numerical diffusion under high convection conditions, these methods continue to enjoy widespread popularity for numerical heat transfer calculations, apparently due to a perceived lack of viable high accuracy alternatives. But alternatives are available. For example, nonoscillatory methods used in gasdynamics, including currently popular TVD schemes, can be easily adapted to multidimensional incompressible flow and convective transport. This, in itself, would be a major advance for numerical convective heat transfer, for example. But, as is shown, second-order TVD schemes form only a small, overly restrictive, subclass of a much more universal, and extremely simple, nonoscillatory flux-limiting strategy which can be applied to convection schemes of arbitrarily high order accuracy, while requiring only a simple tridiagonal ADI line-solver, as used in the majority of general purpose iterative codes for incompressible flow and numerical heat transfer. The new universal limiter and associated solution procedures form the so-called ULTRA-SHARP alternative for high resolution nonoscillatory multidimensional steady state high speed convective modelling.
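As a minimal illustration of the flux-limiting idea, the sketch below uses a classical minmod-limited MUSCL scheme for 1D linear advection, not the ULTRA-SHARP universal limiter itself; the limiter keeps a second-order scheme from producing the oscillations described above.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope when signs agree, else 0."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_limited(u, c, steps):
    """Second-order upwind (MUSCL) scheme with a minmod slope limiter for
    u_t + a u_x = 0 (a > 0) on a periodic grid; c = a*dt/dx in (0, 1].
    The limited update is TVD: no new extrema are created."""
    u = u.astype(float).copy()
    for _ in range(steps):
        du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slope
        face = u + 0.5 * (1.0 - c) * du                     # right-face value
        u = u - c * (face - np.roll(face, 1))               # conservative update
    return u
```

Advecting a step profile with this scheme keeps the solution within its initial bounds and conserves the total, whereas the unlimited second-order scheme would overshoot near the discontinuity.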
High Order Tensor Formulation for Convolutional Sparse Coding
Bibi, Adel Aamer
2017-12-25
Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and a classification tool in the computer vision and machine learning community. Current CSC methods can only reconstruct single-feature 2D images independently. However, learning multidimensional dictionaries and sparse codes for the reconstruction of multi-dimensional data is very important, as it examines correlations among all the data jointly. This provides more capacity for the learned dictionaries to better reconstruct data. In this paper, we propose a generic and novel formulation for the CSC problem that can handle an arbitrary-order tensor of data. Backed by experimental results, our proposed formulation can not only tackle applications that are not possible with standard CSC solvers, including colored video reconstruction (5D tensors), but it also performs favorably in reconstruction with far fewer parameters compared to naive extensions of standard CSC to multiple features/channels.
Development of parallel Fokker-Planck code ALLAp
International Nuclear Information System (INIS)
Batishcheva, A.A.; Sigmar, D.J.; Koniges, A.E.
1996-01-01
We report on our ongoing development of the 3D Fokker-Planck code ALLA for a highly collisional scrape-off-layer (SOL) plasma. A SOL with strong gradients of density and temperature in the spatial dimension is modeled. Our method is based on a 3-D adaptive grid (in space, magnitude of the velocity, and cosine of the pitch angle) and a second order conservative scheme. Note that the grid size is typically 100 x 257 x 65 nodes. It was shown in our previous work that only these capabilities make it possible to benchmark a 3D code against a spatially-dependent self-similar solution of a kinetic equation with the Landau collision term. In the present work we show results of a more precise benchmarking against the exact solutions of the kinetic equation using a new parallel code ALLAp with an improved method of parallelization and a modified boundary condition at the plasma edge. We also report first results from the code parallelization using Message Passing Interface for a Massively Parallel CRI T3D platform. We evaluate the ALLAp code performance versus the number of T3D processors used and compare its efficiency against a Work/Data Sharing parallelization scheme and a workstation version
High-Order Entropy Stable Finite Difference Schemes for Nonlinear Conservation Laws: Finite Domains
Fisher, Travis C.; Carpenter, Mark H.
2013-01-01
Developing stable and robust high-order finite difference schemes requires mathematical formalism and appropriate methods of analysis. In this work, nonlinear entropy stability is used to derive provably stable high-order finite difference methods with formal boundary closures for conservation laws. Particular emphasis is placed on the entropy stability of the compressible Navier-Stokes equations. A newly derived entropy stable weighted essentially non-oscillatory finite difference method is used to simulate problems with shocks and a conservative, entropy stable, narrow-stencil finite difference approach is used to approximate viscous terms.
Gao, Kaiqiang; Wu, Chongqing; Sheng, Xinzhi; Shang, Chao; Liu, Lanlan; Wang, Jian
2015-09-01
An optical code division multiple access (OCDMA) secure communications system scheme with a rapidly reconfigurable polarization shift keying (Pol-SK) bipolar user code is proposed and demonstrated. Compared to fixed-code OCDMA, constantly changing the user code greatly improves the anti-eavesdropping performance. A Pol-SK OCDMA experiment with a 10 Gchip/s user code and a 1.25 Gb/s payload user data rate has been realized, which shows that this scheme has good tolerance and could be easily realized.
Ashyralyyeva, Maral; Ashyraliyev, Maksat
2016-08-01
In the present paper, a second order of accuracy difference scheme for the approximate solution of a source identification problem for hyperbolic-parabolic equations is constructed. A theorem on stability estimates for the solution of this difference scheme and its first- and second-order difference derivatives is presented. In applications, this abstract result permits us to obtain stability estimates for the solutions of difference schemes for the approximate solution of two source identification problems for hyperbolic-parabolic equations.
Adaptive Network Coded Clouds: High Speed Downloads and Cost-Effective Version Control
DEFF Research Database (Denmark)
Sipos, Marton A.; Heide, Janus; Roetter, Daniel Enrique Lucani
2018-01-01
Although cloud systems provide a reliable and flexible storage solution, the use of a single cloud service constitutes a single point of failure, which can compromise data availability, download speed, and security. To address these challenges, we advocate for the use of multiple cloud storage ... providers simultaneously using network coding as the key enabling technology. Our goal is to study two challenges of network coded storage systems. First, the efficient update of the number of coded fragments per cloud in a system aggregating multiple clouds in order to boost the download speed of files. We ... developed a novel scheme using recoding with limited packets to trade-off storage space, reliability, and data retrieval speed. Implementation and measurements with commercial cloud providers show that up to 9x less network use is needed compared to other network coding schemes, while maintaining similar...
International Nuclear Information System (INIS)
Balsara, D.S.
1999-01-01
In this paper we analyze some of the numerical issues that are involved in making time-implicit higher-order Godunov schemes for the equations of radiation hydrodynamics (and the Euler or Navier-Stokes equations). This is done primarily with the intent of incorporating such methods in the author's RIEMANN code. After examining the issues it is shown that the construction of a time-implicit higher-order Godunov scheme for radiation hydrodynamics would be benefited by our ability to evaluate exact Jacobians of the numerical flux that is based on Roe-type flux difference splitting. In this paper we show that this can be done analytically in a form that is suitable for efficient computational implementation. It is also shown that when multiple fluid species are used or when multiple radiation frequencies are used the computational cost in the evaluation of the exact Jacobians scales linearly with the number of fluid species or the number of radiation frequencies. Connections are made to other types of numerical fluxes, especially those based on flux difference splittings. It is shown that the evaluation of the exact Jacobian for such numerical fluxes is also benefited by the present strategy and the results given here. It is, however, pointed out that time-implicit schemes that are based on the evaluation of the exact Jacobians for flux difference splittings using the methods developed here are both computationally more efficient and numerically more stable than corresponding time-implicit schemes that are based on the evaluation of the exact or approximate Jacobians for flux vector splittings. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M
2010-02-01
In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency, owing to symbol-level instead of bit-level processing, but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper presents, the proposed NB-LDPC-CM scheme addresses the needs of future OTNs, namely achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN, better than its prior-art binary counterpart.
Construction and decoding of matrix-product codes from nested codes
DEFF Research Database (Denmark)
Hernando, Fernando; Lally, Kristine; Ruano, Diego
2009-01-01
We consider matrix-product codes [C1 ... Cs] · A, where C1, ..., Cs are nested linear codes and matrix A has full rank. We compute their minimum distance and provide a decoding algorithm when A is a non-singular-by-columns matrix. The decoding algorithm decodes up to half of the minimum distance.
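The encoding map [c1 ... cs] · A can be sketched over GF(2) as follows; the constituent codewords and the choice of A below are illustrative, not taken from the paper.

```python
import numpy as np

def matrix_product_encode(codewords, A):
    """Encode [c1 ... cs] . A over GF(2): block j of the result is
    sum_i A[i, j] * c_i (mod 2); the l blocks of length n are then
    concatenated into a single codeword of length n*l."""
    C = np.asarray(codewords) % 2       # s x n, one constituent codeword per row
    blocks = (np.asarray(A).T @ C) % 2  # l x n, one output block per row
    return blocks.reshape(-1)
```

With A = [[1, 1], [0, 1]] this reduces to the classical (u, u+v) construction: the first block is c1 and the second is c1 + c2.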
Asynchronous Gossip for Averaging and Spectral Ranking
Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh
2014-08-01
We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
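Classical symmetric pairwise gossip, the baseline both variants depart from, can be sketched as follows. Note that the symmetric midpoint update preserves the global sum by construction, which is exactly the property the paper shows can fail for some asynchronous variants; graph and parameters below are illustrative assumptions.

```python
import random

def pairwise_gossip(values, edges, rounds, seed=0):
    """Randomized pairwise gossip: at each step a random edge (i, j) is
    activated and both endpoints replace their values by the midpoint.
    On a connected graph all values converge to the global average."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        m = (x[i] + x[j]) / 2.0   # symmetric update: sum is preserved
        x[i] = x[j] = m
    return x
```

On a 3-node ring with values [0, 4, 8], repeated activations drive every node to the average 4 while the total stays 12.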
A Distributed Taxation Based Rank Adaptation Scheme for 5G Small Cells
DEFF Research Database (Denmark)
Catania, Davide; Cattoni, Andrea Fabio; Mahmood, Nurul Huda
2015-01-01
The further densification of small cells imposes high and undesirable levels of inter-cell interference. Multiple Input Multiple Output (MIMO) systems, along with advanced receiver techniques, provide us with extra degrees of freedom to combat such a problem. With such tools, rank adaptation...
Acceleration of step and linear discontinuous schemes for the method of characteristics in DRAGON5
Directory of Open Access Journals (Sweden)
Alain Hébert
2017-09-01
The applicability of the algebraic collapsing acceleration (ACA) technique to the method of characteristics (MOC) in cases with scattering anisotropy and/or linear sources was investigated. Previously, the ACA was proven successful in cases with isotropic scattering and uniform (step) sources. A presentation is first made of the MOC implementation available in the DRAGON5 code. Two categories of schemes are available for integrating the propagation equations: (1) the first is based on exact integration and leads to the classical step characteristics (SC) and linear discontinuous characteristics (LDC) schemes, and (2) the second leads to diamond differencing schemes of various orders in space. The focus was on accelerating these MOC schemes using the generalized minimal residual method [GMRES(m)] preconditioned with the ACA technique. Numerical results are provided for a two-dimensional (2D) eight-symmetry pressurized water reactor (PWR) assembly mockup in the context of the DRAGON5 code.
A Robust Cross Coding Scheme for OFDM Systems
Shao, X.; Slump, Cornelis H.
2010-01-01
In wireless OFDM-based systems, coding jointly over all the sub-carriers simultaneously performs better than coding separately per sub-carrier. However, the joint coding is not always optimal because its achievable channel capacity (i.e. the maximum data rate) is inversely proportional to the
Verification of the Korsar code on results of experiments executed on the PSB-VVER facility
International Nuclear Information System (INIS)
Roginskaya, V.L.; Pylev, S.S.; Elkin, I.V.
2005-01-01
This paper presents some results of computational research carried out within the framework of verification of the KORSAR thermal-hydraulic code. This code was designed at the NITI named after A.P. Aleksandrov (Russia). The general purpose of the work was the development of a nodalization scheme of the PSB-VVER integral facility, testing of the scheme, and computational modelling of the experiment 'The PSB-VVER Natural Circulation Test With Stepwise Reduction of the Primary Inventory'. The NC test was performed within the framework of the OECD PSB-VVER Project (task no. 3). This Project is focused on the provision of experimental data for code assessment with regard to VVER analysis. The paper presents a nodalization scheme of the PSB-VVER facility and the results of pre- and post-test calculations of the specified experiment, obtained with the KORSAR code. The experimental data and the KORSAR pre-test calculation results are in good agreement. A post-test calculation of the experiment with the KORSAR code was performed in order to assess the code's capability to simulate the phenomena relevant to the test. The code showed a reasonable prediction of the phenomena measured in the experiment. (authors)
Generalized Reduced Rank Tests using the Singular Value Decomposition
F.R. Kleibergen (Frank); R. Paap (Richard)
2003-01-01
textabstractWe propose a novel statistic to test the rank of a matrix. The rank statistic overcomes deficiencies of existing rank statistics, like: necessity of a Kronecker covariance matrix for the canonical correlation rank statistic of Anderson (1951), sensitivity to the ordering of the variables
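The core ingredient, reading the rank of a matrix off its singular values, can be sketched as follows; the actual Kleibergen-Paap statistic adds a covariance-based weighting and an asymptotic distribution for inference, which this deterministic sketch omits.

```python
import numpy as np

def numerical_rank(M, tol=None):
    """Count singular values above a tolerance; values below it are
    treated as zero. A statistical rank test instead asks whether the
    smallest singular values differ significantly from zero."""
    s = np.linalg.svd(M, compute_uv=False)
    if tol is None:
        tol = max(M.shape) * np.finfo(float).eps * (s[0] if s.size else 0.0)
    return int((s > tol).sum())
```

A rank-1 outer product is correctly detected even though all its entries are nonzero, which is what makes singular values, rather than entries, the right object to test.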
French ESPN order, codes and nuclear industry requirements
International Nuclear Information System (INIS)
Laugier, C.; Grandemange, J.M.; Cleurennec, M.
2010-01-01
Work on coding safety regulations applicable to large equipment was undertaken in France as of 1978 to accompany the construction of a French nuclear plant. The needs of manufacturers were threefold: translate the design rules from the American licensor, meet the safety objectives expressed in French regulations published at that time through coding of industrial practices (order of February 26, 1974) and stabilize the work reference system between the operator - consultant - and the manufacturer responsible for applying technical recommendations. Significant work was carried out by AFCEN (the French Association for the Design, Construction and Operating Supervision of the equipment for Electronuclear boilers), an association created for this purpose, leading to the publication of a collection of rules related to mechanical equipment for pressurised water reactors, RCC-M and RSE-M, which will be discussed later, and also in several other technical fields: particularly mechanical equipment in fast neutron reactors, RCC-MR, electricity (RCC-E), and fuel (RCC-C). (authors)
Lisman, John E; Jensen, Ole
2013-03-20
Theta and gamma frequency oscillations occur in the same brain regions and interact with each other, a process called cross-frequency coupling. Here, we review evidence for the following hypothesis: that the dual oscillations form a code for representing multiple items in an ordered way. This form of coding has been most clearly demonstrated in the hippocampus, where different spatial information is represented in different gamma subcycles of a theta cycle. Other experiments have tested the functional importance of oscillations and their coupling. These involve correlation of oscillatory properties with memory states, correlation with memory performance, and effects of disrupting oscillations on memory. Recent work suggests that this coding scheme coordinates communication between brain regions and is involved in sensory as well as memory processes. Copyright © 2013 Elsevier Inc. All rights reserved.
Generalized reduced rank tests using the singular value decomposition
Kleibergen, F.R.; Paap, R.
2002-01-01
We propose a novel statistic to test the rank of a matrix. The rank statistic overcomes deficiencies of existing rank statistics, like: necessity of a Kronecker covariance matrix for the canonical correlation rank statistic of Anderson (1951), sensitivity to the ordering of the variables for the LDU
Normal scheme for solving the transport equation independently of spatial discretization
International Nuclear Information System (INIS)
Zamonsky, O.M.
1993-01-01
To solve the discrete ordinates neutron transport equation, a general-order nodal scheme is used, where nodes are allowed to have different orders of approximation and the whole system reaches a final order distribution. Independence in the selection of the system discretization and the order of approximation is obtained without loss of accuracy. The final equations and the iterative method to reach a converged-order solution were implemented in a two-dimensional computer code to solve monoenergetic, isotropic-scattering, external-source problems. Two benchmark problems were solved using different automatic order selection methods. Results show accurate solutions independent of the spatial discretization, regardless of the initial selection of the order distribution. (author)
A higher order space-time Galerkin scheme for time domain integral equations
Pray, Andrew J.
2014-12-01
Stability of time domain integral equation (TDIE) solvers has remained an elusive goal for many years. Advancement of this research has largely progressed on four fronts: 1) Exact integration, 2) Lubich quadrature, 3) smooth temporal basis functions, and 4) space-time separation of convolutions with the retarded potential. The latter method's efficacy in stabilizing solutions to the time domain electric field integral equation (TD-EFIE) was previously reported for first-order surface descriptions (flat elements) and zeroth-order functions as the temporal basis. In this work, we develop the methodology necessary to extend the scheme to higher order surface descriptions as well as to enable its use with higher order basis functions in both space and time. These basis functions are then used in a space-time Galerkin framework. A number of results are presented that demonstrate convergence in time. The viability of the space-time separation method in producing stable results is demonstrated experimentally for these examples.
A higher order space-time Galerkin scheme for time domain integral equations
Pray, Andrew J.; Beghein, Yves; Nair, Naveen V.; Cools, Kristof; Bagci, Hakan; Shanker, Balasubramaniam
2014-01-01
Stability of time domain integral equation (TDIE) solvers has remained an elusive goal for many years. Advancement of this research has largely progressed on four fronts: 1) Exact integration, 2) Lubich quadrature, 3) smooth temporal basis functions, and 4) space-time separation of convolutions with the retarded potential. The latter method's efficacy in stabilizing solutions to the time domain electric field integral equation (TD-EFIE) was previously reported for first-order surface descriptions (flat elements) and zeroth-order functions as the temporal basis. In this work, we develop the methodology necessary to extend the scheme to higher order surface descriptions as well as to enable its use with higher order basis functions in both space and time. These basis functions are then used in a space-time Galerkin framework. A number of results are presented that demonstrate convergence in time. The viability of the space-time separation method in producing stable results is demonstrated experimentally for these examples.
KEWPIE: A dynamical cascade code for decaying excited compound nuclei
Bouriquet, Bertrand; Abe, Yasuhisa; Boilley, David
2004-05-01
A new dynamical cascade code for decaying hot nuclei is proposed, specially adapted to the synthesis of super-heavy nuclei. In this case the channel of interest is the tiny fraction of nuclei that decays through particle emission, so the code avoids classical Monte Carlo methods and proposes a new numerical scheme. The time dependence is explicitly taken into account in order to cope with the fact that the fission decay rate might not be constant. The code allows the evaluation of both statistical and dynamical observables. Results are successfully compared with experimental data.
A Heuristic Hierarchical Scheme for Academic Search and Retrieval
DEFF Research Database (Denmark)
Amolochitis, Emmanouil; Christou, Ioannis T.; Tan, Zheng-Hua
2013-01-01
and a graph-theoretic computed score that relates the paper's index terms with each other. We designed and developed a meta-search engine that submits user queries to standard digital repositories of academic publications and re-ranks the repository results using the hierarchical heuristic scheme. We evaluate ... and by more than 907.5% in terms of LEX. We also re-rank the top-10 results of a subset of the original 58 user queries produced by Google Scholar, Microsoft Academic Search, and ArnetMiner; the results show that PubSearch compares very well against these search engines as well. The proposed scheme can ... be easily plugged in any existing search engine for retrieval of academic publications.
Directory of Open Access Journals (Sweden)
Félix Gontier
2017-11-01
The spreading of urban areas and the growth of human population worldwide raise societal and environmental concerns. To better address these concerns, the monitoring of the acoustic environment in urban as well as rural or wilderness areas is an important matter. Building on the recent development of low cost hardware acoustic sensors, we propose in this paper to consider a sensor grid approach to tackle this issue. In this kind of approach, the crucial question is the nature of the data that are transmitted from the sensors to the processing and archival servers. To this end, we propose an efficient audio coding scheme based on a third-octave band spectral representation that allows: (1) the estimation of standard acoustic indicators; and (2) the recognition of acoustic events at a state-of-the-art performance rate. The former is useful to provide quantitative information about the acoustic environment, while the latter is useful to gather qualitative information and build perceptually motivated indicators using, for example, the emergence of a given sound source. The coding scheme is also demonstrated to transmit spectrally encoded data that, reverted to the time domain using state-of-the-art techniques, are not intelligible, thus protecting the privacy of citizens.
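The third-octave band representation at the core of the coding scheme can be illustrated with a short sketch. The base-2 band convention with a 1 kHz reference used below is an assumption; the paper's exact band definition (e.g. IEC base-10 bands) may differ.

```python
# Sketch of third-octave band geometry (assumption: base-2 convention,
# reference 1 kHz; the paper may follow a different band definition).

def third_octave_centres(n_low=-10, n_high=10, f_ref=1000.0):
    """Nominal centre frequencies f_c = f_ref * 2**(n/3) for band index n."""
    return [f_ref * 2.0 ** (n / 3.0) for n in range(n_low, n_high + 1)]

def band_edges(f_c):
    """Lower and upper edge of the band centred at f_c."""
    return f_c * 2.0 ** (-1.0 / 6.0), f_c * 2.0 ** (1.0 / 6.0)

centres = third_octave_centres()  # 21 bands, n = -10 .. 10
```

Summing a power spectrum between `band_edges(f_c)` for each centre gives the compact per-band representation from which indicators such as equivalent levels can be derived.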
A model-based approach to operational event groups ranking
Energy Technology Data Exchange (ETDEWEB)
Simic, Zdenko [European Commission Joint Research Centre, Petten (Netherlands). Inst. for Energy and Transport; Maqua, Michael [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany); Wattrelos, Didier [Institut de Radioprotection et de Surete Nucleaire (IRSN), Fontenay-aux-Roses (France)
2014-04-15
The operational experience (OE) feedback provides improvements in all industrial activities. Identification of the most important and valuable groups of events within accumulated experience is important in order to focus on a detailed investigation of events. The paper describes the new ranking method and compares it with three others. Methods have been described and applied to OE events utilised by nuclear power plants in France and Germany for twenty years. The results show that different ranking methods only roughly agree on which of the event groups are the most important ones. In the new ranking method the analytical hierarchy process is applied in order to assure consistent and comprehensive weighting determination for ranking indexes. The proposed method allows a transparent and flexible event groups ranking and identification of the most important OE for further more detailed investigation in order to complete the feedback. (orig.)
International Nuclear Information System (INIS)
Xing Yulong; Shu Chiwang
2006-01-01
Hyperbolic balance laws have steady state solutions in which the flux gradients are nonzero but are exactly balanced by the source term. In our earlier work [J. Comput. Phys. 208 (2005) 206-227; J. Sci. Comput., accepted], we designed a well-balanced finite difference weighted essentially non-oscillatory (WENO) scheme, which at the same time maintains genuine high order accuracy for general solutions, to a class of hyperbolic systems with separable source terms including the shallow water equations, the elastic wave equation, the hyperbolic model for a chemosensitive movement, the nozzle flow and a two phase flow model. In this paper, we generalize high order finite volume WENO schemes and Runge-Kutta discontinuous Galerkin (RKDG) finite element methods to the same class of hyperbolic systems to maintain a well-balanced property. Finite volume and discontinuous Galerkin finite element schemes are more flexible than finite difference schemes to treat complicated geometry and adaptivity. However, because of a different computational framework, the maintenance of the well-balanced property requires different technical approaches. After the description of our well-balanced high order finite volume WENO and RKDG schemes, we perform extensive one and two dimensional simulations to verify the properties of these schemes such as the exact preservation of the balance laws for certain steady state solutions, the non-oscillatory property for general solutions with discontinuities, and the genuine high order accuracy in smooth regions.
One-step trinary signed-digit arithmetic using an efficient encoding scheme
Salim, W. Y.; Fyath, R. S.; Ali, S. A.; Alam, Mohammad S.
2000-11-01
The trinary signed-digit (TSD) number system is of interest for ultra fast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operation. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for recoded TSD arithmetic technique.
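The paper's 5-combination coding table is not reproduced in the abstract. As a generic illustration of the trinary signed-digit idea, the sketch below encodes an integer into balanced-ternary digits {-1, 0, 1}; this stands in for, and is not, the paper's actual TSD encoding.

```python
# Illustrative balanced-ternary (digits -1, 0, 1) encoding -- an assumption
# standing in for the paper's 5-combination TSD coding table.

def to_balanced_ternary(n):
    """Digits in {-1, 0, 1}, least significant first; works for negatives too."""
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:          # fold digit 2 into -1 with a carry
            r = -1
            n += 1
        digits.append(r)
        n //= 3
    return digits or [0]

def from_digits(digits):
    """Decode: value = sum of d_i * 3**i."""
    return sum(d * 3 ** i for i, d in enumerate(digits))
```

Because every digit carries its own sign, additions in such a representation can propagate carries only locally, which is what enables the constant-time parallel arithmetic the abstract describes.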
Development and application of a third order scheme of finite differences centered in mesh
International Nuclear Information System (INIS)
Delfin L, A.; Alonso V, G.; Valle G, E. del
2003-01-01
In this work the development of a third-order mesh-centered finite difference scheme is presented and applied to the numerical solution of the multigroup diffusion equations in steady state and X-Y geometry. This scheme was originally developed by Hennart and del Valle for the monoenergetic diffusion equation with a known source; they showed that the scheme is of third order by comparing the numerical solution with the analytical solution of a model problem under several mesh refinements and boundary conditions. Their scheme also introduces numerical quadratures to evaluate the stiffness and mass matrices that appear when the Galerkin finite element method is used. One of the quadratures is a non-standard open 4-point Newton-Cotes quadrature, used to evaluate the elements of the stiffness matrices approximately. The other is the 3-point Radau quadrature, used to evaluate the elements of all the mass matrices. One objective of these quadratures is to eliminate the couplings between the Legendre moments 0 and 1 associated with the left and right faces, as well as those associated with the bottom and top faces, of each cell of the discretization. The other objective is to satisfy the particle balance in weighted form in each cell. In this work that development is extended to multiplying media with several energy groups. Diverse details inherent to the technique are described, particularly those concerning the simplification of the algebraic systems arising from the spatial discretization. Numerical results for several test problems are presented and compared with those obtained with other nodal techniques. (Author)
Low-rank and sparse modeling for visual analysis
Fu, Yun
2014-01-01
This book provides a view of low-rank and sparse computing, especially approximation, recovery, representation, scaling, coding, embedding and learning among unconstrained visual data. The book includes chapters covering multiple emerging topics in this new field. It links multiple popular research fields in Human-Centered Computing, Social Media, Image Classification, Pattern Recognition, Computer Vision, Big Data, and Human-Computer Interaction. It contains an overview of the low-rank and sparse modeling techniques for visual analysis by examining both theoretical analysis and real-world applications.
COSY 5.0 - the fifth order code for corpuscular optical systems
International Nuclear Information System (INIS)
Berz, M.; Hoffmann, H.C.; Wollnik, H.
1987-01-01
COSY 5.0 is a new computer code for the design of corpuscular optical systems based on the principle of transfer matrices. The particle optical calculations include all image aberrations through fifth order. COSY 5.0 uses canonical coordinates and exploits the symplectic condition to increase the speed of computation. COSY 5.0 contains a library for the computation of matrix elements of all commonly used corpuscular optical elements such as electric and magnetic multipoles and sector fields. The corresponding formulas were generated algebraically by the computer code HAMILTON. Care was taken that the optimization of optical elements is achieved with minimal numerical effort. Finally COSY 5.0 has a very general mnemonic input code resembling a higher programming language. (orig.)
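The transfer-matrix principle underlying COSY can be illustrated in its simplest, first-order form: each optical element is a matrix acting on (position, slope), and a system is the product of its element matrices. The drift and thin-lens matrices below are textbook linear optics, not COSY 5.0 output.

```python
# First-order (linear) transfer matrices -- a sketch of the principle,
# not COSY's fifth-order canonical machinery.

def drift(L):
    """Field-free drift of length L."""
    return [[1.0, L], [0.0, 1.0]]

def thin_lens(f):
    """Thin focusing lens of focal length f."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Object at 2f, image at 2f: the (1,2) element of the system matrix
# vanishes (imaging condition) and the magnification is -1.
f = 1.0
system = matmul(drift(2 * f), matmul(thin_lens(f), drift(2 * f)))
```

Higher-order codes such as COSY extend exactly this composition idea to aberration terms through fifth order.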
A Case-Based Reasoning Method with Rank Aggregation
Sun, Jinhua; Du, Jiao; Hu, Jian
2018-03-01
In order to improve the accuracy of case-based reasoning (CBR), this paper proposes a new CBR framework based on the principle of rank aggregation. First, ranking methods are put forward in each attribute subspace of a case, yielding the ordering relation between cases on each attribute and hence a ranking matrix. Second, similar-case retrieval from the ranking matrix is transformed into a rank aggregation optimization problem, using the Kemeny optimal aggregation. On this basis, a rank aggregation case-based reasoning algorithm, named RA-CBR, is designed. Experimental results on UCI data sets show that the case retrieval accuracy of the RA-CBR algorithm is higher than that of Euclidean distance CBR and Mahalanobis distance CBR, so we can conclude that the RA-CBR method can increase the performance and efficiency of CBR.
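The Kemeny-optimal aggregation mentioned above can be sketched by brute force: pick the permutation minimizing the total Kendall tau distance to all input rankings. This is feasible only for small item sets; the paper's actual optimization procedure is not detailed in the abstract.

```python
# Brute-force Kemeny aggregation (exponential in the number of items) --
# an illustrative sketch, not the RA-CBR algorithm itself.
from itertools import permutations

def kendall_tau(r1, r2):
    """Number of item pairs ordered differently in the two rankings."""
    pos = {x: i for i, x in enumerate(r2)}
    d = 0
    for i in range(len(r1)):
        for j in range(i + 1, len(r1)):
            if pos[r1[i]] > pos[r1[j]]:
                d += 1
    return d

def kemeny_aggregate(rankings):
    """Permutation minimizing the summed Kendall tau distance to all inputs."""
    items = rankings[0]
    return list(min(permutations(items),
                    key=lambda p: sum(kendall_tau(list(p), r) for r in rankings)))
```

For example, aggregating two copies of a-b-c with one c-a-b returns the majority order a-b-c.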
Zhu, Jun; Shu, Chi-Wang
2017-11-01
A new class of high order weighted essentially non-oscillatory (WENO) schemes (Zhu and Qiu, 2016, [50]) is applied to solve Euler equations with steady state solutions. It is known that the classical WENO schemes (Jiang and Shu, 1996, [23]) might suffer from slight post-shock oscillations. Even though such post-shock oscillations are small enough in magnitude and do not visually affect the essentially non-oscillatory property, they are truly responsible for the residue to hang at a truncation error level instead of converging to machine zero. With the application of this new class of WENO schemes, such slight post-shock oscillations are essentially removed and the residue can settle down to machine zero in steady state simulations. This new class of WENO schemes uses a convex combination of a quartic polynomial with two linear polynomials on unequal size spatial stencils in one dimension and is extended to two dimensions in a dimension-by-dimension fashion. By doing so, such WENO schemes use the same information as the classical WENO schemes in Jiang and Shu (1996) [23] and yield the same formal order of accuracy in smooth regions, yet they could converge to steady state solutions with very tiny residue close to machine zero for our extensive list of test problems including shocks, contact discontinuities, rarefaction waves or their interactions, and with these complex waves passing through the boundaries of the computational domain.
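A minimal sketch of the classical WENO-JS reconstruction of Jiang and Shu referenced above may be useful: the left-biased fifth-order interface value is a nonlinear convex combination of three third-order candidates, weighted by smoothness indicators.

```python
# Classical fifth-order WENO-JS reconstruction (Jiang-Shu) of u at x_{i+1/2}
# from the cell averages v = [v_{i-2}, v_{i-1}, v_i, v_{i+1}, v_{i+2}].
# This sketches the baseline scheme the paper improves upon, not the new one.

def weno5(v, eps=1e-6):
    v0, v1, v2, v3, v4 = v
    # Three third-order candidate reconstructions
    p0 = (2 * v0 - 7 * v1 + 11 * v2) / 6.0
    p1 = (-v1 + 5 * v2 + 2 * v3) / 6.0
    p2 = (2 * v2 + 5 * v3 - v4) / 6.0
    # Smoothness indicators
    b0 = 13 / 12 * (v0 - 2 * v1 + v2) ** 2 + 0.25 * (v0 - 4 * v1 + 3 * v2) ** 2
    b1 = 13 / 12 * (v1 - 2 * v2 + v3) ** 2 + 0.25 * (v1 - v3) ** 2
    b2 = 13 / 12 * (v2 - 2 * v3 + v4) ** 2 + 0.25 * (3 * v2 - 4 * v3 + v4) ** 2
    # Nonlinear weights from the linear weights (0.1, 0.6, 0.3)
    a0, a1, a2 = 0.1 / (eps + b0) ** 2, 0.6 / (eps + b1) ** 2, 0.3 / (eps + b2) ** 2
    s = a0 + a1 + a2
    return (a0 * p0 + a1 * p1 + a2 * p2) / s
```

On smooth (here linear) data all candidates agree and the weights revert to the linear ones; near a jump, the weights shift essentially all mass onto the smooth-side stencil.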
Recent advances in neutral particle transport methods and codes
International Nuclear Information System (INIS)
Azmy, Y.Y.
1996-01-01
An overview of ORNL's three-dimensional neutral particle transport code, TORT, is presented. Special features of the code that make it invaluable for large applications are summarized for the prospective user. Advanced capabilities currently under development and installation in the production release of TORT are discussed; they include: multitasking on Cray platforms running the UNICOS operating system; the Adjacent-cell Preconditioning acceleration scheme; and graphics codes for displaying computed quantities such as the flux. Further developments for TORT and its companion codes to enhance its present capabilities, as well as to expand its range of applications, are discussed. Speculation on the next generation of neutral particle transport codes at ORNL, especially regarding unstructured grids and high order spatial approximations, is also mentioned.
Settle, Sean O.
2013-01-01
The primary aim of this paper is to answer the question, What are the highest-order five- or nine-point compact finite difference schemes? To answer this question, we present several simple derivations of finite difference schemes for the one- and two-dimensional Poisson equation on uniform, quasi-uniform, and nonuniform face-to-face hyperrectangular grids and directly prove the existence or nonexistence of their highest-order local accuracies. Our derivations are unique in that we do not make any initial assumptions on stencil symmetries or weights. For the one-dimensional problem, the derivation using the three-point stencil on both uniform and nonuniform grids yields a scheme with arbitrarily high-order local accuracy. However, for the two-dimensional problem, the derivation using the corresponding five-point stencil on uniform and quasi-uniform grids yields a scheme with at most second-order local accuracy, and on nonuniform grids yields at most first-order local accuracy. When expanding the five-point stencil to the nine-point stencil, the derivation using the nine-point stencil on uniform grids yields at most sixth-order local accuracy, but on quasi- and nonuniform grids yields at most fourth- and third-order local accuracy, respectively. © 2013 Society for Industrial and Applied Mathematics.
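The derivations above rest on matching Taylor expansions on a stencil, which can be reproduced mechanically: the weights of a finite difference formula solve a small Vandermonde system. The sketch below uses exact rational arithmetic; it illustrates the standard order-matching condition, not the paper's specific compact-scheme derivations.

```python
# Weights w_j with sum_j w_j u(x_j) ~ u^(m)(0): Taylor expansion gives the
# Vandermonde system  sum_j w_j x_j**k = k! * delta_{k,m}  for k = 0..n-1.
from fractions import Fraction
from math import factorial

def fd_weights(nodes, m):
    n = len(nodes)
    A = [[Fraction(x) ** k for x in nodes] for k in range(n)]
    b = [Fraction(factorial(k)) if k == m else Fraction(0) for k in range(n)]
    # Gauss-Jordan elimination in exact rational arithmetic
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        inv = A[col][col]
        A[col] = [a / inv for a in A[col]]
        b[col] = b[col] / inv
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return b

# Second derivative on the uniform three-point stencil {-1, 0, 1}:
# recovers the classical weights (1, -2, 1), to be divided by h**2.
```

On nonuniform nodes the same routine returns the (generally asymmetric) weights, which is where the order losses discussed in the paper originate.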
Energy Technology Data Exchange (ETDEWEB)
López, R., E-mail: ralope1@ing.uc3m.es; Lecuona, A., E-mail: lecuona@ing.uc3m.es; Nogueira, J., E-mail: goriba@ing.uc3m.es; Vereda, C., E-mail: cvereda@ing.uc3m.es
2017-03-15
Highlights: • A two-phase flow numerical algorithm with high order temporal schemes is proposed. • The transient solution route depends on the temporal high order scheme employed. • The ESDIRK scheme for two-phase flow events exhibits high computational performance. • Computational implementation of the ESDIRK scheme can be done in a very easy manner. - Abstract: An extension for 1-D transient two-phase flows of the SIMPLE-ESDIRK method, initially developed for incompressible viscous flows by Ijaz, is presented. This extension is motivated by the high temporal order of accuracy demanded to cope with fast phase change events. This methodology is suitable for boiling heat exchangers, solar thermal receivers, etc. The solution methodology consists of a finite volume staggered grid discretization of the governing equations in which the transient terms are treated with the explicit first stage singly diagonally implicit Runge-Kutta (ESDIRK) method. It is suitable for stiff differential equations, present in instant boiling or condensation processes. It is combined with the semi-implicit pressure linked equations algorithm (SIMPLE) for the calculation of the pressure field. The case study consists of the numerical reproduction of the Bartolomei upward boiling pipe flow experiment. The steady-state validation of the numerical algorithm is made against these experimental results and well-known numerical results for that experiment. In addition, a detailed study reveals the benefits over the first order Euler Backward method when applying 3rd and 4th order schemes, with emphasis on the behaviour when the system is subjected to periodic square wave wall heat function disturbances, concluding that the use of the ESDIRK method in two-phase calculations presents remarkable accuracy and computational advantages.
Kimura, Michio; Kuranishi, Makoto; Sukenobu, Yoshiharu; Watanabe, Hiroki; Tani, Shigeki; Sakusabe, Takaya; Nakajima, Takashi; Morimura, Shinya; Kabata, Shun
2002-06-01
The digital imaging and communications in medicine (DICOM) standard includes parts regarding nonimage data information, such as image study ordering data and performed procedure data, and is used for sharing information between HIS/RIS and modality systems, which is essential for IHE. To bring such parts of the DICOM standard into force in Japan, a joint committee of JIRA and JAHIS established the JJ1017 management guideline, specifying, for example, which items are legally required in Japan, while remaining optional in the DICOM standard. In Japan, the contents of orders from referring physicians for radiographic examinations include details of the examination. Such details are not used typically by referring physicians requesting radiographic examinations in the United States, because radiologists in the United States often determine the examination protocol. The DICOM standard has code tables for examination type, region, and direction for image examination orders. However, this investigation found that it does not include items that are detailed sufficiently for use in Japan, because of the above-mentioned reason. To overcome these drawbacks, we have generated the JJ1017 code for these 3 codes for use based on the JJ1017 guidelines. This report introduces the JJ1017 code. These codes (the study type codes in particular) must be expandable to keep up with technical advances in equipment. Expansion has 2 directions: width for covering more categories and depth for specifying the information in more detail (finer categories). The JJ1017 code takes these requirements into consideration and clearly distinguishes between the stem part as the common term and the expansion. The stem part of the JJ1017 code partially utilizes the DICOM codes to remain in line with the DICOM standard. This work is an example of how local requirements can be met by using the DICOM standard and extending it.
Ranking economic history journals
DEFF Research Database (Denmark)
Di Vaio, Gianfranco; Weisdorf, Jacob Louis
2010-01-01
This study ranks, for the first time, 12 international academic journals that have economic history as their main topic. The ranking is based on data collected for the year 2007. Journals are ranked using standard citation analysis where we adjust for age, size and self-citation of journals. We also ... compare the leading economic history journals with the leading journals in economics in order to measure the influence on economics of economic history, and vice versa. With a few exceptions, our results confirm the general idea about which economic history journals are the most influential for economic ... history, and that, although economic history is quite independent from economics as a whole, knowledge exchange between the two fields is indeed going on.
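The size and self-citation adjustments described above can be illustrated with a toy scoring function. The authors' exact normalization (including the age adjustment) is not given in the abstract, so the formula and journal figures below are purely hypothetical placeholders.

```python
# Toy citation score: self-citation-excluded citations per published article.
# An illustrative assumption, not the paper's actual normalization.

def impact_score(citations, self_citations, n_articles):
    return (citations - self_citations) / n_articles

journals = {
    "Journal A": impact_score(120, 20, 50),  # hypothetical figures
    "Journal B": impact_score(80, 5, 25),
}
ranking = sorted(journals, key=journals.get, reverse=True)
```

Note how the smaller Journal B outranks Journal A once citations are normalized by output, the kind of effect size adjustment is meant to capture.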
High-Order Multioperator Compact Schemes for Numerical Simulation of Unsteady Subsonic Airfoil Flow
Savel'ev, A. D.
2018-02-01
On the basis of high-order schemes, the viscous gas flow over the NACA2212 airfoil is numerically simulated at a free-stream Mach number of 0.3 and Reynolds numbers ranging from 10^3 to 10^7. Flow regimes sequentially varying due to variations in the free-stream viscosity are considered. Vortex structures developing on the airfoil surface are investigated, and a physical interpretation of this phenomenon is given.
Energy Technology Data Exchange (ETDEWEB)
Pourgol-Mohammad, Mohammad, E-mail: pourgolmohammad@sut.ac.ir [Department of Mechanical Engineering, Sahand University of Technology, Tabriz (Iran, Islamic Republic of); Hoseyni, Seyed Mohsen [Department of Basic Sciences, East Tehran Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of); Hoseyni, Seyed Mojtaba [Building & Housing Research Center, Tehran (Iran, Islamic Republic of); Sepanloo, Kamran [Nuclear Science and Technology Research Institute, Tehran (Iran, Islamic Republic of)
2016-08-15
Highlights: • Existing uncertainty ranking methods prove inconsistent for TH applications. • Introduction of a new method for ranking sources of uncertainty in TH codes. • Modified PIRT qualitatively identifies and ranks uncertainty sources more precisely. • The importance of parameters is calculated by a limited number of TH code executions. • Methodology is applied successfully on LOFT-LB1 test facility. - Abstract: In application to thermal–hydraulic calculations by system codes, sensitivity analysis plays an important role for managing the uncertainties of code output and risk analysis. Sensitivity analysis is also used to confirm the results of qualitative Phenomena Identification and Ranking Table (PIRT). Several methodologies have been developed to address uncertainty importance assessment. Generally, uncertainty importance measures, mainly devised for the Probabilistic Risk Assessment (PRA) applications, are not affordable for computationally demanding calculations of the complex thermal–hydraulics (TH) system codes. In other words, for effective quantification of the degree of the contribution of each phenomenon to the total uncertainty of the output, a practical approach is needed by considering high computational burden of TH calculations. This study aims primarily to show the inefficiency of the existing approaches and then introduces a solution to cope with the challenges in this area by modification of variance-based uncertainty importance method. Important parameters are identified by the modified PIRT approach qualitatively then their uncertainty importance is quantified by a local derivative index. The proposed index is attractive from its practicality point of view on TH applications. It is capable of calculating the importance of parameters by a limited number of TH code executions. Application of the proposed methodology is demonstrated on LOFT-LB1 test facility.
ATHENA code manual. Volume 1. Code structure, system models, and solution methods
International Nuclear Information System (INIS)
Carlson, K.E.; Roth, P.A.; Ransom, V.H.
1986-09-01
The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code has been developed to perform transient simulation of the thermal hydraulic systems which may be found in fusion reactors, space reactors, and other advanced systems. A generic modeling approach is utilized which permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of a complete facility. Several working fluids are available to be used in one or more interacting loops. Different loops may have different fluids with thermal connections between loops. The modeling theory and associated numerical schemes are documented in Volume I in order to acquaint the user with the modeling base and thus aid effective use of the code. The second volume contains detailed instructions for input data preparation.
Wehde, M. E.
1995-01-01
The common method of digital image comparison by subtraction imposes various constraints on the image contents. Precise registration of images is required to assure proper evaluation of surface locations. The attribute being measured and the calibration and scaling of the sensor are also important to the validity and interpretability of the subtraction result. Influences of sensor gains and offsets complicate the subtraction process. The presence of any uniform systematic transformation component in one of two images to be compared distorts the subtraction results and requires analyst intervention to interpret or remove it. A new technique has been developed to overcome these constraints. Images to be compared are first transformed using the cumulative relative frequency as a transfer function. The transformed images represent the contextual relationship of each surface location with respect to all others within the image. The process of differentiating between the transformed images results in a percentile rank ordered difference. This process produces consistent terrain-change information even when the above requirements necessary for subtraction are relaxed. This technique may be valuable to an appropriately designed hierarchical terrain-monitoring methodology because it does not require human participation in the process.
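The cumulative-relative-frequency transform described above can be sketched directly: each pixel is replaced by the fraction of pixels in its image with values less than or equal to it, and the two transformed images are then differenced. Images are flattened to 1-D lists here for brevity.

```python
# Percentile-rank-ordered difference of two images (flattened to 1-D lists).
# A sketch of the transform described in the abstract.
from bisect import bisect_right

def percentile_transform(img):
    """Replace each pixel by its cumulative relative frequency in the image."""
    flat = sorted(img)
    n = len(img)
    return [bisect_right(flat, v) / n for v in img]

def rank_ordered_difference(img_a, img_b):
    """Difference of the two percentile-transformed images."""
    ta = percentile_transform(img_a)
    tb = percentile_transform(img_b)
    return [a - b for a, b in zip(ta, tb)]
```

Because any monotone gain or offset in one image leaves its percentile transform unchanged, uniform systematic sensor differences cancel out of the comparison, which is the key relaxation over plain subtraction.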
Subtracting a best rank-1 approximation may increase tensor rank
Stegeman, Alwin; Comon, Pierre
2010-01-01
It has been shown that a best rank-R approximation of an order-k tensor may not exist when R >= 2 and k >= 3. This poses a serious problem to data analysts using tensor decompositions; it has been observed numerically that, generally, this issue cannot be solved by consecutively computing and
Bilyeu, David
This dissertation presents an extension of the Conservation Element Solution Element (CESE) method from second- to higher-order accuracy. The new method retains the favorable characteristics of the original second-order CESE scheme, including (i) the use of the space-time integral equation for conservation laws, (ii) a compact mesh stencil, (iii) stability up to a CFL number of unity, (iv) a fully explicit, time-marching integration scheme, (v) true multidimensionality without using directional splitting, and (vi) the ability to handle two- and three-dimensional geometries by using unstructured meshes. This algorithm has been thoroughly tested in one, two and three spatial dimensions and has been shown to obtain the desired order of accuracy for solving both linear and non-linear hyperbolic partial differential equations. The scheme has also shown its ability to accurately resolve discontinuities in the solutions. Higher order unstructured methods such as the Discontinuous Galerkin (DG) method and the Spectral Volume (SV) method have been developed for one-, two- and three-dimensional applications. Although these schemes have seen extensive development and use, certain drawbacks of these methods have been well documented. For example, the explicit versions of these two methods have very stringent stability criteria, requiring that the time step be reduced as the order of the solver increases, for a given simulation on a given mesh. The research presented in this dissertation builds upon the work of Chang, who developed a fourth-order CESE scheme to solve a scalar one-dimensional hyperbolic partial differential equation. The completed research has resulted in two key deliverables. The first is a detailed derivation of high-order CESE methods on unstructured meshes for solving the conservation laws in two- and three-dimensional spaces. The second is the implementation of these numerical methods in a computer code.
GAIA: A 2-D Curvilinear moving grid hydrodynamic code
International Nuclear Information System (INIS)
Jourdren, H.
1987-02-01
The GAIA computer code is developed for time dependent, compressible, multimaterial fluid flow problems, to overcome some drawbacks of traditional 2-D Lagrangian codes. The initial goals of robustness, entropy accuracy, and efficiency in the presence of large interfacial slip have already been achieved. The general Godunov approach is applied to an arbitrary time-varying control-volume formulation. We review in this paper the Riemann solver, the Godunov cartesian and curvilinear moving grid schemes and an efficient grid generation algorithm. We finally outline a possible second-order accuracy extension.
Power Allocation Optimization: Linear Precoding Adapted to NB-LDPC Coded MIMO Transmission
Directory of Open Access Journals (Sweden)
Tarek Chehade
2015-01-01
In multiple-input multiple-output (MIMO) transmission systems, the channel state information (CSI) at the transmitter can be used to add linear precoding to the transmitted signals in order to improve the performance and the reliability of the transmission system. This paper investigates how to properly combine precoded closed-loop MIMO systems and nonbinary low density parity check (NB-LDPC) codes. The q elements in the Galois field GF(q) are directly mapped to q transmit symbol vectors. This allows NB-LDPC codes to fit perfectly with a MIMO precoding scheme, unlike binary LDPC codes. The new transmission model is detailed and studied for several linear precoders and various designed LDPC codes. We show that NB-LDPC codes are particularly well suited to being used jointly with precoding schemes based on the maximization of the minimum Euclidean distance (max-dmin criterion). These results are theoretically supported by extrinsic information transfer (EXIT) analysis and are confirmed by numerical simulations.
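The max-dmin criterion above maximizes the minimum Euclidean distance between precoded symbol vectors. Computing that distance for a given precoder and constellation is straightforward by enumeration; the identity precoder and BPSK alphabet below are illustrative assumptions, not the paper's setup.

```python
# Minimum Euclidean distance of a precoded constellation, by enumeration.
# Illustrates the quantity the max-dmin criterion maximizes over precoders.
from itertools import product

def min_distance(precoder, constellation, n_streams):
    """Smallest distance between distinct precoded symbol vectors."""
    vectors = []
    for sym in product(constellation, repeat=n_streams):
        v = tuple(sum(precoder[r][c] * sym[c] for c in range(n_streams))
                  for r in range(len(precoder)))
        vectors.append(v)
    best = float("inf")
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            d = sum(abs(a - b) ** 2 for a, b in zip(vectors[i], vectors[j])) ** 0.5
            if d < best:
                best = d
    return best
```

A max-dmin design would search over candidate precoders (subject to a power constraint) for the one maximizing this value; complex constellations work unchanged since `abs` handles complex numbers.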
International Nuclear Information System (INIS)
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2015-01-01
Highlights: • Using high-resolution spatial scheme in solving two-phase flow problems. • Fully implicit time integration schemes. • Jacobian-free Newton–Krylov method. • Analytical solution for two-phase water faucet problem. - Abstract: The majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many nuclear thermal–hydraulics applications, it is desirable to use higher-order numerical schemes to reduce numerical errors. High-resolution spatial discretization schemes provide high-order spatial accuracy in smooth regions and capture sharp spatial discontinuities without nonphysical spatial oscillations. In this work, we adapted an existing high-resolution spatial discretization scheme on staggered grids to two-phase flow applications. Fully implicit time integration schemes were also implemented to reduce numerical errors from operator-splitting types of time integration schemes. The resulting nonlinear system has been successfully solved using the Jacobian-free Newton–Krylov (JFNK) method. The high-resolution spatial discretization and high-order fully implicit time integration numerical schemes were tested and numerically verified for several two-phase test problems, including a two-phase advection problem, a two-phase advection with phase appearance/disappearance problem, and the water faucet problem. Numerical results clearly demonstrated the advantages of using such high-resolution spatial and high-order temporal numerical schemes to significantly reduce numerical diffusion and thereby improve accuracy. Our study also demonstrated that the JFNK method is stable and robust in solving two-phase flow problems, even when phase appearance/disappearance exists.
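The JFNK method mentioned above exploits the fact that Krylov solvers never need the Jacobian matrix itself, only Jacobian-vector products, which can be approximated by a finite difference of the nonlinear residual. A minimal generic sketch of that core trick (not the code described in the abstract; `eps` and the quadratic test residual are illustrative choices):

```python
def jfnk_matvec(F, u, v, eps=1e-7):
    """Jacobian-free approximation of the Jacobian-vector product J(u)*v:
    J*v ~ (F(u + eps*v) - F(u)) / eps."""
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(Fp, Fu)]

# Hypothetical residual F(u) = [u0^2 - 2, u0*u1 - 1] with known Jacobian
# J = [[2*u0, 0], [u1, u0]], so at u = [1.5, 0.5], v = [1, 2]:
# exact J*v = [3.0, 3.5].
F = lambda u: [u[0] ** 2 - 2.0, u[0] * u[1] - 1.0]
u, v = [1.5, 0.5], [1.0, 2.0]
Jv = jfnk_matvec(F, u, v)
```

A Krylov method (e.g. GMRES) built on this matvec solves the Newton step without ever forming or storing J.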
Bolea, Mario; Mora, José; Ortega, Beatriz; Capmany, José
2013-11-18
We present a high-order UWB pulse generator based on a microwave photonic filter which provides a set of positive and negative samples by using the slicing of an incoherent optical source and the phase inversion in a Mach-Zehnder modulator. The simple scalability and high reconfigurability of the system permit a better accomplishment of the FCC requirements. Moreover, the proposed scheme permits an easy adaptation to pulse amplitude modulation, bi-phase modulation, pulse shape modulation and pulse position modulation. The flexibility of the scheme in adapting to multilevel modulation formats makes it possible to increase the transmission bit rate by using hybrid modulation formats.
Universal scaling in sports ranking
International Nuclear Information System (INIS)
Deng Weibing; Li Wei; Cai Xu; Bulou, Alain; Wang Qiuping A
2012-01-01
Ranking is a ubiquitous phenomenon in human society. On the web pages of Forbes, one may find all kinds of rankings, such as the world's most powerful people, the world's richest people, the highest-earning tennis players, and so on. Herewith, we study a specific kind: sports ranking systems in which players' scores and/or prize money are accrued based on their performances in different matches. By investigating 40 data samples which span 12 different sports, we find that the distributions of scores and/or prize money follow universal power laws, with exponents nearly identical for most sports. In order to understand the origin of this universal scaling we focus on the tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player tops the lower-ranked opponent increases with the rank difference between the pair. Such a dependence can be well fitted by a sigmoidal function. By using this feature, we propose a simple toy model which can simulate the competition of players in different matches. The simulations yield results consistent with the empirical findings. Extensive simulation studies indicate that the model is quite robust with respect to modifications of some parameters. (paper)
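A toy model of the kind described above can be sketched in a few lines: a sigmoidal win probability in the rank difference drives simulated matches, and points accrue to the winners. All parameter values here (`k`, the numbers of players and matches) are illustrative assumptions, not values from the paper:

```python
import math
import random

def win_prob(rank_a, rank_b, k=0.05):
    """Sigmoidal probability that the better-ranked player A (lower rank
    number) beats B; grows with the rank difference. k is hypothetical."""
    return 1.0 / (1.0 + math.exp(-k * (rank_b - rank_a)))

def simulate(n_players=100, n_matches=20000, seed=1):
    random.seed(seed)
    scores = [0] * n_players            # points accrued per player
    for _ in range(n_matches):
        a, b = random.sample(range(n_players), 2)
        if a > b:
            a, b = b, a                 # a is the better-ranked player
        winner = a if random.random() < win_prob(a, b) else b
        scores[winner] += 1             # winner accrues one point
    return sorted(scores, reverse=True)

scores = simulate()
```

Plotting `scores` against rank on log-log axes is the kind of diagnostic the paper uses to detect power-law behavior.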
Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images
Directory of Open Access Journals (Sweden)
Barni Mauro
2007-01-01
Full Text Available This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, the results obtained so far on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.
JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL
Institute of Scientific and Technical Information of China (English)
Xiao Jiang; Wu Chengke
2005-01-01
In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced that modifies the slope of the rate-distortion curve. The novelty is that the visual weighting is not applied by lifting the coefficients in the wavelet domain, but is implemented through code stream organization. The scheme retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution-progressive coding, good robustness against error bit spread, and compatibility with lossless compression. Performing better than other methods, it keeps the shortest standard codestream and decompression time and supports VIsual Progressive (VIP) coding.
Review of Rateless-Network-Coding-Based Packet Protection in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
A. S. Abdullah
2015-01-01
Full Text Available In recent times, there have been many developments in wireless sensor network (WSN) technologies using coding theory. Fast and efficient protection schemes for data transfer over the WSN are among the open issues in coding theory. This paper reviews the issues related to the application of joint rateless-network coding (RNC) within the WSN in the context of packet protection. RNC is a method in which any node in the network is allowed to encode and decode the transmitted data in order to construct a robust network, improve network throughput, and decrease delays. To the best of our knowledge, there has been no comprehensive discussion of RNC. To begin with, this paper briefly describes the concept of packet protection using network coding and rateless codes. We then discuss the applications of RNC for improving the capability of packet protection. Several works related to this issue are discussed. Finally, the paper concludes that the RNC-based packet protection scheme is able to improve the packet reception rate and suggests future studies to enhance the capability of RNC protection.
Compression and channel-coding algorithms for high-definition television signals
Alparone, Luciano; Benelli, Giuliano; Fabbri, A. F.
1990-09-01
In this paper, results of investigations into the effects of channel errors on the transmission of images compressed by techniques based on the Discrete Cosine Transform (DCT) and Vector Quantization (VQ) are presented. Since compressed images are heavily degraded by noise in the transmission channel, more seriously so for VQ-coded images, theoretical studies and simulations are presented in order to define and evaluate this degradation. Some channel coding schemes are proposed in order to protect the information during transmission. Hamming (7,4), (15,11) and (31,26) codes have been used for DCT-compressed images, and more powerful codes such as the Golay (23,12) code for VQ-compressed images. Performances attainable with soft-decoding techniques are also evaluated; better-quality images have been obtained than with classical hard-decoding techniques. All tests have been carried out by simulating the transmission of a digital image from an HDTV signal over an AWGN channel with PSK modulation.
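As an illustration of the kind of channel protection discussed above, here is the textbook Hamming (7,4) construction (parity bits at positions 1, 2 and 4), which corrects any single flipped bit via the syndrome. This is a generic sketch, not the authors' implementation:

```python
def hamming74_encode(data):
    """Encode 4 data bits into a (7,4) Hamming codeword, positions 1..7
    with parity bits at positions 1, 2 and 4."""
    c = [0] * 8                         # index 0 unused for clarity
    c[3], c[5], c[6], c[7] = data
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def hamming74_correct(word):
    """Correct at most one flipped bit; the syndrome equals the 1-based
    position of the error, or 0 if the word is consistent."""
    c = [0] + list(word)
    s = (c[1] ^ c[3] ^ c[5] ^ c[7]) * 1 \
      + (c[2] ^ c[3] ^ c[6] ^ c[7]) * 2 \
      + (c[4] ^ c[5] ^ c[6] ^ c[7]) * 4
    if s:
        c[s] ^= 1                       # flip the erroneous bit back
    return [c[3], c[5], c[6], c[7]], s

cw = hamming74_encode([1, 0, 1, 1])
cw[2] ^= 1                              # inject a single channel error
decoded, pos = hamming74_correct(cw)
```

The longer (15,11) and (31,26) codes follow the same pattern with parity bits at all power-of-two positions.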
A Statistically-Hiding Integer Commitment Scheme Based on Groups with Hidden Order
DEFF Research Database (Denmark)
Damgård, Ivan Bjerre; Fujisaki, Eiichiro
2002-01-01
We present a statistically-hiding commitment scheme allowing commitment to arbitrary size integers, based on any (Abelian) group with certain properties, most importantly, that it is hard for the committer to compute its order. We also give efficient zero-knowledge protocols for proving knowledge...... input is chosen by the (possibly cheating) prover. - - Our results apply to any group with suitable properties. In particular, they apply to a much larger class of RSA moduli than the safe prime products proposed in [14] - Potential examples include RSA moduli, class groups and, with a slight...
Freudenthal ranks: GHZ versus W
International Nuclear Information System (INIS)
Borsten, L
2013-01-01
The Hilbert space of three-qubit pure states may be identified with a Freudenthal triple system. Every state has a unique Freudenthal rank ranging from 1 to 4, which is determined by a set of automorphism group covariants. It is shown here that the optimal success rates for winning a three-player non-local game, varying over all local strategies, are strictly ordered by the Freudenthal rank of the shared three-qubit resource. (paper)
Differential invariants for higher-rank tensors. A progress report
International Nuclear Information System (INIS)
Tapial, V.
2004-07-01
We outline the construction of differential invariants for higher-rank tensors. In section 2 we outline the general method for the construction of differential invariants. A first result is that the simplest tensor differential invariant contains derivatives of the same order as the rank of the tensor. In section 3 we review the construction for the first-rank tensors (vectors) and second-rank tensors (metrics). In section 4 we outline the same construction for higher-rank tensors. (author)
Balsara, Dinshaw S.; Dumbser, Michael
2015-10-01
Several advances have been reported in the recent literature on divergence-free finite volume schemes for Magnetohydrodynamics (MHD). Almost all of these advances are restricted to structured meshes. To retain full geometric versatility, however, it is also very important to make analogous advances in divergence-free schemes for MHD on unstructured meshes. Such schemes utilize a staggered Yee-type mesh, where all hydrodynamic quantities (mass, momentum and energy density) are cell-centered, while the magnetic fields are face-centered and the electric fields, which are so useful for the time update of the magnetic field, are centered at the edges. Three important advances are brought together in this paper in order to make it possible to have high order accurate finite volume schemes for the MHD equations on unstructured meshes. First, it is shown that a divergence-free WENO reconstruction of the magnetic field can be developed for unstructured meshes in two and three space dimensions using a classical cell-centered WENO algorithm, without the need to do a WENO reconstruction for the magnetic field on the faces. This is achieved via a novel constrained L2-projection operator that is used in each time step as a postprocessor of the cell-centered WENO reconstruction so that the magnetic field becomes locally and globally divergence free. Second, it is shown that recently-developed genuinely multidimensional Riemann solvers (called MuSIC Riemann solvers) can be used on unstructured meshes to obtain a multidimensionally upwinded representation of the electric field at each edge. Third, the above two innovations work well together with a high order accurate one-step ADER time stepping strategy, which requires the divergence-free nonlinear WENO reconstruction procedure to be carried out only once per time step. The resulting divergence-free ADER-WENO schemes with MuSIC Riemann solvers give us an efficient and easily-implemented strategy for divergence-free MHD on
Directory of Open Access Journals (Sweden)
Hua Wang
2016-01-01
Full Text Available This paper proposes a new fractional-order approach for synchronization of a class of fractional-order chaotic systems in the presence of model uncertainties and external disturbances. A simple but practical method to synchronize many familiar fractional-order chaotic systems has been put forward. A new theorem is proposed for a class of cascade fractional-order systems and it is applied in chaos synchronization. Combined with the fact that the states of the fractional chaotic systems are bounded, many coupling terms can be treated as zero. Then, the whole system can be simplified greatly and a simpler controller can be derived. Finally, the validity of the presented scheme is illustrated by numerical simulations of the fractional-order unified system.
Finite difference schemes for second order systems describing black holes
International Nuclear Information System (INIS)
Motamed, Mohammad; Kreiss, H-O.; Babiuc, M.; Winicour, J.; Szilagyi, B.
2006-01-01
In the harmonic description of general relativity, the principal part of Einstein's equations reduces to 10 curved space wave equations for the components of the space-time metric. We present theorems regarding the stability of several evolution-boundary algorithms for such equations when treated in second order differential form. The theorems apply to a model black hole space-time consisting of a spacelike inner boundary excising the singularity, a timelike outer boundary and a horizon in between. These algorithms are implemented as stable, convergent numerical codes and their performance is compared in a 2-dimensional excision problem
Ranking of risk significant components for the Davis-Besse Component Cooling Water System
International Nuclear Information System (INIS)
Seniuk, P.J.
1994-01-01
Utilities that run nuclear power plants are responsible for testing the pumps and valves, as specified by the American Society of Mechanical Engineers (ASME), that are required for safe shutdown, for mitigating the consequences of an accident, and for maintaining the plant in a safe condition. These inservice components are tested according to ASME Codes: either the earlier requirements of the ASME Boiler and Pressure Vessel Code, Section XI, or the more recent requirements of the ASME Operation and Maintenance Code, Section IST. These codes dictate test techniques and frequencies regardless of the component failure rate or the significance of failure consequences. A probabilistic risk assessment or probabilistic safety assessment may be used to evaluate component importance for inservice test (IST) risk ranking, which combines failure rate and failure consequences. Component testing during the normal quarterly verification test or postmaintenance test is expensive. Normal quarterly testing may cause component unavailability, and outage testing may increase outage cost with no real benefit. This paper identifies the importance ranking of risk significant components in the Davis-Besse component cooling water system. Identifying the ranking of these risk significant IST components adds technical insight for developing the appropriate test technique and test frequency.
Ranking accounting, banking and finance journals: A note
Halkos, George; Tzeremes, Nickolaos
2012-01-01
This paper by applying Data Envelopment Analysis (DEA) ranks Economics journals in the field of Accounting, Banking and Finance. By using one composite input and one composite output the paper ranks 57 journals. In addition for the first time three different quality ranking reports have been incorporated to the DEA modelling problem in order to classify the journals into four categories (‘A’ to ‘D’). The results reveal that the journals with the highest rankings in the field are Journal of Fi...
Instantly Decodable Network Coding: From Centralized to Device-to-Device Communications
Douik, Ahmed S.
2015-05-01
From its introduction to its quindecennial, network coding has built a strong reputation for enhancing the packet recovery process and achieving maximum information flow in both wired and wireless networks. Traditional studies focused on optimizing the throughput of the network by proposing complex schemes that achieve optimal delay. With the shift toward distributed computing at mobile devices, throughput and complexity have both become critical factors that affect the efficiency of a coding scheme. Instantly decodable network coding has imposed itself as a new paradigm in network coding that trades off these two aspects. This paper presents a survey of the instantly decodable network coding schemes proposed in the literature. The various schemes are identified, categorized and evaluated. Two categories can be distinguished, namely the conventional centralized schemes and the distributed or cooperative schemes. For each scheme, the comparison is carried out in terms of reliability, performance, complexity and packet selection methodology. Although performance is generally inversely proportional to computational complexity, numerous schemes successful from both the performance and complexity viewpoints are identified.
Energy Technology Data Exchange (ETDEWEB)
Zhuang, Xiahai, E-mail: zhuangxiahai@sjtu.edu.cn; Qian, Xiaohua [SJTU-CU International Cooperative Research Center, Department of Engineering Mechanics, School of Naval Architecture Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Bai, Wenjia; Shi, Wenzhe; Rueckert, Daniel [Biomedical Image Analysis Group, Department of Computing, Imperial College London, 180 Queens Gate, London SW7 2AZ (United Kingdom); Song, Jingjing; Zhan, Songhua [Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, Shanghai 201203 (China); Lian, Yanyun [Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210 (China)
2015-07-15
Purpose: Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance are limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Methods: Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases were selected for label fusion, according to the authors’ proposed atlas ranking criterion which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. Results: The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve
Ranking health between countries in international comparisons
DEFF Research Database (Denmark)
Brønnum-Hansen, Henrik
2014-01-01
Cross-national comparisons and ranking of summary measures of population health sometimes give rise to inconsistent and diverging conclusions. In order to minimise confusion, international comparative studies ought to be based on well-harmonised data with common standards of definitions and docum......
High dynamic range coding imaging system
Wu, Renfan; Huang, Yifan; Hou, Guangqi
2014-10-01
We present a high dynamic range (HDR) imaging system design scheme based on the coded aperture technique. This scheme can help us obtain HDR images with extended depth of field. We adopt a sparse coding algorithm to design the coded patterns. Then we utilize the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images are reconstructed. We use existing algorithms to fuse those LDR images into an HDR image for display. We build an optical simulation model and obtain simulation images to verify the novel system.
Row Reduction Applied to Decoding of Rank Metric and Subspace Codes
DEFF Research Database (Denmark)
Puchinger, Sven; Nielsen, Johan Sebastian Rosenkilde; Li, Wenhui
2017-01-01
We show that decoding of ℓ-Interleaved Gabidulin codes, as well as list-ℓ decoding of Mahdavifar–Vardy (MV) codes can be performed by row reducing skew polynomial matrices. Inspired by row reduction of F[x] matrices, we develop a general and flexible approach of transforming matrices over skew...... polynomial rings into a certain reduced form. We apply this to solve generalised shift register problems over skew polynomial rings which occur in decoding ℓ-Interleaved Gabidulin codes. We obtain an algorithm with complexity O(ℓμ2) where μ measures the size of the input problem and is proportional...... to the code length n in the case of decoding. Further, we show how to perform the interpolation step of list-ℓ-decoding MV codes in complexity O(ℓn2), where n is the number of interpolation constraints....
Development and first application of an operating events ranking tool
International Nuclear Information System (INIS)
Šimić, Zdenko; Zerger, Benoit; Banov, Reni
2015-01-01
Highlights: • A method using the analytical hierarchy process for ranking operating events is developed and tested. • The method is applied to 5 years of U.S. NRC Licensee Event Reports (1453 events). • Uncertainty and sensitivity of the ranking results are evaluated. • Assessment of real events shows the potential of the method for operating experience feedback. - Abstract: Operating experience feedback is important for maintaining and improving safety and availability in nuclear power plants. Detailed investigation of all events is challenging since it requires excessive resources, especially in the case of large event databases. This paper presents an event-group ranking method to complement the analysis of individual operating events. The basis for the method is the use of an internationally accepted event characterization scheme that allows different ways of grouping and ranking events. The ranking method itself consists of implementing the analytical hierarchy process (AHP) by means of a custom-developed tool which allows events to be ranked based on ranking indexes pre-determined by expert judgment. Following the development phase, the tool was applied to analyze a complete set of 5 years of real nuclear power plant operating events (1453 events). The paper presents the potential of this ranking method to identify possible patterns throughout the event database and therefore to give additional insights into the events, as well as quantitative input for the prioritization of further, more detailed investigation of selected event groups.
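The core computation of the AHP is extracting priority weights as the principal eigenvector of a reciprocal pairwise-comparison matrix. A minimal sketch using power iteration; the judgment matrix below is hypothetical and stands in for the paper's expert-determined ranking indexes:

```python
def ahp_weights(M, iters=100):
    """Principal-eigenvector priority weights of a reciprocal
    pairwise-comparison matrix M (Saaty's AHP), via power iteration."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        w = [x / total for x in v]      # normalize each iteration
    return w

# Hypothetical judgments over three criteria: the first is 3x as
# important as the second and 5x as important as the third.
M = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 2.0],
     [1.0 / 5.0, 1.0 / 2.0, 1.0]]
w = ahp_weights(M)
```

In a full AHP application one would also compute the consistency ratio of M before trusting the weights.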
Relative Performance Information, Rank Ordering and Employee Performance: A Research Note
Kramer, S.; Maas, V.S.; van Rinsum, M.
2016-01-01
We conduct a laboratory experiment to examine whether the provision of detailed relative performance information (i.e., information about the specific performance levels of peers) affects employee performance. We also investigate how – if at all – explicit ranking of performance levels affects how
The Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE
Vandenbroucke, B.; Wood, K.
2018-04-01
We present the public Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE, which can be used to simulate the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first-order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure used to discretize the system, allowing the code to be run both as a standard fixed-grid code and as a moving-mesh code.
Distributed Video Coding: Iterative Improvements
DEFF Research Database (Denmark)
Luong, Huynh Van
Nowadays, emerging applications such as wireless visual sensor networks and wireless video surveillance are requiring lightweight video encoding with high coding efficiency and error-resilience. Distributed Video Coding (DVC) is a new coding paradigm which exploits the source statistics...... and noise modeling and also learn from the previous decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate the weaknesses of the block based SI generation and also utilizes clustering of DCT blocks to capture...... cross band correlation and increase local adaptivity in noise modeling. During decoding, the updated information is used to iteratively reestimate the motion and reconstruction in the proposed motion and reconstruction reestimation (MORE) scheme. The MORE scheme not only reestimates the motion vectors...
Efficient tensor completion for color image and video recovery: Low-rank tensor train
Bengua, Johann A.; Phien, Ho N.; Tuan, Hoang D.; Do, Minh N.
2016-01-01
This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed as well as two new algorithms for their solution. The first one called simple low-rank tensor completion via tensor tra...
Hybrid Video Coding Based on Bidimensional Matching Pursuit
Directory of Open Access Journals (Sweden)
Lorenzo Granai
2004-12-01
Full Text Available Hybrid video coding combines two stages: first, motion estimation and compensation predict each frame from the neighboring frames; then the prediction error is coded, reducing the correlation in the spatial domain. In this work, we focus on the latter stage, presenting a scheme that profits from some of the features introduced by the standard H.264/AVC for motion estimation and replaces the transform in the spatial domain. The prediction error is thus coded using the matching pursuit algorithm, which decomposes the signal over a purpose-designed bidimensional, anisotropic, redundant dictionary. Comparisons are made among the proposed technique, H.264, and a DCT-based coding scheme. Moreover, we introduce fast techniques for atom selection, which exploit the spatial localization of the atoms. An adaptive coding scheme aimed at optimizing the resource allocation is also presented, together with a rate-distortion study for the matching pursuit algorithm. Results show that the proposed scheme outperforms the standard DCT, especially at very low bit rates.
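The matching pursuit decomposition at the heart of such schemes greedily selects, at each step, the dictionary atom most correlated with the current residual and subtracts its contribution. A 1-D sketch with unit-norm atoms (the paper uses a 2-D anisotropic dictionary on prediction-error blocks; the tiny basis below is purely illustrative):

```python
def matching_pursuit(signal, dictionary, n_atoms=3):
    """Greedy matching pursuit: repeatedly pick the unit-norm atom with the
    largest inner product against the residual, record its coefficient,
    and subtract its contribution from the residual."""
    residual = list(signal)
    decomposition = []
    for _ in range(n_atoms):
        # inner products of the residual with every atom
        scores = [sum(r * a for r, a in zip(residual, atom))
                  for atom in dictionary]
        k = max(range(len(dictionary)), key=lambda i: abs(scores[i]))
        coeff = scores[k]
        decomposition.append((k, coeff))
        residual = [r - coeff * a for r, a in zip(residual, dictionary[k])]
    return decomposition, residual

# Tiny orthonormal dictionary (standard basis) and a hypothetical signal
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
decomp, res = matching_pursuit([3.0, -2.0, 0.5], atoms, n_atoms=2)
```

With a redundant (overcomplete) dictionary the same loop applies; only the atom search becomes the dominant cost, which is what the paper's fast atom-selection techniques address.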
Haussaire, J.-M.; Bocquet, M.
2015-08-01
Bocquet and Sakov (2013) have introduced a low-order model based on the coupling of the chaotic Lorenz-95 model which simulates winds along a mid-latitude circle, with the transport of a tracer species advected by this zonal wind field. This model, named L95-T, can serve as a playground for testing data assimilation schemes with an online model. Here, the tracer part of the model is extended to a reduced photochemistry module. This coupled chemistry meteorology model (CCMM), the L95-GRS model, mimics continental and transcontinental transport and the photochemistry of ozone, volatile organic compounds and nitrogen oxides. Its numerical implementation is described. The model is shown to reproduce the major physical and chemical processes being considered. L95-T and L95-GRS are specifically designed and useful for testing advanced data assimilation schemes, such as the iterative ensemble Kalman smoother (IEnKS) which combines the best of ensemble and variational methods. These models provide useful insights prior to the implementation of data assimilation methods on larger models. We illustrate their use with data assimilation schemes on preliminary, yet instructive numerical experiments. In particular, online and offline data assimilation strategies can be conveniently tested and discussed with this low-order CCMM. The impact of observed chemical species concentrations on the wind field can be quantitatively estimated. The impacts of the wind chaotic dynamics and of the chemical species non-chaotic but highly nonlinear dynamics on the data assimilation strategies are illustrated.
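The wind component of the L95-T model described above is the standard Lorenz-95/96 system. A minimal integration sketch under the usual parameter choices (F = 8, 40 variables, RK4 with dt = 0.05), omitting the tracer and chemistry couplings the paper adds:

```python
def l96_tendency(x, forcing=8.0):
    """Tendency of the Lorenz-95/96 'winds on a latitude circle' model:
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, cyclic indices."""
    n = len(x)
    return [(x[(i + 1) % n] - x[i - 2]) * x[i - 1] - x[i] + forcing
            for i in range(n)]

def rk4_step(x, dt=0.05, forcing=8.0):
    """One classical fourth-order Runge-Kutta step."""
    k1 = l96_tendency(x, forcing)
    k2 = l96_tendency([xi + 0.5 * dt * k for xi, k in zip(x, k1)], forcing)
    k3 = l96_tendency([xi + 0.5 * dt * k for xi, k in zip(x, k2)], forcing)
    k4 = l96_tendency([xi + dt * k for xi, k in zip(x, k3)], forcing)
    return [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

state = [8.0] * 40
state[19] += 0.01                       # small perturbation triggers chaos
for _ in range(100):                    # integrate to t = 5 model time units
    state = rk4_step(state)
```

This chaotic wind field is what advects the tracer in L95-T, making the coupled model a convenient testbed for data assimilation schemes such as the IEnKS.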
Image ranking in video sequences using pairwise image comparisons and temporal smoothing
CSIR Research Space (South Africa)
Burke, Michael
2016-12-01
Full Text Available The ability to predict the importance of an image is highly desirable in computer vision. This work introduces an image ranking scheme suitable for use in video or image sequences. Pairwise image comparisons are used to determine image ‘interest...
Singh, Simranjit; Kaur, Ramandeep; Singh, Amanvir; Kaler, R. S.
2015-03-01
In this paper, security of the spectrally encoded-optical code division multiplexed access (OCDMA) system is enhanced by using 2-D (orthogonal) modulation technique. This is an effective approach for simultaneous improvement of the system capacity and security. Also, the results show that the hybrid modulation technique proved to be a better option to enhance the data confidentiality at higher data rates using minimum utilization of bandwidth in a multiuser environment. Further, the proposed system performance is compared with the current state-of-the-art OCDMA schemes.
Ranking adverse drug reactions with crowdsourcing.
Gottlieb, Assaf; Hoehndorf, Robert; Dumontier, Michel; Altman, Russ B
2015-03-23
There is no publicly available resource that provides the relative severity of adverse drug reactions (ADRs). Such a resource would be useful for several applications, including assessment of the risks and benefits of drugs and improvement of patient-centered care. It could also be used to triage predictions of drug adverse events. The intent of the study was to rank ADRs according to severity. We used Internet-based crowdsourcing to rank ADRs according to severity. We assigned 126,512 pairwise comparisons of ADRs to 2589 Amazon Mechanical Turk workers and used these comparisons to rank order 2929 ADRs. There is good correlation (rho=.53) between the mortality rates associated with ADRs and their rank. Our ranking highlights severe drug-ADR predictions, such as cardiovascular ADRs for raloxifene and celecoxib. It also triages genes associated with severe ADRs such as epidermal growth-factor receptor (EGFR), associated with glioblastoma multiforme, and SCN1A, associated with epilepsy. ADR ranking lays a first stepping stone in personalized drug risk assessment. Ranking of ADRs using crowdsourcing may have useful clinical and financial implications, and should be further investigated in the context of health care decision making.
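As a toy illustration of turning pairwise severity judgments into a rank order, a simple win-fraction score can be computed; the ADR names and judgments below are invented, and real crowdsourced scaling typically uses more robust models such as Bradley-Terry:

```python
from collections import defaultdict

def rank_from_pairwise(comparisons):
    """Order items by win fraction over pairwise (winner, loser) judgments."""
    wins, total = defaultdict(int), defaultdict(int)
    for winner, loser in comparisons:
        wins[winner] += 1
        total[winner] += 1
        total[loser] += 1
    score = {item: wins[item] / total[item] for item in total}
    return sorted(score, key=score.get, reverse=True)

# Hypothetical severity judgments: each pair reads (more severe, less severe).
judgments = [("death", "nausea"), ("death", "headache"),
             ("nausea", "headache"), ("death", "nausea")]
print(rank_from_pairwise(judgments))  # → ['death', 'nausea', 'headache']
```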
Probabilistic Amplitude Shaping With Hard Decision Decoding and Staircase Codes
Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi; Steiner, Fabian
2018-05-01
We consider probabilistic amplitude shaping (PAS) as a means of increasing the spectral efficiency of fiber-optic communication systems. In contrast to previous works in the literature, we consider probabilistic shaping with hard decision decoding (HDD). In particular, we apply the PAS recently introduced by Böcherer et al. to a coded modulation (CM) scheme with bit-wise HDD that uses a staircase code as the forward error correction code. We show that the CM scheme with PAS and staircase codes yields significant gains in spectral efficiency with respect to the baseline scheme using a staircase code and a standard constellation with uniformly distributed signal points. Using a single staircase code, the proposed scheme achieves performance within 0.57-1.44 dB of the corresponding achievable information rate for a wide range of spectral efficiencies.
Nilsson, Ingemar; Polla, Magnus O
2012-10-01
Drug design is a multi-parameter task present in the analysis of experimental data for synthesized compounds and in the prediction of new compounds with desired properties. This article describes the implementation of a binned scoring and composite ranking scheme for 11 experimental parameters that were identified as key drivers in the MC4R project. The composite ranking scheme was implemented in an AstraZeneca tool for analysis of project data, thereby providing an immediate re-ranking as new experimental data was added. The automated ranking also highlighted compounds overlooked by the project team. The successful implementation of a composite ranking on experimental data led to the development of an equivalent virtual score, which was based on Free-Wilson models of the parameters from the experimental ranking. The individual Free-Wilson models showed good to high predictive power with a correlation coefficient between 0.45 and 0.97 based on the external test set. The virtual ranking adds value to the selection of compounds for synthesis but error propagation must be controlled. The experimental ranking approach adds significant value, is parameter independent and can be tuned and applied to any drug discovery project.
A 3D coarse-mesh time dependent code for nuclear reactor kinetic calculations
International Nuclear Information System (INIS)
Montagnini, B.; Raffaelli, P.; Sumini, M.; Zardini, D.M.
1996-01-01
A coarse-mesh code for time-dependent multigroup neutron diffusion calculations, based on a direct integration scheme for the time dependence and a low-order nodal flux expansion approximation for the space variables, has been implemented as a fast tool for transient analysis. (Author)
Multitasking the code ARC3D. [for computational fluid dynamics
Barton, John T.; Hsiung, Christopher C.
1986-01-01
The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce wall-clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer Navier-Stokes equations using an implicit approximate-factorization scheme. Results indicate that multitask processing can achieve wall-clock speedup factors of over three, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple-CPU computers.
Risk-ranking IST components into two categories
International Nuclear Information System (INIS)
Rowley, C.W.
1996-01-01
The ASME has utilized several schemes for identifying the appropriate scope of components for inservice testing (IST). The initial scope was ASME Code Class 1/2/3, with all components treated equally. Later the ASME Operations and Maintenance (O&M) Committee decided to use safe shutdown and accident mitigation as the scoping criteria, but continued to treat all components equal inside that scope. Recently the ASME O&M Committee decided to recognize service condition of the component, hence the comprehensive pump test. Although probabilistic risk assessments (PRAs) are incredibly complex plant models and computer hardware and software intensive, they are a tool that can be utilized by many plant engineering organizations to analyze plant system and component applications. In 1992 the ASME O&M Committee got interested in using the PRA as a tool to categorize its pumps and valves. In 1994 the ASME O&M Committee commissioned the ASME Center for Research and Technology Development (CRTD) to develop a process that adapted the PRA technology to IST. In late 1995 that process was presented to the ASME O&M Committee. The process had three distinct portions: (1) risk-rank the IST components; (2) develop a more effective testing strategy for More Safety Significant Components; and (3) develop a more economic testing strategy for Less Safety Significant Components.
Ranking periodic ordering models on the basis of minimizing total inventory cost
Directory of Open Access Journals (Sweden)
Mohammadali Keramati
2015-06-01
Full Text Available This paper aims to provide proper inventory policies under uncertain conditions by comparing different inventory policies. To review the efficiency of these algorithms it is necessary to specify the area in which each of them applies. Therefore, each of the models has been reviewed under different forms of retailing, and they are ranked in terms of their expenses. Given the high value of inventories and their impact on companies' costs, the rankings of the various models using the simulated annealing algorithm are presented, which indicates that the proposed model of this paper performs better than the alternatives. The results also indicate that the suggested algorithm can save from 4 to 29 percent of inventory costs.
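For readers unfamiliar with simulated annealing, the general loop it runs can be sketched as follows. The cost function below is a deliberately trivial stand-in, not the paper's inventory model; the parameter names and cooling schedule are illustrative assumptions:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=500):
    """Generic simulated annealing: always accept improving moves, and
    accept worsening moves with probability exp(-delta/T), where T cools
    geometrically, so the search can escape local minima early on."""
    x, best = x0, x0
    t = t0
    rng = random.Random(42)
    for _ in range(steps):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = y
            if cost(x) < cost(best):
                best = x
        t *= cooling
    return best

# Toy stand-in for an inventory-cost surface: minimize (q - 7)^2 over order sizes.
best_q = simulated_annealing(
    cost=lambda q: (q - 7) ** 2,
    neighbor=lambda q, rng: q + rng.choice([-1, 1]),
    x0=0)
```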
Pole Mass of the W Boson at Two-Loop Order in the Pure $\\overline {MS}$ Scheme
Energy Technology Data Exchange (ETDEWEB)
Martin, Stephen P. [Northern Illinois U.
2015-06-03
I provide a calculation at full two-loop order of the complex pole squared mass of the W boson in the Standard Model in the pure MS-bar renormalization scheme, with Goldstone boson mass effects resummed. This approach is an alternative to earlier ones that use on-shell or hybrid renormalization schemes. The renormalization scale dependence of the real and imaginary parts of the resulting pole mass is studied. Both deviate by about ±4 MeV from their median values as the renormalization scale is varied from 50 to 200 GeV, but the theory error is likely larger. A surprising feature of this scheme is that the two-loop QCD correction has a larger scale dependence, but a smaller magnitude, than the two-loop non-QCD correction, unless the renormalization scale is chosen very far from the top-quark mass.
Flux schemes for the two-fluid models of the trio-U code
International Nuclear Information System (INIS)
Kumbaro, A.; Seignole, V.; Ghidaglia, J.M.
2000-01-01
To solve the non-conservative system of the two-phase flow model in the TRIO-U two-phase flow module, a fully unstructured finite volume formulation is chosen, and the discretization is based on the concept of a flux scheme. Our method allows one to determine whether hyperbolicity is necessary for stable and convergent numerical computations. We discuss whether it is necessary to consider all the differential transfer terms between the two phases in the upwinding of the flux. Numerical results are presented in order to study the influence of the pressure interface term on the stability, as well as on the upwinding of the flux. (author)
Co-integration Rank Testing under Conditional Heteroskedasticity
DEFF Research Database (Denmark)
Cavaliere, Guiseppe; Rahbæk, Anders; Taylor, A.M. Robert
… null distributions of the rank statistics coincide with those derived by previous authors who assume either i.i.d. or (strict and covariance) stationary martingale difference innovations. We then propose wild bootstrap implementations of the co-integrating rank tests and demonstrate that the associated bootstrap rank statistics replicate the first-order asymptotic null distributions of the rank statistics. We show the same is also true of the corresponding rank tests based on the i.i.d. bootstrap of Swensen (2006). The wild bootstrap, however, has the important property that, unlike the i.i.d. bootstrap, it preserves in the re-sampled data the pattern of heteroskedasticity present in the original shocks. Consistent with this, numerical evidence suggests that, relative to tests based on the asymptotic critical values or the i.i.d. bootstrap, the wild bootstrap rank tests perform very well in small samples un…
Investigation on the MOC with a linear source approximation scheme in three-dimensional assembly
International Nuclear Information System (INIS)
Zhu, Chenglin; Cao, Xinrong
2014-01-01
The method of characteristics (MOC) for solving the neutron transport equation has become one of the fundamental methods for lattice calculations in nuclear design code systems. At present, MOC has three schemes for treating the neutron source of the transport equation: the flat-source approximation of the step characteristics (SC) scheme, the diamond difference (DD) scheme, and the linear source (LS) characteristics scheme. The SC and DD schemes need large storage space and long computing times when used to calculate large-scale three-dimensional neutron transport problems. In this paper, an LS scheme and its correction for negative source distributions were developed and added to the DRAGON code, and compared with the SC and DD schemes already available in that code. As an open-source code, DRAGON can solve three-dimensional assemblies with the MOC method. Detailed calculations are conducted on a two-dimensional VVER-1000 assembly under the three MOC schemes. The numerical results indicate that a coarse mesh can be used in the LS scheme with the same accuracy, and that the LS scheme implemented in DRAGON is effective and achieves the expected results. A three-dimensional cell problem and a VVER-1000 assembly are then calculated with the LS and SC schemes. The results show that the LS scheme requires less memory and shorter computation times than the SC scheme. It is concluded that, by using the LS scheme, DRAGON is able to calculate large-scale three-dimensional problems with less storage space and shorter computing time.
Capacity-Approaching Superposition Coding for Optical Fiber Links
DEFF Research Database (Denmark)
Estaran Tolosa, Jose Manuel; Zibar, Darko; Tafur Monroy, Idelfonso
2014-01-01
We report on the first experimental demonstration of superposition coded modulation (SCM) for polarization-multiplexed coherent-detection optical fiber links. The proposed coded modulation scheme is combined with phase-shifted bit-to-symbol mapping (PSM) in order to achieve geometric and passive …-SCM) is employed in the framework of bit-interleaved coded modulation with iterative decoding (BICM-ID) for forward error correction. The fiber transmission system is characterized in terms of signal-to-noise ratio for the back-to-back case and correlated with simulated results for ideal transmission over an additive white Gaussian noise channel. Thereafter, successful demodulation and decoding after dispersion-unmanaged transmission over 240-km standard single-mode fiber of dual-polarization 6-Gbaud 16-, 32- and 64-ary SCM-PSM is experimentally demonstrated.
The genetic code as a periodic table: algebraic aspects.
Bashford, J D; Jarvis, P D
2000-01-01
The systematics of indices of physico-chemical properties of codons and amino acids across the genetic code are examined. Using a simple numerical labelling scheme for nucleic acid bases, A=(-1,0), C=(0,-1), G=(0,1), U=(1,0), data can be fitted as low-order polynomials of the six coordinates in the 64-dimensional codon weight space. The work confirms and extends the recent studies by Siemion et al. (1995, BioSystems 36, 231-238) of the conformational parameters. Fundamental patterns in the data, such as codon periodicities and related harmonics and reflection symmetries, are here associated with the structure of the set of basis monomials chosen for fitting. Results are plotted using the Siemion one-step mutation ring scheme, and variants thereof. The connections between the present work and recent studies of the genetic code structure using dynamical symmetry algebras are pointed out.
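The base labelling scheme quoted in the abstract is easy to make concrete; this small sketch simply maps a codon to its six coordinates in the codon weight space (the polynomial fitting itself is not reproduced here):

```python
# Base labelling from the abstract: each base gets a pair of coordinates.
BASE = {"A": (-1, 0), "C": (0, -1), "G": (0, 1), "U": (1, 0)}

def codon_coordinates(codon):
    """Six coordinates of a codon under the simple base labelling above."""
    coords = []
    for base in codon:
        coords.extend(BASE[base])
    return tuple(coords)

print(codon_coordinates("AUG"))  # → (-1, 0, 1, 0, 0, 1)
```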
A high-order solver for aerodynamic flow simulations and comparison of different numerical schemes
Mikhaylov, Sergey; Morozov, Alexander; Podaruev, Vladimir; Troshin, Alexey
2017-11-01
An implementation of a high-order-accuracy Discontinuous Galerkin method is presented. Reconstruction is done for the conservative variables. Gradients are calculated using the BR2 method. Coordinate transformations are handled by serendipity elements. In computations with schemes of order higher than 2, the curvature of the mesh lines is taken into account. A comparison with finite volume methods is performed, including a WENO method with linear weights and a single quadrature point on a cell side. The results of the following classical tests are presented: subsonic flow around a circular cylinder in an ideal gas, convection of a two-dimensional isentropic vortex, and decay of the Taylor-Green vortex.
Quantum Communication Scheme Using Non-symmetric Quantum Channel
International Nuclear Information System (INIS)
Cao Haijing; Chen Zhonghua; Song Heshan
2008-01-01
A theoretical quantum communication scheme based on entanglement swapping and superdense coding is proposed, with a 3-dimensional Bell state and a 2-dimensional Bell state functioning as the quantum channel. Quantum key distribution and quantum secure direct communication can be accomplished simultaneously in the scheme. The scheme is secure and has high source capacity. Finally, we generalize the quantum communication scheme to a d-dimensional quantum channel.
International Nuclear Information System (INIS)
Clarisse, J.M.
2007-01-01
A numerical scheme for computing linear Lagrangian perturbations of spherically symmetric flows of gas dynamics is proposed. This explicit first-order scheme uses the Roe method in Lagrangian coordinates for computing the radial spherically symmetric mean flow, and its linearized version for treating the three-dimensional linear perturbations. Fulfillment of the geometric conservation law discrete formulations for both the mean flow and its perturbation is ensured. The scheme's capabilities are illustrated by the computation of free-surface mode evolutions at the boundaries of a spherical hollow shell undergoing a homogeneous cumulative compression, showing excellent agreement with reference results. (author)
Setiawan, B B
2002-01-01
The settlement along the bank of the Code River in Yogyakarta, Indonesia provides housing for a large mass of the city's poor. Its strategic location and the fact that most urban poor do not have access to land, attracts people to "illegally" settle along the bank of the river. This brings negative consequences for the environment, particularly the increasing domestic waste along the river and the annual flooding in the rainy season. While the public controversies regarding the existence of the settlement along the Code River were still not resolved, at the end of the 1980s, a group of architects, academics and community members proposed the idea of constructing a dike along the River as part of a broader settlement improvement program. From 1991 to 1998, thousands of local people mobilized their resources and were able to construct 6,000 metres of riverside dike along the Code River. The construction of the riverside dike along the River has become an important "stimulant" that generated not only settlement improvement, but also a better treatment of river water. As all housing units located along the River are now facing the River, the River itself is considered the "front-yard". Before the dike was constructed, the inhabitants used to treat the River as the "backyard" and therefore just throw waste into the River. They now really want to have a cleaner river, since the River is an important part of their settlement. The settlement along the Code River presents a complex range of persistent problems with informal settlements in Indonesia; such problems are related to the issues of how to provide more affordable and adequate housing for the poor, while at the same time, to improve the water quality of the river. The project represents a good case, which shows that through a mutual partnership among stakeholders, it is possible to integrate environmental goals into urban redevelopment schemes.
A Ranking Method for Evaluating Constructed Responses
Attali, Yigal
2014-01-01
This article presents a comparative judgment approach for holistically scored constructed response tasks. In this approach, the grader rank orders (rather than rate) the quality of a small set of responses. A prior automated evaluation of responses guides both set formation and scaling of rankings. Sets are formed to have similar prior scores and…
Design of ACM system based on non-greedy punctured LDPC codes
Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng
2017-08-01
In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes were constructed by a non-greedy puncturing method which shows good performance in the high-code-rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. Under this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is reduced. Simulations show that increasingly significant coding gains can be obtained by the proposed ACM system with higher throughput.
PageRank, HITS and a unified framework for link analysis
Energy Technology Data Exchange (ETDEWEB)
Ding, Chris; He, Xiaofeng; Husbands, Parry; Zha, Hongyuan; Simon, Horst
2001-10-01
Two popular webpage ranking algorithms are HITS and PageRank. HITS emphasizes mutual reinforcement between authority and hub webpages, while PageRank emphasizes hyperlink weight normalization and web surfing based on random walk models. We systematically generalize/combine these concepts into a unified framework. The ranking framework contains a large algorithm space; HITS and PageRank are two extreme ends in this space. We study several normalized ranking algorithms which are intermediate between HITS and PageRank, and obtain closed-form solutions. We show that, to first-order approximation, all ranking algorithms in this framework, including PageRank and HITS, lead to the same ranking, which is highly correlated with ranking by indegree. These results support the notion that in web resource ranking indegree and outdegree are of fundamental importance. Rankings of webgraphs of different sizes and queries are presented to illustrate our analysis.
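One end of the algorithm space discussed here, PageRank, can be sketched with a minimal power iteration; the three-page graph below is illustrative only, and the damping factor 0.85 is the conventional default rather than anything prescribed by this abstract:

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-12):
    """Power iteration for PageRank: the random surfer follows a link with
    probability `damping`, otherwise teleports to a uniformly random page."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    P = adj / np.where(out == 0, 1, out)   # row-stochastic transitions (no dangling pages here)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - damping) / n + damping * (r @ P)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Pages 0 and 1 both link to page 2, so page 2 should rank highest,
# consistent with the correlation between PageRank and indegree noted above.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
r = pagerank(A)
```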
BANDWIDTH-EFFICIENT ENCODING SCHEME COMBINING TCM-UGM TO STBC
ABDELMOUNAIM MOULAY LAKHDAR; MOHAMMED BELADGHAM; ABDESSELAM BASSOU,; MOHAMED BENAISSA
2011-01-01
In this paper, a bandwidth-efficient encoding scheme is proposed. It combines a modified version of trellis coded modulation (called trellis coded modulation with Ungerboeck-Gray mapping, TCM-UGM) with space-time block coding (STBC). The performance of this encoding scheme is investigated over a memoryless Rayleigh fading (MRF) channel for a throughput of 2 bits/s/Hz. The simulation result, using a 2/3-rate 16-state TCM-UGM encoder, two transmit antennas and two receive antennas, shows clearly that the ...
Ren, Danping; Wu, Shanshan; Zhang, Lijing
2016-09-01
In view of the global control and flexible monitoring characteristics of software-defined networks (SDN), we propose a new optical access network architecture dedicated to Wavelength Division Multiplexing Passive Optical Network (WDM-PON) systems based on SDN. Network coding (NC) technology is also applied in this architecture to enhance the utilization of wavelength resources and reduce light-source costs. Simulation results show that this scheme can optimize the throughput of the WDM-PON network and greatly reduce system time delay and energy consumption.
Ranking Performance Measures in Multi-Task Agencies
DEFF Research Database (Denmark)
Christensen, Peter Ove; Sabac, Florin; Tian, Joyce
2010-01-01
We derive sufficient conditions for ranking performance evaluation systems in multi-task agency models (using both optimal and linear contracts) in terms of a second-order stochastic dominance (SSD) condition on the likelihood ratios. The SSD condition can be replaced by a variance-covariance matrix condition …
THE McELIECE CRYPTOSYSTEM WITH ARRAY CODES
Directory of Open Access Journals (Sweden)
Vedat Şiap
2011-12-01
Full Text Available Public-key cryptosystems form an important part of cryptography. In these systems, every user has a public and a private key. The public key allows other users to encrypt messages, which can only be decoded using the secret private key. In that way, public-key cryptosystems allow easy and secure communication between all users without the need to actually meet and exchange keys. One such system is the McEliece public-key cryptosystem, sometimes also called the McEliece scheme. As we live in the information age, coding is used to protect and correct messages during transmission and storage, so linear codes are important in both settings. Owing to the richness of their structure, array codes, which are linear, are also important: their increased error-correction capability allows information to be transferred more securely. In this paper, we combine two interesting topics, the McEliece cryptosystem and array codes.
Bi-level image compression with tree coding
DEFF Research Database (Denmark)
Martins, Bo; Forchhammer, Søren
1996-01-01
Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code-length calculations properly, a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders are constructed by this principle. A multi-pass free tree coding scheme produces superior compression results for all test images. A multi-pass fast free-template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm 'Context' is presented in a new …
Yuan, Hao; Zhang, Qin; Hong, Liang; Yin, Wen-jie; Xu, Dong
2014-08-01
We present a novel scheme for deterministic secure quantum communication (DSQC) over a collective rotating noisy channel. Four special two-qubit states are found to constitute a noise-free subspace, and so are utilized as quantum information carriers. In this scheme, the information carriers are transmitted over the quantum channel only once, which can effectively reduce the influence of other noise existing in the quantum channel. The information receiver need only perform two single-photon collective measurements to decode the secret messages, which makes the present scheme more convenient for practical application. It will be shown that our scheme has a relatively high information capacity and intrinsic efficiency. Most importantly, the decoy-photon-pair checking technique and the order rearrangement of photon pairs guarantee that the present scheme is unconditionally secure.
Gershenson, Carlos
Studies of rank distributions have been popular for decades, especially since the work of Zipf. For example, if we rank words of a given language by use frequency (most used word in English is 'the', rank 1; second most common word is 'of', rank 2), the distribution can be approximated roughly with a power law. The same applies for cities (most populated city in a country ranks first), earthquakes, metabolism, the Internet, and dozens of other phenomena. We recently proposed "rank diversity" to measure how ranks change in time, using the Google Books Ngram dataset. Studying six languages between 1800 and 2009, we found that the rank diversity curves of languages are universal, adjusted with a sigmoid on log-normal scale. We are studying several other datasets (sports, economies, social systems, urban systems, earthquakes, artificial life). Rank diversity seems to be universal, independently of the shape of the rank distribution. I will present our work in progress towards a general description of the features of rank change in time, along with simple models which reproduce it.
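The Zipf rank-frequency relation mentioned above is simple to check numerically: if frequency is proportional to 1/rank, the log-log slope between successive (rank, frequency) points is close to -1. The word counts below are fabricated to follow the law, purely for illustration:

```python
import math

# Fabricated counts following Zipf's law: frequency ∝ 1/rank (ranks 1..5).
counts = [1000, 500, 333, 250, 200]

# Log-log slope between successive rank-frequency points; Zipf predicts ≈ -1.
slopes = [
    (math.log(counts[i + 1]) - math.log(counts[i]))
    / (math.log(i + 2) - math.log(i + 1))
    for i in range(len(counts) - 1)
]
```

Real corpora only follow the law approximately, which is precisely why measures such as rank diversity, tracking how ranks move over time rather than the static distribution, are informative.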
General data analysis code for TDCR liquid scintillation counting
Energy Technology Data Exchange (ETDEWEB)
Rodrigues, D. [Laboratorio de Metrologia de Radioisotopos, Comision Nacional de Energia Atomica, Buenos Aires (Argentina)], E-mail: drodrigu@cae.cnea.gov.ar; Arenillas, P.; Capoulat, M.E.; Balpardo, C. [Laboratorio de Metrologia de Radioisotopos, Comision Nacional de Energia Atomica, Buenos Aires (Argentina)
2008-06-15
A non-radionuclide-specific computer code to analyze data, calculate detection efficiency and activity in a TDCR system is presented. The program was developed prioritizing flexibility in measuring conditions, parameters and calculation models. It is also intended to be well structured in order to easily replace subroutines which could eventually be improved by the user. It is written in standard FORTRAN language but a graphical interface is also available. Several tests were performed to check the ability of the code to deal with different decay schemes such as H-3, C-14, Fe-55, Mn-54 and Co-60.
High-order discrete ordinate transport in hexagonal geometry: A new capability in ERANOS
International Nuclear Information System (INIS)
Le Tellier, R.; Suteau, C.; Fournier, D.; Ruggieri, J.M.
2010-01-01
This paper presents the implementation of an arbitrary-order discontinuous Galerkin scheme within the framework of a discrete ordinates solver of the neutron transport equation for nuclear reactor calculations. More precisely, it deals with non-conforming spatial meshes for the 2D and 3D modeling of core geometries based on hexagonal assemblies. This work aims at improving the capabilities of the ERANOS code system dedicated to fast reactor analysis and design. Both the angular quadrature and the spatial scheme peculiarities for hexagonal geometries are presented. A particular focus is placed on the spatial non-conforming mesh and variable-order capabilities of this scheme in anticipation of the development of spatial adaptiveness algorithms. These features are illustrated on a 3D numerical benchmark with comparison to a Monte Carlo reference, and on a 2D benchmark that shows the potential of this scheme for both h- and p-adaptation.
An Authenticated Key Agreement Scheme Based on Cyclic Automorphism Subgroups of Random Orders
Directory of Open Access Journals (Sweden)
Yang Jun
2017-01-01
Group-based cryptography is viewed as a modern cryptographic candidate for resisting quantum-computer attacks, and key exchange protocols on the Internet are one of the primitives used to ensure the security of communication. In 2016, Habeeb et al. proposed a "textbook" key exchange protocol based on the semidirect product of two groups, which is insecure for use in real-world applications. In this paper, after discarding the unnecessary disguise of the semidirect product in the protocol, we establish a simplified yet enhanced authenticated key agreement scheme based on cyclic automorphism subgroups of random orders, making hybrid use of certificates and symmetric-key encryption as challenges and responses in the public-key setting. Its passive security is formally analyzed relative to the cryptographic hardness assumption of a computational number-theoretic problem. Cryptanalysis of this scheme shows that it is secure against the intruder-in-the-middle attack even in the worst case of compromised signatures, and that it provides explicit key confirmation to both parties.
Energy Technology Data Exchange (ETDEWEB)
Amano, Takanobu, E-mail: amano@eps.s.u-tokyo.ac.jp [Department of Earth and Planetary Science, University of Tokyo, 113-0033 (Japan)
2016-11-01
A new multidimensional simulation code for relativistic two-fluid electrodynamics (RTFED) is described. The basic equations consist of the full set of Maxwell's equations coupled with relativistic hydrodynamic equations for two separate charged fluids, representing the dynamics of either an electron–positron or an electron–proton plasma. It can be regarded as an extension of conventional relativistic magnetohydrodynamics (RMHD). Finite resistivity may be introduced as a friction between the two species, which reduces to resistive RMHD in the long-wavelength limit without suffering from a singularity at infinite conductivity. A numerical scheme based on the HLL (Harten–Lax–van Leer) Riemann solver is proposed that exactly preserves the two divergence constraints of Maxwell's equations simultaneously. Several benchmark problems demonstrate that the code is capable of describing RMHD shocks/discontinuities in the long-wavelength limit, as well as the dispersive characteristics due to two-fluid effects that appear at small scales. This shows that the RTFED model is a promising tool for high-energy astrophysics applications.
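As background, the single-state HLL flux that this family of solvers builds on can be sketched for a scalar conservation law. This toy version, using Burgers' equation with crude min/max wave-speed estimates, is an illustration of the HLL idea only, not the paper's relativistic two-fluid solver:

```python
def hll_flux(uL, uR, f, sL, sR):
    """HLL numerical flux from left/right states and wave-speed estimates."""
    if sL >= 0.0:   # all waves move right: pure upwind from the left
        return f(uL)
    if sR <= 0.0:   # all waves move left: upwind from the right
        return f(uR)
    # otherwise the intermediate (HLL average) state straddles the interface
    return (sR * f(uL) - sL * f(uR) + sL * sR * (uR - uL)) / (sR - sL)

# Burgers' equation f(u) = u^2/2 with simple speed estimates.
f = lambda u: 0.5 * u * u
uL, uR = 1.0, 2.0
flux = hll_flux(uL, uR, f, min(uL, uR), max(uL, uR))  # supersonic case: f(uL)
```

The three branches make the scheme automatically upwinded; the middle branch is the dissipative average state that lets HLL capture shocks robustly without solving the full Riemann problem.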
An Eulerian transport-dispersion model of passive effluents: the Difeul code
International Nuclear Information System (INIS)
Wendum, D.
1994-11-01
EDF R and D has decided to develop an Eulerian diffusion model that is easy to adapt to meteorological data coming from different sources: for instance, the ARPEGE code of Meteo-France or the MERCURE code of EDF. This is required in order to be able to apply the code in independent cases: a posteriori studies of accidental releases from nuclear power plants at large or medium scale, and simulation of urban pollution episodes within the ''Reactive Atmospheric Flows'' research project. For simplicity, the numerical formulation of the code is the same as that used in Meteo-France's MEDIA model. The numerical tests presented in this report show the good performance of these schemes. To illustrate the method with a concrete example, a fictitious release from Saint-Laurent has been simulated at national scale: the results of this simulation agree quite well with those of the trajectory model DIFTRA. (author). 6 figs., 4 tabs
Wavelength-Hopping Time-Spreading Optical CDMA With Bipolar Codes
Kwong, Wing C.; Yang, Guu-Chang; Chang, Cheng-Yuan
2005-01-01
Two-dimensional wavelength-hopping time-spreading coding schemes have been studied recently for supporting greater numbers of subscribers and simultaneous users than conventional one-dimensional approaches in optical code-division multiple-access (OCDMA) systems. To further improve both numbers without sacrificing performance, a new code design utilizing bipolar codes for both wavelength hopping and time spreading is studied and analyzed in this paper. A rapidly programmable, integratable hardware design for this new coding scheme, based on arrayed-waveguide gratings, is also discussed.
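For readers unfamiliar with bipolar codes, orthogonal ±1 sequences of the kind such coding schemes draw on (e.g. Walsh–Hadamard rows) can be generated and checked in a few lines. This sketch is generic background on bipolar codes, not the authors' wavelength-hopping time-spreading code design:

```python
def hadamard(n):
    """Rows of the n x n Hadamard matrix (n a power of 2): bipolar codes."""
    H = [[1]]
    while len(H) < n:
        # Sylvester construction: [[H, H], [H, -H]]
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def correlate(a, b):
    """Cross-correlation at zero shift."""
    return sum(x * y for x, y in zip(a, b))

codes = hadamard(4)
# distinct rows are orthogonal; each row autocorrelates to the code length
```

The zero cross-correlation between distinct rows is what lets a correlation receiver separate simultaneous users, which is the property the bipolar design above exploits in both the wavelength and time dimensions.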
International Nuclear Information System (INIS)
Kim, Seung Hwan; Park, Jin Kyun
2009-01-01
Since communication is an important means of exchanging information between individuals or teams, and an auxiliary means of sharing resources and information in team and group activities, effective communication is a prerequisite for building strong teamwork through a shared mental model. Unless communication is performed efficiently, the quality of the task and the performance of the team suffer. Furthermore, since communication is closely related to situation awareness during team activities, inappropriate communication causes a lack of situation awareness, intensifies tension and stress, and increases errors. According to lessons learned from several accidents that have actually occurred in nuclear power plants (NPPs), the consequences of an accident are critical and more dangerous than those in other industries. In order to improve operators' coping and operating abilities through simulation training under various off-normal conditions, the operation groups are trained regularly every 6 months in the training center of the reference NPP. The objective of this study is to suggest a modified speech act coding scheme and to elucidate the characteristics of the communication patterns in operators' conversations during abnormal situations in an NPP
Personality traits in old age: measurement and rank-order stability and some mean-level change.
Mõttus, René; Johnson, Wendy; Deary, Ian J
2012-03-01
The Lothian Birth Cohorts of 1936 and 1921 were used to study the longitudinal comparability of Five-Factor Model (McCrae & John, 1992) personality traits from ages 69 to 72 years and from ages 81 to 87 years, and cross-cohort comparability between ages 69 and 81 years. Personality was measured using the 50-item International Personality Item Pool (Goldberg, 1999). Satisfactory measurement invariance was established across time and cohorts. High rank-order stability was observed in both cohorts. Almost no mean-level change was observed in the younger cohort, whereas Extraversion, Agreeableness, Conscientiousness, and Intellect declined significantly in the older cohort. The older cohort scored higher on Agreeableness and Conscientiousness. In these cohorts, individual differences in personality traits continued to be stable even in very old age, whereas mean-level changes accelerated.
LDPC-coded orbital angular momentum (OAM) modulation for free-space optical communication.
Djordjevic, Ivan B; Arabaci, Murat
2010-11-22
An orbital angular momentum (OAM) based LDPC-coded modulation scheme suitable for use in FSO communication is proposed. We demonstrate that the proposed scheme can operate in the strong atmospheric turbulence regime and enable 100 Gb/s optical transmission while employing 10 Gb/s components. Both binary and nonbinary LDPC-coded OAM modulations are studied. In addition to providing better BER performance, the nonbinary LDPC-coded modulation reduces overall decoder complexity and latency. The nonbinary LDPC-coded OAM modulation provides a net coding gain of 9.3 dB at a BER of 10^-8. The maximum-ratio combining scheme outperforms the corresponding equal-gain combining scheme by almost 2.5 dB.
A Novel Approach for Identification and Ranking of Road Traffic Accident Hotspots
Directory of Open Access Journals (Sweden)
Zahran El-Said M.M.
2017-01-01
Road Traffic Accidents (RTAs) are known to be one of the main causes of fatalities worldwide. One useful approach to improving road safety is the identification of RTA hotspots along a road, so that they can be prioritised and treated. This paper introduces an approach based on Geographical Information Systems (GIS) to identify and prioritise RTA hotspots along a road network using historical RTA data. One particular urban road in Brunei with a historically high rate of RTAs, Jalan Gadong, was selected as a case study. Five years of historical RTA data were acquired from the relevant authorities and input into a GIS database. GIS analysis was then used to identify the spatial extent of the RTA hotspots. The RTA hotspots were ranked according to three different schemes: frequency, severity and socio-economic impact of RTAs. A composite ranking scheme was also developed to combine these schemes; this enabled the prioritisation and the development of intervention and maintenance programmes for the identified RTA hotspots. A visualisation method for the spatial distribution of RTAs within each identified hotspot was also developed to determine the riskiest road stretches within each hotspot, which is important for treatment prioritisation when limited resources are available.
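The composite-ranking idea described above can be sketched as follows. The rank-sum combination and all names here are illustrative assumptions; the paper does not publish its exact weighting:

```python
def rank_by(scores):
    """Map each hotspot to its rank under one scheme (1 = highest score)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {h: i + 1 for i, h in enumerate(ordered)}

def composite_ranking(frequency, severity, socio_economic):
    """Combine three per-scheme rankings by summing ranks
    (lower rank sum = higher treatment priority)."""
    ranks = [rank_by(frequency), rank_by(severity), rank_by(socio_economic)]
    total = {h: sum(r[h] for r in ranks) for h in frequency}
    return sorted(total, key=lambda h: total[h])

# Toy data for three hotspots along a road.
freq  = {"H1": 40, "H2": 25, "H3": 10}   # accident counts
sev   = {"H1": 3,  "H2": 9,  "H3": 5}    # severity score
socio = {"H1": 7,  "H2": 8,  "H3": 2}    # socio-economic impact score
priority = composite_ranking(freq, sev, socio)
```

Here H2 ranks first overall despite not having the highest frequency, because it dominates the severity and socio-economic schemes — the kind of trade-off a composite scheme is designed to surface.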
User's manual for the G.T.M.-1 computer code
International Nuclear Information System (INIS)
Prado-Herrero, P.
1992-01-01
This document describes the GTM-1 (Geosphere Transport Model, release 1) computer code and is intended to provide the reader with enough detailed information to use the code. GTM-1 was developed for the assessment of radionuclide migration by groundwater through geologic deposits whose properties can change along the pathway. GTM-1 solves the transport equation by the finite difference method (Crank-Nicolson scheme). It was developed for specific use within Probabilistic System Assessment (PSA) Monte Carlo codes; in this context the first application of GTM-1 was within the LISA (Long Term Isolation System Assessment) code. GTM-1 is also available as an independent model, which includes various submodels simulating a multi-barrier disposal system. The code has been tested with the PSACOIN (Probabilistic System Assessment Codes Intercomparison) benchmark exercises from the PSAC User Group (OECD/NEA). 10 refs., 6 Annex., 2 tabs
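The Crank-Nicolson finite-difference approach mentioned above can be sketched for a bare one-dimensional diffusion step. This is a generic textbook version with zero boundary values, shown for orientation only; it is not GTM-1's actual transport equation, which also handles advection, sorption and decay:

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal linear system by the Thomas algorithm."""
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i - 1] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_step(u, r):
    """One CN step of u_t = D u_xx with u = 0 at both ends; r = D*dt/dx^2.
    Implicit and explicit halves are averaged, giving 2nd-order accuracy."""
    n = len(u)
    m = n - 2
    sub = [-r / 2.0] * (m - 1)
    diag = [1.0 + r] * m
    sup = [-r / 2.0] * (m - 1)
    rhs = [u[i] + (r / 2.0) * (u[i - 1] - 2.0 * u[i] + u[i + 1])
           for i in range(1, n - 1)]
    return [0.0] + thomas(sub, diag, sup, rhs) + [0.0]

u0 = [0.0] * 11
u0[5] = 1.0                        # initial concentration spike
u1 = crank_nicolson_step(u0, 0.5)  # the spike spreads and its peak decays
```

The implicit half of the average is what makes the scheme unconditionally stable, at the cost of one tridiagonal solve per time step.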
Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme
Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook
1995-01-01
Three different schemes are employed to solve the benchmark problems. The first is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third is an optimized compact finite difference scheme modified by us: 4th-order Runge-Kutta time stepping and a 4th-order pentadiagonal compact spatial discretization with maximum resolution characteristics. The problems of category 1 are solved using the second (UNO3-ACM) and third (optimized compact) schemes. The problems of category 2 are solved using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved using the first (TVD3) scheme. It can be concluded from the present calculations that the optimized compact scheme and the UNO3-ACM show good resolution for category 1 and category 2, respectively.
Analysis of visual coding variables on CRT generated displays
International Nuclear Information System (INIS)
Blackman, H.S.; Gilmore, W.E.
1985-01-01
Cathode-ray-tube-generated safety parameter display systems in nuclear power plant control rooms have been found to be more effective when color coding is employed. Research has indicated strong support for graphic coding techniques, particularly in redundant coding schemes. In addition, findings on pictographs, as applied in coding schemes, indicate the need for careful application and for further research towards the development of a standardized set of symbols
A. Garba, Aminata
2017-01-01
This paper presents a new approach to optical Code Division Multiple Access (CDMA) network transmission scheme using alternated amplitude sequences and energy differentiation at the transmitters to allow concurrent and secure transmission of several signals. The proposed system uses error control encoding and soft-decision demodulation to reduce the multi-user interference at the receivers. The design of the proposed alternated amplitude sequences, the OCDMA energy modulators and the soft decision, single-user demodulators are also presented. Simulation results show that the proposed scheme allows achieving spectral efficiencies higher than several reported results for optical CDMA and much higher than the Gaussian CDMA capacity limit.
3D code for simulations of fluid flows
International Nuclear Information System (INIS)
Skandera, D.
2004-01-01
In this paper, the present status of the development of a new numerical code is reported. The code is intended for simulations of fluid flows. The finite volume approach is adopted for solving the standard fluid equations. They are treated in conservative form to ensure correct conservation of fluid quantities. Thus, a nonlinear hyperbolic system of conservation laws is solved numerically. The code uses the Eulerian description of the fluid and is designed as a high-order central numerical scheme. The central approach employs no (approximate) Riemann solver and is less computationally expensive. The high-order WENO strategy is adopted in the reconstruction step to achieve results comparable with more accurate Riemann solvers. A combination of the central approach with iterative solution of a local Riemann problem is tested and the behaviour of this numerical flux is reported. The extension to three dimensions is implemented using a dimension-by-dimension approach; hence, no complicated dimensional splitting needs to be introduced. The code is fully parallelized with the MPI library. Several standard hydrodynamic tests in one, two and three dimensions were performed and their results are presented. (author)
A micro-hydrology computation ordering algorithm
Croley, Thomas E.
1980-11-01
Discrete-distributed-parameter models are essential for watershed modelling where practical consideration of spatial variations in watershed properties and inputs is desired. Such modelling is necessary for analysis of detailed hydrologic impacts from management strategies and land-use effects. Trade-offs between model validity and model complexity exist in resolution of the watershed. Once these are determined, the watershed is then broken into sub-areas which each have essentially spatially-uniform properties. Lumped-parameter (micro-hydrology) models are applied to these sub-areas and their outputs are combined through the use of a computation ordering technique, as illustrated by many discrete-distributed-parameter hydrology models. Manual ordering of these computations requires forethought, and is tedious, error-prone, sometimes storage-intensive and poorly adaptable to changes in watershed resolution. A programmable algorithm for ordering micro-hydrology computations is presented that enables automatic ordering of computations within the computer via an easily understood and easily implemented "node" definition, numbering and coding scheme. This scheme and the algorithm are detailed in logic flow-charts and an example application is presented. Extensions and modifications of the algorithm are easily made for complex geometries or differing micro-hydrology models. The algorithm is shown to be superior to manual ordering techniques and has potential use in high-resolution studies.
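The essence of such an ordering algorithm is a topological sort of the drainage network: every sub-area must be computed before the area it drains into. A minimal Kahn-style sketch over an assumed `drains_to` map (illustrative only, not Croley's actual node-numbering and coding scheme):

```python
def computation_order(drains_to):
    """Order sub-areas so each is computed only after all its upstream inputs.

    drains_to maps each sub-area to its downstream neighbour (None = outlet)."""
    pending = {n: 0 for n in drains_to}        # unfinished upstream inputs per node
    for down in drains_to.values():
        if down is not None:
            pending[down] += 1
    ready = [n for n in drains_to if pending[n] == 0]  # headwater areas
    order = []
    while ready:
        node = ready.pop()
        order.append(node)
        down = drains_to[node]
        if down is not None:
            pending[down] -= 1
            if pending[down] == 0:             # all inflows to 'down' are computed
                ready.append(down)
    return order

# Two headwater areas A and B drain into C, which drains to outlet D.
order = computation_order({"A": "C", "B": "C", "C": "D", "D": None})
```

Automating this step is exactly what removes the tedium and error-proneness of manual ordering: re-running the sort after a resolution change regenerates a valid computation sequence instantly.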
Directory of Open Access Journals (Sweden)
Alina BOGOI
2016-12-01
Supersonic/hypersonic flows with strong shocks need special treatment in Computational Fluid Dynamics (CFD) in order to accurately capture the location of a discontinuity and its magnitude. To avoid numerical instabilities in the presence of discontinuities, the numerical schemes must generate low dissipation and low dispersion error. Consequently, the algorithms used to calculate the time and space derivatives should exhibit low amplitude and phase errors. This paper focuses on the comparison of numerical results obtained by simulations with several high-resolution numerical schemes applied to linear and non-linear one-dimensional conservation laws. Analytical solutions are provided for all benchmark tests, considering smooth periodic conditions. All the schemes converge to the proper weak solution for a linear flux and smooth initial conditions. However, when the flux is non-linear, discontinuities may develop from smooth initial conditions and the shock must be correctly captured. All the schemes accurately identify the shock position, at the price of numerical oscillations in the vicinity of the sudden variation. We believe that the identification of this purely numerical behavior, without physical relevance, in the 1D case is extremely useful for avoiding problems related to the stability and convergence of the solution in the general 3D case.
Spread-spectrum communication using binary spatiotemporal chaotic codes
International Nuclear Information System (INIS)
Wang Xingang; Zhan Meng; Gong Xiaofeng; Lai, C.H.; Lai, Y.-C.
2005-01-01
We propose a scheme to generate binary code for baseband spread-spectrum communication by using a chain of coupled chaotic maps. We compare the performances of this type of spatiotemporal chaotic code with those of a conventional code used frequently in digital communication, the Gold code, and demonstrate that our code is comparable or even superior to the Gold code in several key aspects: security, bit error rate, code generation speed, and the number of possible code sequences. As the field of communicating with chaos faces doubts in terms of performance comparison with conventional digital communication schemes, our work gives a clear message that communicating with chaos can be advantageous and it deserves further attention from the nonlinear science community
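The idea of deriving a binary spreading sequence from a chain of coupled chaotic maps can be sketched as follows. Logistic maps with diffusive ring coupling and simple thresholding are assumptions chosen for illustration; the paper's exact lattice, coupling and quantizer may differ:

```python
def chaotic_code(n_bits, n_maps=8, eps=0.1, seed=0.123):
    """Binary sequence from a ring of diffusively coupled logistic maps."""
    logistic = lambda x: 4.0 * x * (1.0 - x)          # fully chaotic regime
    # spread distinct initial conditions across the lattice
    x = [(seed + i / n_maps) % 1.0 for i in range(n_maps)]
    bits = []
    for _ in range(n_bits):
        fx = [logistic(v) for v in x]
        # diffusive coupling with periodic boundary conditions
        x = [(1.0 - eps) * fx[i]
             + 0.5 * eps * (fx[(i - 1) % n_maps] + fx[(i + 1) % n_maps])
             for i in range(n_maps)]
        bits.append(1 if x[n_maps // 2] > 0.5 else 0)  # threshold one site
    return bits

code = chaotic_code(64)
```

Because the dynamics are deterministic, the same seed regenerates the same sequence at the receiver, while sensitive dependence on the seed gives the large code family and security properties the abstract refers to.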
Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.
2018-03-01
In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several High-Order (HO) and High-Resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method, and it is adopted here. The NWF method is used to incorporate the high-order resolution schemes in the discretized RTE. The NWF method is compared, in terms of the computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for calculations of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered to evaluate ten schemes. The results showed that with the DC method, in general, the scheme with the lowest CPU time is the SOU. In contrast with the results of the DC procedure, the CPU times for the DIAMOND and QUICK schemes using the NWF method are shown to be between 3.8 and 23.1% faster and between 12.6 and 56.1% faster, respectively. However, the other schemes are more time-consuming when the NWF is used instead of the DC method. Additionally, a second test case was presented, and the results showed that, depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method. As an example, the CPU times for the QUICK and SMART schemes are 61.8 and 203.7% slower, respectively, when the NWF formulation is used for the second test case. Finally, future research is required to explore the computational cost of the NWF method in more complex problems.
International Nuclear Information System (INIS)
Estiot, J.C.; Salvatores, M.; Palmiotti, G.
1981-01-01
We present the characteristics of SAMPO, a one-dimensional transport theory code system, which is used for the following types of calculation: sensitivity analysis for linear or bilinear functionals of the direct or adjoint fluxes and their ratios, and classical perturbation analysis. First-order as well as higher-order calculations can be performed.
Development and Application of a Plant Code to the Analysis of Transients in Integrated Reactors
International Nuclear Information System (INIS)
Rabiti, A.; Gimenez, M.; Delmastro, D.; Zanocco, P.
2003-01-01
In this work, a secondary system model for a CAREM-25 type nuclear power plant was developed. A two-phase flow homogeneous model was used and found adequate for the scope of the present work. A finite difference scheme was used for the numerical implementation of the model. This model was coupled to the HUARPE code, a primary circuit code, in order to obtain a plant code. This plant code was used to analyze the inherent response of the system, without control feedback loops, for a transient involving a reduction of the steam generator feed-water mass flow. The results obtained are satisfactory, but a validation against other plant codes is still necessary
A Rank-Constrained Matrix Representation for Hypergraph-Based Subspace Clustering
Directory of Open Access Journals (Sweden)
Yubao Sun
2015-01-01
This paper presents a novel rank-constrained matrix representation combined with hypergraph spectral analysis to enable the recovery of the original subspace structures of corrupted data. Real-world data are frequently corrupted with both sparse error and noise. Our matrix decomposition model separates the low-rank, sparse error, and noise components from the data in order to enhance robustness to the corruption. In order to obtain the desired rank representation of the data within a dictionary, our model directly utilizes rank constraints by restricting the upper bound of the rank range. An alternating projection algorithm is proposed to estimate the low-rank representation and separate the sparse error from the data matrix. To further capture the complex relationships among data distributed in multiple subspaces, we use a hypergraph to represent the data by encapsulating multiple related samples into one hyperedge. The final clustering result is obtained by spectral decomposition of the hypergraph Laplacian matrix. Validation experiments on the Extended Yale Face Database B, AR, and Hopkins 155 datasets show that the proposed method is a promising tool for subspace clustering.
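The flavor of such a low-rank-plus-sparse decomposition can be shown in miniature: a dominant rank-1 approximation by power iteration, alternated with soft-thresholding of the residual for the sparse part. This is a drastically simplified stand-in for illustration, not the paper's algorithm or its rank-range constraint:

```python
def rank1_approx(M, iters=50):
    """Dominant rank-1 approximation of matrix M via power iteration."""
    n, m = len(M), len(M[0])
    v = [1.0] * m
    for _ in range(iters):
        u = [sum(M[i][j] * v[j] for j in range(m)) for i in range(n)]
        nu = max(sum(x * x for x in u) ** 0.5, 1e-15)
        u = [x / nu for x in u]
        w = [sum(M[i][j] * u[i] for i in range(n)) for j in range(m)]
        sigma = max(sum(x * x for x in w) ** 0.5, 1e-15)
        v = [x / sigma for x in w]
    return [[sigma * u[i] * v[j] for j in range(m)] for i in range(n)]

def soft(x, t):
    """Soft-thresholding: shrink toward zero, the proximal step for sparsity."""
    return x - t if x > t else (x + t if x < -t else 0.0)

def decompose(M, lam, rounds=10):
    """Alternate a low-rank fit L with a sparse-error update E, M ~ L + E."""
    n, m = len(M), len(M[0])
    E = [[0.0] * m for _ in range(n)]
    for _ in range(rounds):
        L = rank1_approx([[M[i][j] - E[i][j] for j in range(m)] for i in range(n)])
        E = [[soft(M[i][j] - L[i][j], lam) for j in range(m)] for i in range(n)]
    return L, E

M = [[1.0, 2.0], [2.0, 4.0]]   # exactly rank 1
L, E = decompose(M, lam=10.0)  # large lam: nothing is flagged as sparse error
```

The threshold `lam` trades off between explaining entries with the low-rank term and attributing them to sparse error; production methods set it from the data dimensions rather than by hand.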
Transport synthetic acceleration scheme for multi-dimensional neutron transport problems
Energy Technology Data Exchange (ETDEWEB)
Modak, R S; Kumar, Vinod; Menon, S V.G. [Theoretical Physics Div., Bhabha Atomic Research Centre, Mumbai (India); Gupta, Anurag [Reactor Physics Design Div., Bhabha Atomic Research Centre, Mumbai (India)
2005-09-15
The numerical solution of the linear multi-energy-group neutron transport equation is required in several analyses in nuclear reactor physics and allied areas. Computer codes based on the discrete ordinates (Sn) method are commonly used for this purpose. These codes solve the external source problem and the K-eigenvalue problem. The overall solution technique involves the solution of a source problem in each energy group as an intermediate procedure. Such a single-group source problem is solved by the so-called Source Iteration (SI) method. As is well known, the SI method converges very slowly for optically thick and highly scattering regions, leading to large CPU times. Over the last three decades, many schemes have been tried to accelerate the SI, the most prominent being the Diffusion Synthetic Acceleration (DSA) scheme. The DSA scheme, however, often fails and is also rather difficult to implement. In view of this, in 1997, Ramone and others developed a new acceleration scheme called Transport Synthetic Acceleration (TSA), which is much more robust and easy to implement. This scheme has recently been incorporated in 2-D and 3-D in-house codes at BARC. This report presents studies on the utility of the TSA scheme for fairly general test problems involving many energy groups and anisotropic scattering. The scheme is found to be useful for problems in Cartesian as well as cylindrical geometry. (author)
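The slow convergence of Source Iteration that motivates acceleration schemes like DSA and TSA is easy to see in a zero-dimensional caricature, where SI reduces to the fixed-point iteration phi <- c*phi + q with scattering ratio c. This toy model (not the multi-group Sn solver itself) shows the iteration count blowing up as c approaches 1, i.e. for highly scattering media:

```python
def source_iteration(c, q, tol=1e-8, max_iter=10**6):
    """Fixed-point SI for an infinite homogeneous medium; converges to q/(1-c).

    c: scattering ratio (0 <= c < 1), q: external source."""
    phi, its = 0.0, 0
    while its < max_iter:
        new = c * phi + q          # scatter source plus external source
        its += 1
        converged = abs(new - phi) < tol
        phi = new
        if converged:
            break
    return phi, its

phi_a, its_a = source_iteration(0.5, 1.0)    # weak scattering: fast
phi_b, its_b = source_iteration(0.99, 1.0)   # near-unity scattering: slow
```

The error shrinks by a factor of c per sweep, so the cost scales like 1/(1-c); synthetic acceleration schemes attack exactly this spectral radius.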
DEFF Research Database (Denmark)
Pötz, Katharina Anna; Haas, Rainer; Balzarova, Michaela
2013-01-01
Purpose – The rise of CSR followed a demand for CSR standards and guidelines. In a sector already characterized by a large number of standards, the authors seek to ask what CSR schemes apply to agribusiness, and how they can be systematically compared and analysed. Design/methodology/approach – Following a deductive-inductive approach, the authors develop a model to compare and analyse CSR schemes based on existing studies and on coding qualitative data on 216 CSR schemes. Findings – The authors confirm that CSR standards and guidelines have entered agribusiness and identify a complex landscape of schemes that can be categorized on focus areas, scales, mechanisms, origins, types and commitment levels. Research limitations/implications – The findings contribute to conceptual and empirical research on existing models to compare and analyse CSR standards. Sampling technique and depth of analysis limit...
Adaptive transmission schemes for MISO spectrum sharing systems: Tradeoffs and performance analysis
Bouida, Zied
2014-10-01
In this paper, we propose a number of adaptive transmission techniques in order to improve the performance of the secondary link in a spectrum sharing system. We first introduce the concept of minimum-selection maximum ratio transmission (MS-MRT) as an adaptive variation of the existing maximum ratio transmission (MRT) technique. While in MRT all available antennas are used for transmission, MS-MRT uses the minimum subset of antennas verifying both the interference constraint (IC) to the primary user and the bit error rate (BER) requirements. Like MRT, MS-MRT assumes that perfect channel state information (CSI) is available at the secondary transmitter (ST), which makes this scheme challenging from a practical point of view. To overcome this challenge, we propose another transmission technique based on orthogonal space-time block codes with transmit antenna selection (TAS). This technique uses the full-rate, full-diversity Alamouti scheme in order to maximize the secondary link's transmission rate. The performance of these techniques is analyzed in terms of the average spectral efficiency (ASE), average number of transmit antennas, average delay, average BER, and outage performance. In order to motivate these analytical results, the tradeoffs offered by the proposed schemes are summarized and then demonstrated through several numerical examples.
Chen, Fujun; Feng, Gang; Zhang, Siwei
2016-10-01
The triple-branch signal detection (TBSD) scheme can eliminate multiple-user interference (MUI) without a fixed in-phase cross-correlation (IPCC) stipulation in spectral-amplitude-coding optical code division multiple access (SAC-OCDMA) systems. In this paper, we modify the traditional TBSD scheme and theoretically analyze the principle of the MUI elimination. Then, the bit-error-rate (BER) performance of the modified TBSD scheme is investigated under multiple transmission rates. The results show that the modified TBSD scheme, employing codes with unfixed IPCC, can achieve simultaneous optical code recognition and MUI elimination in SAC-OCDMA.
Transform coding for hardware-accelerated volume rendering.
Fout, Nathaniel; Ma, Kwan-Liu
2007-01-01
Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
Unequal error control scheme for dimmable visible light communication systems
Deng, Keyan; Yuan, Lei; Wan, Yi; Li, Huaan
2017-01-01
Visible light communication (VLC), which has the advantages of a very large bandwidth, high security, and freedom from license-related restrictions and electromagnetic interference, has attracted much interest. Because a VLC system simultaneously performs illumination and communication functions, dimming control, efficiency, and reliable transmission are significant and challenging issues for such systems. In this paper, we propose a novel unequal error control (UEC) scheme in which expanding window fountain (EWF) codes in an on-off keying (OOK)-based VLC system are used to support different dimming target values. To evaluate the performance of the scheme for various dimming target values, we apply it to H.264 scalable video coding bitstreams in a VLC system. The results of simulations performed using additive white Gaussian noise (AWGN) at different signal-to-noise ratios (SNRs) are used to compare the performance of the proposed scheme for various dimming target values. It is found that the proposed UEC scheme enables earlier base-layer recovery compared to the use of the equal error control (EEC) scheme for different dimming target values and therefore affords robust transmission for scalable video multicast over optical wireless channels. This is because of the unequal error protection (UEP) and unequal recovery time (URT) of the EWF code in the proposed scheme.
Decomposition of the Google PageRank and Optimal Linking Strategy
Avrachenkov, Konstantin; Litvak, Nelli
We provide the analysis of the Google PageRank from the perspective of the Markov Chain Theory. First we study the Google PageRank for a Web that can be decomposed into several connected components which do not have any links to each other. We show that in order to determine the Google PageRank for
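The PageRank computation underlying this analysis can be illustrated with plain power iteration. The sketch below uses uniform teleportation and a hypothetical link structure; it is not the authors' decomposition machinery, but running it on a web of two disconnected components shows that the rank mass splits evenly between components of equal size.

```python
def pagerank(links, d=0.85, iters=100):
    """Plain power-iteration PageRank with uniform teleportation.
    links[i] lists the pages that page i links to; a dangling page
    spreads its mass uniformly over all pages."""
    n = len(links)
    r = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - d) / n] * n
        for i, outs in enumerate(links):
            if outs:
                share = d * r[i] / len(outs)
                for j in outs:
                    new[j] += share
            else:  # dangling node: uniform redistribution
                for j in range(n):
                    new[j] += d * r[i] / n
        r = new
    return r
```

On the graph {0<->1, 2<->3}, two disconnected two-page components, every page ends up with rank 1/4, as the component-wise decomposition predicts.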
Limits of rank 4 Azumaya algebras and applications to desingularisation
International Nuclear Information System (INIS)
Venkata Balaji, T.E.
2001-07-01
A smooth scheme structure on the space of limits of Azumaya algebra structures on a free rank-4 module over any noetherian commutative ring is shown to exist, generalizing Seshadri's theorem that the variety of specialisations of (2x2)-matrix algebras is smooth in characteristic ≠ 2. As an application, a construction of Seshadri is shown in a characteristic-free way to desingularise the moduli space of rank-two even-degree semistable vector bundles on a complete curve. As another application, a construction of Nori over Z is extended to the case of a normal domain which is a finitely generated algebra over a universally Japanese (Nagata) ring, and is shown to desingularise the Artin moduli space of invariants of several matrices in rank 2. This desingularisation is shown to have a good specialisation property if the Artin moduli space has geometrically reduced fibers; this happens, for example, over Z. Essential use is made of M. Kneser's concept of 'semiregular quadratic module'. For any free quadratic module of odd rank, a formula linking the half-discriminant and the values of the quadratic form on its radical is derived. (author)
Asymptotically stable fourth-order accurate schemes for the diffusion equation on complex shapes
International Nuclear Information System (INIS)
Abarbanel, S.; Ditkowski, A.
1997-01-01
An algorithm which solves the multidimensional diffusion equation on complex shapes to fourth-order accuracy and is asymptotically stable in time is presented. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail. The ability of the paradigm to be applied to arbitrary geometric domains is an important feature of the algorithm. 5 refs., 14 figs
Differential Space-Time Block Code Modulation for DS-CDMA Systems
Directory of Open Access Journals (Sweden)
Liu Jianhua
2002-01-01
A differential space-time block code (DSTBC) modulation scheme is used to improve the performance of DS-CDMA systems in fast time-dispersive fading channels. The resulting scheme is referred to as differential space-time block code modulation for DS-CDMA (DSTBC-CDMA) systems. The new modulation and demodulation schemes are especially studied for the down-link transmission of DS-CDMA systems. We present three demodulation schemes, referred to as the differential space-time block code Rake (D-Rake) receiver, the differential space-time block code deterministic (D-Det) receiver, and the differential space-time block code deterministic de-prefix (D-Det-DP) receiver, respectively. The D-Det receiver exploits the known information of the spreading sequences and their delayed paths deterministically in addition to the Rake-type combination; consequently, it can outperform the D-Rake receiver, which employs the Rake-type combination only. The D-Det-DP receiver avoids the effect of intersymbol interference and hence can offer better performance than the D-Det receiver.
An Orbit And Dispersion Correction Scheme for the PEP II
International Nuclear Information System (INIS)
Cai, Y.; Donald, M.; Shoaee, H.; White, G.; Yasukawa, L.A.
2011-01-01
To achieve optimum luminosity in a storage ring it is vital to control the residual vertical dispersion. In the original PEP storage ring, a scheme to control the residual dispersion function was implemented using the ring orbit as the controlling element, the 'best' orbit not necessarily being the one giving the lowest vertical dispersion. A similar scheme has been implemented in both the on-line control code and the simulation code LEGO. The method involves finding the response matrices (the sensitivity of the orbit/dispersion at each beam position monitor (BPM) to each orbit corrector) and solving in a least-squares sense for minimum orbit, dispersion function, or both. The optimum solution is usually a subset of the full least-squares solution. A scheme for simultaneously correcting the orbits and dispersion has been implemented in the simulation code and on-line control system for PEP-II. The scheme is based on the eigenvector decomposition method. An important ingredient of the scheme is choosing the optimum eigenvectors that minimize the orbit, dispersion and corrector strength. Simulations indicate this to be a very effective way to control the residual vertical dispersion.
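The least-squares step described above can be sketched in a few lines. This is a toy normal-equations solve for corrector strengths against a stacked orbit/dispersion response matrix; all names are hypothetical, and a small Tikhonov term stands in for the eigenvector truncation used in the real scheme.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (tiny dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def correctors(R, reading, reg=1e-6):
    """Least-squares corrector strengths theta minimizing
    ||reading + R theta||^2 (plus a tiny regularizer on strengths),
    i.e. the stacked orbit/dispersion response-matrix solve."""
    m, n = len(R), len(R[0])
    AtA = [[sum(R[k][i] * R[k][j] for k in range(m)) + (reg if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Atb = [-sum(R[k][i] * reading[k] for k in range(m)) for i in range(n)]
    return solve(AtA, Atb)
```

For a 3-monitor, 2-corrector toy response matrix the solve drives the residual orbit to zero exactly.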
A self-organized internal models architecture for coding sensory-motor schemes
Directory of Open Access Journals (Sweden)
Esaú eEscobar Juárez
2016-04-01
Cognitive robotics research draws inspiration from theories and models of cognition, as conceived by neuroscience or cognitive psychology, to investigate biologically plausible computational models in artificial agents. In this field, the theoretical framework of Grounded Cognition provides epistemological and methodological grounds for the computational modeling of cognition. It has been stressed in the literature that simulation, prediction, and multi-modal integration are key aspects of cognition and that computational architectures capable of putting them into play in a biologically plausible way are a necessity. Research in this direction has brought extensive empirical evidence suggesting that Internal Models are suitable mechanisms for sensory-motor integration. However, current Internal Models architectures show several drawbacks, mainly due to the lack of a unified substrate allowing for a true sensory-motor integration space, enabling flexible and scalable ways to model cognition under the constraints of the embodiment hypothesis. We propose the Self-Organized Internal Models Architecture (SOIMA), a computational cognitive architecture coded by means of a network of self-organized maps, implementing coupled internal models that allow modeling multi-modal sensory-motor schemes. Our approach integrally addresses the issues of current implementations of Internal Models. We discuss the design and features of the architecture, and provide empirical results on a humanoid robot that demonstrate the benefits and potentialities of the SOIMA concept for studying cognition in artificial agents.
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
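The threshold-driven coder selection can be sketched with a one-dimensional DCT and two quantizers of different rates. This toy version (hypothetical step sizes and threshold, no vector quantization) only illustrates the try-the-cheap-coder-first selection rule, not the paper's full MBC scheme.

```python
import math

def dct(block):
    """1-D orthonormal DCT-II."""
    N = len(block)
    out = []
    for k in range(N):
        s = sum(x * math.cos(math.pi * (n + 0.5) * k / N)
                for n, x in enumerate(block))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def idct(coeffs):
    """Inverse of the orthonormal DCT-II above (DCT-III)."""
    N = len(coeffs)
    out = []
    for n in range(N):
        s = coeffs[0] * math.sqrt(1.0 / N)
        s += sum(coeffs[k] * math.sqrt(2.0 / N) *
                 math.cos(math.pi * (n + 0.5) * k / N) for k in range(1, N))
        out.append(s)
    return out

def mbc_encode(block, step_coarse=8.0, step_fine=1.0, dist_thresh=4.0):
    """Threshold-driven selection: try the cheap coarse quantizer first;
    if the squared-error distortion exceeds the threshold, fall back to
    the higher-rate fine quantizer (a two-coder 'mixture')."""
    best = None
    for step in (step_coarse, step_fine):
        q = [int(round(c / step)) for c in dct(block)]
        rec = idct([v * step for v in q])
        dist = sum((a - b) ** 2 for a, b in zip(block, rec))
        best = (step, q)
        if dist <= dist_thresh:
            break
    return best
```

A flat block of value 10 is distorted too much by the coarse quantizer, so the selector falls through to the fine one.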
Construction of Capacity Achieving Lattice Gaussian Codes
Alghamdi, Wael
2016-04-01
We propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3].
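Construction A builds a lattice from a linear code C over F_p as the set of integer vectors whose residues mod p form a codeword. The brute-force membership test below illustrates that definition for toy parameters (all values hypothetical); it says nothing about the averaging argument itself.

```python
from itertools import product

def in_construction_a(x, gen, p):
    """Construction-A membership: an integer vector x lies in the lattice
    iff (x mod p) is a codeword of the linear code generated over F_p by
    the rows of `gen`. Brute force over messages; fine for toy sizes."""
    n = len(x)
    target = tuple(v % p for v in x)
    k = len(gen)
    for msg in product(range(p), repeat=k):
        cw = tuple(sum(msg[i] * gen[i][j] for i in range(k)) % p
                   for j in range(n))
        if cw == target:
            return True
    return False
```

Shifting a lattice point by any multiple of p per coordinate keeps it in the lattice, as the mod-p reduction shows.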
Scalable Faceted Ranking in Tagging Systems
Orlicki, José I.; Alvarez-Hamelin, J. Ignacio; Fierens, Pablo I.
Nowadays, web collaborative tagging systems which allow users to upload, comment on and recommend contents, are growing. Such systems can be represented as graphs where nodes correspond to users and tagged-links to recommendations. In this paper we analyze the problem of computing a ranking of users with respect to a facet described as a set of tags. A straightforward solution is to compute a PageRank-like algorithm on a facet-related graph, but it is not feasible for online computation. We propose an alternative: (i) a ranking for each tag is computed offline on the basis of tag-related subgraphs; (ii) a faceted order is generated online by merging rankings corresponding to all the tags in the facet. Based on the graph analysis of YouTube and Flickr, we show that step (i) is scalable. We also present efficient algorithms for step (ii), which are evaluated by comparing their results with two gold standards.
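Step (ii), merging precomputed per-tag rankings online, can be sketched as a score merge. Summing per-tag scores is only one of several possible merge rules, and the tag names and scores below are hypothetical.

```python
def merge_facet(tag_scores, facet):
    """Online step (ii): merge precomputed per-tag score maps into a
    single faceted ranking by summing each user's scores over the
    facet's tags, then sorting users by combined score."""
    combined = {}
    for tag in facet:
        for user, score in tag_scores.get(tag, {}).items():
            combined[user] = combined.get(user, 0.0) + score
    return sorted(combined, key=lambda u: combined[u], reverse=True)
```

A user who scores moderately on every tag of the facet can outrank one who dominates a single tag, which is the point of merging rather than picking one tag's ranking.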
An Adaptive Coding Scheme For Effective Bandwidth And Power ...
African Journals Online (AJOL)
Codes for communication channels are in most cases chosen on the basis of the signal-to-noise ratio expected on a given transmission channel. The worst possible noise condition is normally assumed in the choice of appropriate codes, such that no more than a specified minimum error results during transmission on the channel.
Building Secure Public Key Encryption Scheme from Hidden Field Equations
Directory of Open Access Journals (Sweden)
Yuan Ping
2017-01-01
Multivariate public key cryptography is a set of cryptographic schemes built from the NP-hardness of solving quadratic equations over finite fields, amongst which the hidden field equations (HFE) family of schemes remains the most famous. However, the original HFE scheme was insecure, and the follow-up modifications were shown to be still vulnerable to attacks. In this paper, we propose a new variant of the HFE scheme by considering the special equation x^2 = x, which holds over the finite field F3 exactly when x = 0, 1. We observe that the equation can be used to further destroy the special structure of the underlying central map of the HFE scheme. It is shown that the proposed public key encryption scheme is secure against known attacks including the MinRank attack, the algebraic attacks, and the linearization-equations attacks. The proposal gains some advantages over the original HFE scheme with respect to the encryption speed and public key size.
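The special equation the variant relies on is easy to check exhaustively over F3: it holds for 0 and 1 but fails for 2.

```python
def satisfies_special_equation(x, p=3):
    """The HFE variant exploits that x^2 = x holds in F_p exactly for
    x in {0, 1}; over F3 the remaining element 2 violates it
    (2*2 = 4 = 1 mod 3, and 1 != 2)."""
    return (x * x) % p == x % p

# enumerate the solutions over F3
holds = [x for x in range(3) if satisfies_special_equation(x)]
```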
Prasertwattana, Kanit; Shimizu, Yoshiaki; Chiadamrong, Navee
This paper studies the material ordering and inventory control of supply chain systems. The effect of controlling policies is analyzed under three different configurations of the supply chain systems, and the formulated problem is solved by using an evolutionary optimization method known as Differential Evolution (DE). The numerical results show that the coordinating policy with the incentive scheme outperforms the other policies and can improve the performance of the overall system as well as of all members under the concept of supply chain management.
Vandierendonck, André
2016-01-01
Working memory researchers do not agree on whether order in serial recall is encoded by dedicated modality-specific systems or by a more general modality-independent system. Although previous research supports the existence of autonomous modality-specific systems, it has been shown that serial recognition memory is prone to cross-modal order interference by concurrent tasks. The present study used a serial recall task, which was performed in a single-task condition and in a dual-task condition with an embedded memory task in the retention interval. The modality of the serial task was either verbal or visuospatial, and the embedded tasks were in the other modality and required either serial or item recall. Care was taken to avoid modality overlaps during presentation and recall. In Experiment 1, visuospatial but not verbal serial recall was more impaired when the embedded task was an order than when it was an item task. Using a more difficult verbal serial recall task, verbal serial recall was also more impaired by another order recall task in Experiment 2. These findings are consistent with the hypothesis of modality-independent order coding. The implications for views on short-term recall and the multicomponent view of working memory are discussed.
Khan, Haseeb Ahmad
2004-01-01
The massive surge in the production of microarray data poses a great challenge for proper analysis and interpretation. In recent years numerous computational tools have been developed to extract meaningful interpretations of microarray gene expression data. However, a convenient tool for two-group comparison of microarray data is still lacking, and users have to rely on commercial statistical packages that might be costly and require special skills, in addition to extra time and effort for transferring data from one platform to another. Various statistical methods, including the t-test, analysis of variance, the Pearson test and the Mann-Whitney U test, have been reported for comparing microarray data, whereas the Wilcoxon signed-rank test, which is an appropriate test for two-group comparison of gene expression data, has largely been neglected in microarray studies. The aim of this investigation was to build an integrated tool, ArraySolver, for colour-coded graphical display and comparison of gene expression data using the Wilcoxon signed-rank test. The results of software validation showed similar outputs with ArraySolver and SPSS for large datasets, whereas the former program appeared to be more accurate for 25 or fewer pairs (n ≤ 25), suggesting its potential application in analysing molecular signatures that usually contain small numbers of genes. The main advantages of ArraySolver are easy data selection, a convenient report format, accurate statistics and the familiar Excel platform.
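The Wilcoxon signed-rank statistic itself is straightforward to compute. The sketch below drops zero differences and uses midranks for ties, one common convention; ArraySolver's exact conventions are not specified in the abstract.

```python
def wilcoxon_signed_rank(a, b):
    """Wilcoxon signed-rank statistic W = min(W+, W-) for paired samples:
    drop zero differences, rank the absolute differences (midranks for
    ties), then sum the ranks of positive and negative differences."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        mid = (i + j) / 2.0 + 1.0           # average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = mid
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```

For paired data (1,2,3,4) vs (0,4,1,9) the differences are 1, -2, 2, -5; the tied |2| values share rank 2.5, giving W+ = 3.5 and W- = 6.5.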
Wolff, Hans-Georg; Preising, Katja
2005-02-01
To ease the interpretation of higher order factor analysis, the direct relationships between variables and higher order factors may be calculated by the Schmid-Leiman solution (SLS; Schmid & Leiman, 1957). This simple transformation of higher order factor analysis orthogonalizes first-order and higher order factors and thereby allows the interpretation of the relative impact of factor levels on variables. The Schmid-Leiman solution may also be used to facilitate theorizing and scale development. The rationale for the procedure is presented, supplemented by syntax codes for SPSS and SAS, since the transformation is not part of most statistical programs. Syntax codes may also be downloaded from www.psychonomic.org/archive/.
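The transformation reduces to simple matrix arithmetic. The sketch below assumes a single higher-order factor with loadings g on the first-order factors, so that the direct general-factor loadings are F·g and the residualized first-order loadings are F scaled per factor by sqrt(1 - g_j^2); this mirrors the SLS formulas rather than the cited SPSS/SAS syntax.

```python
import math

def schmid_leiman(first_order, higher_order):
    """Schmid-Leiman transformation for one higher-order factor:
    - loadings of variables on the general factor are F @ g,
    - residualized first-order loadings are F with column j scaled
      by sqrt(1 - g_j^2), orthogonalizing the factor levels."""
    general = [sum(f_ij * higher_order[j] for j, f_ij in enumerate(row))
               for row in first_order]
    resid = [[f_ij * math.sqrt(1.0 - higher_order[j] ** 2)
              for j, f_ij in enumerate(row)] for row in first_order]
    return general, resid
```

For two variables loading 0.8 and 0.6 on separate first-order factors that each load 0.5 on the general factor, the direct general-factor loadings are 0.4 and 0.3.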
International Nuclear Information System (INIS)
Askew, J.
1981-01-01
WIMS-D4 is the latest version of the original form of the Winfrith Improved Multigroup Scheme, developed in 1963-5 for lattice calculations on all types of thermal reactor, whether moderated by graphite, heavy or light water. The code, in earlier versions, has been available from the NEA code centre for a number of years in both IBM and CDC dialects of FORTRAN. An important feature of this code was its rapid, accurate deterministic system for treating resonance capture in heavy nuclides, capable of dealing with both regular pin lattices and with cluster geometries typical of pressure-tube and gas-cooled reactors. WIMS-E is a compatible code scheme in which each calculation step is bounded by standard interfaces on disc or tape. The interfaces contain files of information in a standard form, restricted to numbers representing physically meaningful quantities such as cross-sections and fluxes. Restricting code intercommunication to this channel limits the possible propagation of errors. A module is capable of transforming WIMS-D output into the standard interface form, and hence the two schemes can be linked if required. LWR-WIMS was developed in 1970 as a method of calculating LWR reloads for the fuel fabricators BNFL/GUNF. It uses the WIMS-E library and a number of the same modules.
A Suboptimal Scheme for Multi-User Scheduling in Gaussian Broadcast Channels
Zafar, Ammar; Alouini, Mohamed-Slim; Shaqfeh, Mohammad
2014-01-01
This work proposes a suboptimal multi-user scheduling scheme for Gaussian broadcast channels which improves upon classical single-user selection, while considerably reducing complexity as compared to optimal superposition coding with successive interference cancellation. The proposed scheme combines the two users with the maximum weighted instantaneous rate using superposition coding. The instantaneous rate and power allocation are derived in closed form, while the long-term rate of each user is derived in integral form for all channel distributions. Numerical results are then provided to characterize the prospective gains of the proposed scheme.
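The pair selection can be sketched as an exhaustive search with a grid over the power split. The rate expressions assume the usual degraded-broadcast decoding order (the weak user treats the strong user's layer as noise, the strong user cancels the weak user's layer); the closed-form allocation of the paper is replaced here by a hypothetical grid search.

```python
import math
from itertools import combinations

def best_pair(gains, weights, power, steps=100):
    """Sketch of the proposed scheduler: among all user pairs, pick the
    one maximizing the weighted sum rate under two-user superposition
    coding, grid-searching the power-split fraction alpha given to the
    stronger user."""
    best_sel, best_val = None, -1.0
    for i, j in combinations(range(len(gains)), 2):
        strong, weak = (i, j) if gains[i] >= gains[j] else (j, i)
        for s in range(1, steps):
            a = s / steps                      # power share for the strong user
            r_strong = math.log2(1 + a * power * gains[strong])
            r_weak = math.log2(1 + (1 - a) * power * gains[weak]
                               / (a * power * gains[weak] + 1))
            val = weights[strong] * r_strong + weights[weak] * r_weak
            if val > best_val:
                best_sel, best_val = (strong, weak, a), val
    return best_sel, best_val
```

With equal weights and one dominant channel gain, the selected pair always contains the strongest user.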
Bit-Wise Arithmetic Coding For Compression Of Data
Kiely, Aaron
1996-01-01
Bit-wise arithmetic coding is a data-compression scheme intended especially for use with uniformly quantized data from a source with a Gaussian, Laplacian, or similar probability distribution function. Code words are of fixed length, and bits are treated as independent. The scheme serves as a means of progressive transmission or of overcoming the buffer-overflow or rate-constraint limitations that sometimes arise when data compression is used.
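A toy floating-point arithmetic coder for independent bits makes the interval-narrowing idea concrete. Real implementations use integer arithmetic with renormalization; this float sketch is only adequate for short sequences and is an illustration of the general technique, not the scheme of the abstract.

```python
def ac_encode(bits, p1):
    """Toy arithmetic coder for independent bits with P(bit=1) = p1:
    narrow the interval [low, high) once per bit, then emit any value
    inside the final interval (here its midpoint)."""
    low, high = 0.0, 1.0
    for b in bits:
        split = low + (high - low) * (1.0 - p1)   # boundary between 0- and 1-regions
        low, high = (split, high) if b else (low, split)
    return (low + high) / 2.0

def ac_decode(value, p1, n):
    """Invert the encoder: replay the same splits and read off which
    side of each split the encoded value falls on."""
    low, high, out = 0.0, 1.0, []
    for _ in range(n):
        split = low + (high - low) * (1.0 - p1)
        b = 1 if value >= split else 0
        out.append(b)
        low, high = (split, high) if b else (low, split)
    return out
```

A round trip over a short bit string recovers it exactly, since the decoder reproduces the encoder's interval splits bit by bit.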
High-order FDTD methods via derivative matching for Maxwell's equations with material interfaces
International Nuclear Information System (INIS)
Zhao Shan; Wei, G.W.
2004-01-01
This paper introduces a series of novel hierarchical implicit derivative matching methods to restore the accuracy of high-order finite-difference time-domain (FDTD) schemes of computational electromagnetics (CEM) with material interfaces in one (1D) and two spatial dimensions (2D). By making use of fictitious points, systematic approaches are proposed to locally enforce the physical jump conditions at material interfaces in a preprocessing stage, to arbitrarily high orders of accuracy in principle. While often limited by numerical instability, orders up to 16 and 12 are achieved, respectively, in 1D and 2D. Detailed stability analyses are presented for the present approach to examine the upper limit in constructing embedded FDTD methods. As natural generalizations of the high-order FDTD schemes, the proposed derivative matching methods automatically reduce to the standard FDTD schemes when the material interfaces are absent. An interesting feature of the present approach is that it encompasses a variety of schemes of different orders in a single code. Another feature is that it can be robustly implemented with other high-accuracy time-domain approaches, such as the multiresolution time-domain method and the local spectral time-domain method, to cope with material interfaces. Numerical experiments on both 1D and 2D problems are carried out to test the convergence, examine the stability, assess the efficiency, and explore the limitations of the proposed methods. It is found that, operating at their best capacity, the proposed high-order schemes can be over 2000 times more efficient than their fourth-order versions in 2D. In conclusion, the present work indicates that the proposed hierarchical derivative matching methods might lead to practical high-order schemes for the numerical solution of time-domain Maxwell's equations with material interfaces.
Balanced distributed coding of omnidirectional images
Thirumalai, Vijayaraghavan; Tosic, Ivana; Frossard, Pascal
2008-01-01
This paper presents a distributed coding scheme for the representation of 3D scenes captured by stereo omni-directional cameras. We consider a scenario where images captured from two different viewpoints are encoded independently, with a balanced rate distribution among the different cameras. The distributed coding is built on multiresolution representation and partitioning of the visual information in each camera. The encoder transmits one partition after entropy coding, as well as the syndrome bits resulting from the channel encoding of the other partition. The decoder exploits the intra-view correlation and attempts to reconstruct the source image by combination of the entropy-coded partition and the syndrome information. At the same time, it exploits the inter-view correlation using motion estimation between images from different cameras. Experiments demonstrate that the distributed coding solution performs better than a scheme where images are handled independently, and that the coding rate stays balanced between encoders.
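The syndrome mechanism can be illustrated with a (7,4) Hamming code standing in for the paper's channel code: the decoder holds a correlated version of the other partition, differing in at most one bit, and corrects it using only the transmitted syndrome. This is a generic Slepian-Wolf style sketch, not the paper's actual coder.

```python
# Parity-check matrix of the (7,4) Hamming code; column j (1-indexed)
# is the binary representation of j, with row 0 holding the LSB.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(x):
    """3-bit syndrome H x (mod 2) of a 7-bit word."""
    return [sum(h * b for h, b in zip(row, x)) % 2 for row in H]

def decode_with_side_info(side_info, synd):
    """Syndrome decoding with side information: the decoder's correlated
    word differs from the source in at most one bit, and the syndrome
    difference locates that bit directly (0 means no difference)."""
    diff = [(a + b) % 2 for a, b in zip(syndrome(side_info), synd)]
    pos = diff[0] + 2 * diff[1] + 4 * diff[2]
    out = side_info[:]
    if pos:
        out[pos - 1] ^= 1
    return out
```

Only the 3 syndrome bits are transmitted for the 7-bit partition; the decoder's correlated copy supplies the rest, which is the rate saving the distributed scheme exploits.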