WorldWideScience

Sample records for barker-code-based spectrum spreading

  1. Ultrasound strain imaging using Barker code

    Science.gov (United States)

    Peng, Hui; Tie, Juhong; Guo, Dequan

    2017-01-01

    Ultrasound strain imaging is showing promise as a new way of imaging soft tissue elasticity in order to help clinicians detect lesions or cancers in tissues. In this paper, the Barker code is applied to strain imaging to improve its quality. A Barker code, as a coded excitation signal, can be used to improve the echo signal-to-noise ratio (eSNR) in an ultrasound imaging system. For the Barker code of length 13, the sidelobe level of the matched filter output is -22 dB, which is unacceptable for ultrasound strain imaging, because a high sidelobe level causes high decorrelation noise. Instead of using the conventional matched filter, we use the Wiener filter to decode the Barker-coded echo signal to suppress the range sidelobes. We also compare the performance of the Barker code and the conventional short pulse by simulation. The simulation results demonstrate that the performance of the Wiener filter is much better than that of the matched filter, and the Barker code achieves a higher elastographic signal-to-noise ratio (SNRe) than the short pulse at low eSNR or great depth, due to the increased eSNR it provides.
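
    As a rough, hedged illustration of the sidelobe figure quoted above (this is not the authors' code), the following Python sketch correlates a length-13 Barker code with itself and reports the peak-to-sidelobe ratio of the matched-filter output, which comes out near -22 dB.

      import numpy as np

      # Length-13 Barker code (chips of +1/-1)
      barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

      # Matched-filter output = autocorrelation of the code
      mf_out = np.correlate(barker13, barker13, mode="full")

      peak = np.max(np.abs(mf_out))                              # 13 at zero lag
      sidelobe = np.max(np.abs(mf_out[np.abs(mf_out) < peak]))   # 1 elsewhere

      print("peak:", peak, "max sidelobe:", sidelobe)
      print("peak-to-sidelobe ratio: %.1f dB" % (20 * np.log10(sidelobe / peak)))
      # ~ -22.3 dB, matching the level quoted in the abstract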

  2. Spread-spectrum communication using binary spatiotemporal chaotic codes

    International Nuclear Information System (INIS)

    Wang Xingang; Zhan Meng; Gong Xiaofeng; Lai, C.H.; Lai, Y.-C.

    2005-01-01

    We propose a scheme to generate binary code for baseband spread-spectrum communication by using a chain of coupled chaotic maps. We compare the performance of this type of spatiotemporal chaotic code with that of a conventional code used frequently in digital communication, the Gold code, and demonstrate that our code is comparable or even superior to the Gold code in several key aspects: security, bit error rate, code generation speed, and the number of possible code sequences. As the field of communicating with chaos faces doubts in terms of performance comparison with conventional digital communication schemes, our work gives a clear message that communicating with chaos can be advantageous and deserves further attention from the nonlinear science community.
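
    The abstract does not give the paper's exact map or coupling; as a hedged sketch of the general idea only, the snippet below iterates a ring of diffusively coupled logistic maps (an assumed choice) and thresholds the site states to obtain a binary spreading sequence.

      import numpy as np

      def chaotic_binary_code(n_chips, n_maps=16, eps=0.1, r=3.99, seed=42):
          """Generate +/-1 chips from a ring of coupled logistic maps.
          Illustrative coupled-map-lattice scheme, not the authors' exact construction."""
          rng = np.random.default_rng(seed)
          x = rng.uniform(0.1, 0.9, n_maps)           # initial states (act as the 'key')
          f = lambda v: r * v * (1.0 - v)             # local logistic map
          chips = []
          while len(chips) < n_chips:
              fx = f(x)
              # diffusive coupling with left/right neighbours on the ring
              x = (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))
              chips.extend(np.where(x > 0.5, 1, -1))  # threshold each site to a chip
          return np.array(chips[:n_chips])

      code = chaotic_binary_code(127)
      print(code[:16], "mean:", code.mean())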

  3. Short range spread-spectrum radiolocation system and method

    Science.gov (United States)

    Smith, Stephen F.

    2003-04-29

    A short range radiolocation system and associated methods that allow the location of an item, such as equipment, containers, pallets, vehicles, or personnel, within a defined area. A small, battery powered, self-contained tag is provided to an item to be located. The tag includes a spread-spectrum transmitter that transmits a spread-spectrum code and identification information. A plurality of receivers positioned about the area receive signals from a transmitting tag. The position of the tag, and hence the item, is located by triangulation. The system employs three different ranging techniques for providing coarse, intermediate, and fine spatial position resolution. Coarse positioning information is provided by use of direct-sequence code phase transmitted as a spread-spectrum signal. Intermediate positioning information is provided by the use of a difference signal transmitted with the direct-sequence spread-spectrum code. Fine positioning information is provided by use of carrier phase measurements. An algorithm is employed to combine the three data sets to provide accurate location measurements.

  4. Generation and reception of spread-spectrum signals

    Science.gov (United States)

    Moser, R.

    1983-05-01

    The term 'spread-spectrum' implies a technique whereby digitized information is added to a pseudo-random number sequence and the resultant bit stream changes some parameter of the carrier frequency in discrete increments. The discrete modulation of the carrier frequency is usually realized either as a multiple-level phase-shift-keyed or frequency-shift-keyed signal. The resultant PSK-modulated frequency spectrum is referred to as direct-sequence spread-spectrum, whereas the FSK-modulated carrier frequency is referred to as frequency-hopped spread-spectrum. These can be considered the major subsets of the more general term 'spread-spectrum'. In discussing signal reception, it is pointed out that active correlation methods are used for channel synchronization when the pseudo-random sequences are long or when the processing gain is large, whereas passive methods may be used either for short pseudo-random noise codes or to assist in attaining initial synchronization in long-sequence spread-spectrum systems.
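
    To make the direct-sequence case concrete, here is a minimal generic baseband sketch (not taken from this article): each data bit is XORed with a faster pseudo-random chip sequence and mapped to BPSK, which spreads the spectrum; the receiver despreads by repeating the multiplication and integrating over each bit.

      import numpy as np

      rng = np.random.default_rng(0)
      bits = rng.integers(0, 2, 8)                  # data bits
      chips_per_bit = 31
      pn = rng.integers(0, 2, chips_per_bit)        # pseudo-random chip sequence

      # Spread: XOR every bit with the PN sequence, then map {0,1} -> {+1,-1} (BPSK)
      spread = np.array([b ^ pn for b in bits]).ravel()
      tx = 1 - 2 * spread.astype(float)

      # Despread: multiply by the BPSK-mapped PN and integrate over each bit period
      pn_bpsk = 1 - 2 * pn.astype(float)
      rx = tx.reshape(-1, chips_per_bit) * pn_bpsk
      decisions = (rx.sum(axis=1) < 0).astype(int)
      print("recovered bits match:", np.array_equal(decisions, bits))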

  5. Spread spectrum image steganography.

    Science.gov (United States)

    Marvel, L M; Boncelet, C R; Retter, C T

    1999-01-01

    In this paper, we present a new method of digital steganography, entitled spread spectrum image steganography (SSIS). Steganography, which means "covered writing" in Greek, is the science of communicating in a hidden manner. Following a discussion of steganographic communication theory and review of existing techniques, the new method, SSIS, is introduced. This system hides and recovers a message of substantial length within digital imagery while maintaining the original image size and dynamic range. The hidden message can be recovered using appropriate keys without any knowledge of the original image. Image restoration, error-control coding, and techniques similar to spread spectrum are described, and the performance of the system is illustrated. A message embedded by this method can be in the form of text, imagery, or any other digital signal. Applications for such a data-hiding scheme include in-band captioning, covert communication, image tamperproofing, authentication, embedded control, and revision tracking.
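
    SSIS itself combines spread spectrum with image restoration and error-control coding; the toy sketch below (an assumption-laden illustration, not the authors' system) shows only the spread-spectrum core: a keyed pseudo-noise pattern, sign-modulated by the message bit, is added to the image, and detection correlates the stego image against the same keyed pattern.

      import numpy as np

      def embed_bit(image, bit, key, alpha=5.0):
          """Add a keyed pseudo-noise pattern, sign-modulated by the message bit.
          Toy values: real SSIS uses a small embedding strength plus image restoration."""
          rng = np.random.default_rng(key)
          pn = rng.choice([-1.0, 1.0], size=image.shape)
          return image + alpha * (1.0 if bit else -1.0) * pn

      def detect_bit(stego, key):
          """Correlate the mean-removed stego image with the same keyed pattern."""
          rng = np.random.default_rng(key)
          pn = rng.choice([-1.0, 1.0], size=stego.shape)
          return int(np.sum((stego - stego.mean()) * pn) > 0)

      cover = np.random.default_rng(1).uniform(0, 255, (64, 64))   # stand-in image
      stego = embed_bit(cover, bit=1, key=1234)
      print("recovered bit:", detect_bit(stego, key=1234))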

  6. Impact of Different Spreading Codes Using FEC on DWT Based MC-CDMA System

    OpenAIRE

    Masum, Saleh; Kabir, M. Hasnat; Islam, Md. Matiqul; Shams, Rifat Ara; Ullah, Shaikh Enayet

    2012-01-01

    The effect of different spreading codes in a DWT-based MC-CDMA wireless communication system is investigated. In this paper, we present the Bit Error Rate (BER) performance of different spreading codes (Walsh-Hadamard code, orthogonal Gold code and Golay complementary sequences) with Forward Error Correction (FEC) in the proposed system. The data are analyzed and compared among the different spreading codes in both coded and uncoded cases. It is found via computer simulation that the performance...
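
    The abstract is truncated in this listing; purely as background on the first code family it names (and not code from the paper), the sketch below builds Walsh-Hadamard spreading codes by the Sylvester construction and checks their mutual orthogonality.

      import numpy as np

      def walsh_hadamard(n):
          """Sylvester construction of an n x n Hadamard matrix (n a power of two);
          its rows serve as Walsh-Hadamard spreading codes."""
          H = np.array([[1.0]])
          while H.shape[0] < n:
              H = np.block([[H, H], [H, -H]])
          return H

      H = walsh_hadamard(8)
      # Distinct rows are mutually orthogonal, which is what makes them usable as
      # spreading codes for synchronous users in MC-CDMA.
      print(np.allclose(H @ H.T, 8 * np.eye(8)))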

  7. Spread spectrum mobile communication experiment using ETS-V satellite

    Science.gov (United States)

    Ikegami, Tetsushi; Suzuki, Ryutaro; Kadowaki, Naoto; Taira, Shinichi; Sato, Nobuyasu

    1990-01-01

    The spread spectrum technique is attractive for application to mobile satellite communications because of its random access capability, immunity to inter-system interference, and robustness to overloading. A novel direct-sequence spread-spectrum communication equipment is developed for land mobile satellite applications. The equipment is based on a matched filter technique to improve the initial acquisition performance. The data rate is 2.4 kbit/s and the PN clock rate is 2.4552 MHz. The equipment can also measure the multipath delay profile of the land mobile satellite channel, making use of the correlation properties of the PN code. This paper gives an outline of the equipment and the field test results with the ETS-V satellite.

  8. Semi-Blind Error Resilient SLM for PAPR Reduction in OFDM Using Spread Spectrum Codes

    Science.gov (United States)

    Elhelw, Amr M.; Badran, Ehab F.

    2015-01-01

    High peak to average power ratio (PAPR) is one of the major problems of OFDM systems. Selected mapping (SLM) is a promising choice that can elegantly tackle this problem. Nevertheless, a side information (SI) index is required to be transmitted, which reduces the overall throughput. This paper proposes a semi-blind error resilient SLM system that utilizes spread spectrum codes for embedding the SI index in the transmitted symbols. The codes are embedded in an innovative manner which does not increase the average energy per symbol. The use of such codes allows the correction of probable errors in the SI index detection. A new receiver, which does not require perfect channel state information (CSI) for the detection of the SI index and has relatively low computational complexity, is proposed. Simulation results show that the proposed system performs well both in terms of SI index detection error and bit error rate. PMID:26018504
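
    The paper's contribution is the SI-index embedding and its detection, which are not reproduced here; as a hedged sketch of the baseline SLM step it builds on (random phase sequences and QPSK subcarriers are assumed), the snippet below generates one OFDM symbol, applies several candidate phase sequences, and keeps the lowest-PAPR candidate.

      import numpy as np

      rng = np.random.default_rng(7)
      N = 64                                            # number of subcarriers
      qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

      def papr_db(x):
          return 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))

      plain = np.fft.ifft(qpsk)                         # plain OFDM symbol

      # SLM: try U random phase sequences and transmit the lowest-PAPR candidate;
      # the index of the chosen sequence is the side information (SI) the paper embeds.
      U = 8
      phases = np.exp(1j * 2 * np.pi * rng.random((U, N)))
      candidates = np.fft.ifft(qpsk * phases, axis=1)
      best = int(np.argmin([papr_db(c) for c in candidates]))

      print("PAPR plain: %.2f dB" % papr_db(plain))
      print("PAPR SLM  : %.2f dB (SI index = %d)" % (papr_db(candidates[best]), best))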

  9. Semi-Blind Error Resilient SLM for PAPR Reduction in OFDM Using Spread Spectrum Codes.

    Directory of Open Access Journals (Sweden)

    Amr M Elhelw

    Full Text Available High peak to average power ratio (PAPR) is one of the major problems of OFDM systems. Selected mapping (SLM) is a promising choice that can elegantly tackle this problem. Nevertheless, a side information (SI) index is required to be transmitted, which reduces the overall throughput. This paper proposes a semi-blind error resilient SLM system that utilizes spread spectrum codes for embedding the SI index in the transmitted symbols. The codes are embedded in an innovative manner which does not increase the average energy per symbol. The use of such codes allows the correction of probable errors in the SI index detection. A new receiver, which does not require perfect channel state information (CSI) for the detection of the SI index and has relatively low computational complexity, is proposed. Simulation results show that the proposed system performs well both in terms of SI index detection error and bit error rate.

  10. Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications

    Energy Technology Data Exchange (ETDEWEB)

    Sibbett, Taylor; Moradi, Hussein; Farhang-Boroujeny, Behrouz

    2016-12-01

    This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
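
    The abstract does not spell the metric out; under the assumption that "consistency of the maximum location" means the per-period argmax of the matched-filter output staying put, the sketch below illustrates such a rule: detection is declared only when the peak position repeats over several symbol periods, with no SNR-dependent threshold involved.

      import numpy as np

      def acquire_by_max_consistency(mf_mag, period, n_periods=4, tol=1):
          """Declare detection if the argmax position within each period agrees,
          to within +/-tol samples, over n_periods consecutive periods.
          Illustrative only; the published method may differ in detail."""
          mags = mf_mag[:period * n_periods].reshape(n_periods, period)
          peaks = mags.argmax(axis=1)
          detected = (peaks.max() - peaks.min()) <= tol
          return detected, int(np.round(peaks.mean())) if detected else None

      # Toy example: a correlation peak repeating every 100 samples at offset 37
      rng = np.random.default_rng(0)
      period, offset = 100, 37
      y = np.abs(rng.normal(0, 1, 400))
      y[offset::period] += 6.0                          # peaks standing above the noise
      print(acquire_by_max_consistency(y, period))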

  11. Resource Allocation with Adaptive Spread Spectrum OFDM Using 2D Spreading for Power Line Communications

    Science.gov (United States)

    Baudais, Jean-Yves; Crussière, Matthieu

    2007-12-01

    Bit-loading techniques based on orthogonal frequency division multiplexing (OFDM) are frequently used over wireline channels. In the power line context, channel state information can reasonably be obtained at both transmitter and receiver sides, and adaptive loading can advantageously be carried out. In this paper, we propose to apply loading principles to a spread spectrum OFDM (SS-OFDM) waveform, which is a multicarrier system using 2D spreading in the time and frequency domains. The presented algorithm handles the assignment of subcarriers, spreading codes, bits and energies in order to maximize the data rate and the range of the communication system. The optimization is realized at a target symbol error rate and under the usually imposed spectral mask constraint. The analytical study shows that the merging principle realized by the spreading code improves the rate and the range of the discrete multitone (DMT) system in single and multiuser contexts. Simulations have been run over measured power line communication (PLC) channel responses and highlight that the proposed system becomes all the more interesting as the received signal-to-noise ratio (SNR) decreases.

  12. Improving performance of DS-CDMA systems using chaotic complex Bernoulli spreading codes

    Science.gov (United States)

    Farzan Sabahi, Mohammad; Dehghanfard, Ali

    2014-12-01

    The most important goal of a spread-spectrum communication system is to protect communication signals against interference and against exploitation of information by unintended listeners. In fact, low probability of detection and low probability of intercept are two important parameters for increasing the performance of the system. In Direct Sequence Code Division Multiple Access (DS-CDMA) systems, these properties are achieved by multiplying the data with spreading sequences. Chaotic sequences, with their particular properties, have numerous applications in constructing spreading codes. The use of a one-dimensional Bernoulli chaotic sequence as a spreading code has been proposed previously in the literature. The main feature of this sequence is its negative auto-correlation at lag 1, which, with proper design, increases the efficiency of communication systems based on these codes. On the other hand, employing complex chaotic sequences as spreading sequences has also been discussed in several papers. In this paper, the use of two-dimensional Bernoulli chaotic sequences is proposed as spreading codes. The performance of a multi-user synchronous and asynchronous DS-CDMA system is evaluated by applying these sequences under Additive White Gaussian Noise (AWGN) and fading channels. Simulation results indicate an improvement of the performance in comparison with conventional spreading codes such as Gold codes, as well as with similar complex chaotic spreading sequences. Similar to one-dimensional Bernoulli chaotic sequences, the proposed sequences also have negative auto-correlation. Besides, construction of complex sequences with lower average cross-correlation is possible with the proposed method.
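
    The paper's two-dimensional complex Bernoulli construction is not reproduced here; as a minimal sketch of the one-dimensional chaotic-chip idea it builds on, the code below derives +/-1 chips from the logistic map (conjugate to the tent map, whose binary itinerary behaves like a Bernoulli shift) and measures their correlations; this map is used instead of iterating x -> 2x mod 1 directly, which collapses to zero in floating point.

      import numpy as np

      def chaotic_chips(n, x0=0.12345):
          """+/-1 chips from the logistic map x -> 4x(1-x), thresholded at 0.5.
          Illustrative stand-in for a Bernoulli-type chaotic chip generator; the
          paper's tuned map is designed to give negative lag-1 autocorrelation,
          whereas this generic version gives near-zero correlations."""
          x = x0
          chips = np.empty(n)
          for i in range(n):
              x = 4.0 * x * (1.0 - x)
              chips[i] = 1.0 if x >= 0.5 else -1.0
          return chips

      a = chaotic_chips(1023)
      b = chaotic_chips(1023, x0=0.54321)              # a different seed = another user
      print("lag-1 autocorrelation: %.3f" % np.mean(a[:-1] * a[1:]))
      print("cross-correlation    : %.3f" % np.mean(a * b))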

  13. Resource Allocation with Adaptive Spread Spectrum OFDM Using 2D Spreading for Power Line Communications

    Directory of Open Access Journals (Sweden)

    Baudais Jean-Yves

    2007-01-01

    Full Text Available Bit-loading techniques based on orthogonal frequency division multiplexing (OFDM) are frequently used over wireline channels. In the power line context, channel state information can reasonably be obtained at both transmitter and receiver sides, and adaptive loading can advantageously be carried out. In this paper, we propose to apply loading principles to a spread spectrum OFDM (SS-OFDM) waveform, which is a multicarrier system using 2D spreading in the time and frequency domains. The presented algorithm handles the assignment of subcarriers, spreading codes, bits and energies in order to maximize the data rate and the range of the communication system. The optimization is realized at a target symbol error rate and under the usually imposed spectral mask constraint. The analytical study shows that the merging principle realized by the spreading code improves the rate and the range of the discrete multitone (DMT) system in single and multiuser contexts. Simulations have been run over measured power line communication (PLC) channel responses and highlight that the proposed system becomes all the more interesting as the received signal-to-noise ratio (SNR) decreases.

  14. A neutron spectrum unfolding code based on iterative procedures

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Vega C, H. R.

    2012-10-01

    In this work, version 3.0 of the neutron spectrum unfolding code called Neutron Spectrometry and Dosimetry from Universidad Autonoma de Zacatecas (NSDUAZ) is presented. This code was designed in a graphical interface under the LabVIEW programming environment and is based on the SPUNIT iterative algorithm, using as input data only the count rates obtained with 7 Bonner spheres based on a 6LiI(Eu) neutron detector. The main features of the code are: it is intuitive and friendly to the user, and it has a programming routine which automatically selects the initial guess spectrum by using a set of neutron spectra compiled by the International Atomic Energy Agency. Besides the neutron spectrum, this code calculates the total flux, the mean energy, H(10), h(10), 15 dosimetric quantities for radiation protection purposes and 7 survey meter responses, in four energy grids, based on the International Atomic Energy Agency compilation. This code generates a full report in HTML format with all relevant information. In this work, the neutron spectrum of a 241AmBe neutron source in air, located 150 cm from the detector, is unfolded. (Author)
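
    SPUNIT's exact update is not given in the abstract; as a generic, hedged sketch of the multiplicative iterative unfolding family it belongs to (a SAND-II-style correction, with an invented toy response matrix), the code below adjusts a guess spectrum until the folded readings approach the measured Bonner-sphere count rates.

      import numpy as np

      def iterative_unfold(R, counts, guess, n_iter=200):
          """Generic multiplicative iterative unfolding (illustrative, not
          necessarily SPUNIT's exact algorithm).
          R      : response matrix, shape (n_detectors, n_energy_bins)
          counts : measured count rates, shape (n_detectors,)
          guess  : initial guess spectrum, shape (n_energy_bins,)"""
          phi = guess.astype(float).copy()
          for _ in range(n_iter):
              calc = R @ phi                            # folded (predicted) readings
              # response-weighted measured/calculated ratio for each energy bin
              corr = (R * (counts / calc)[:, None]).sum(axis=0) / R.sum(axis=0)
              phi *= corr
          return phi

      # Toy 3-sphere / 5-bin problem with a known "true" spectrum
      rng = np.random.default_rng(3)
      R = rng.uniform(0.1, 1.0, (3, 5))
      true_phi = np.array([1.0, 3.0, 5.0, 2.0, 0.5])
      counts = R @ true_phi
      phi = iterative_unfold(R, counts, guess=np.ones(5))
      print("max folded residual:", np.abs(R @ phi - counts).max())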

  15. Hybrid spread spectrum radio system

    Science.gov (United States)

    Smith, Stephen F. [London, TN]; Dress, William B. [Camas, WA]

    2010-02-09

    Systems and methods are described for hybrid spread spectrum radio systems. A method includes receiving a hybrid spread spectrum signal and both fast-frequency-hopping demodulating and direct-sequence demodulating the direct sequence spread spectrum signal, wherein multiple frequency hops occur within a single data-bit time and each bit is represented by chip transmissions at multiple frequencies.
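
    As a hedged illustration of the "multiple hops within one data-bit time" idea (a generic baseband sketch with arbitrary parameter values, not the patented implementation), the code below spreads each bit with a direct-sequence chip pattern and sends successive groups of chips on different hop frequencies.

      import numpy as np

      rng = np.random.default_rng(2)
      fs = 1.0e6                                   # sample rate in Hz (illustrative)
      chip_rate = 50e3
      samples_per_chip = int(fs / chip_rate)
      chips_per_bit, hops_per_bit = 16, 4          # fast hopping: 4 hops per data bit
      chips_per_hop = chips_per_bit // hops_per_bit
      hop_freqs = np.array([100e3, 150e3, 200e3, 250e3])   # example hop set

      bits = rng.integers(0, 2, 3)
      pn = 1 - 2 * rng.integers(0, 2, chips_per_bit).astype(float)

      signal = []
      for b in bits:
          symbol = (1 - 2 * b) * pn                # direct-sequence spreading of one bit
          for h in range(hops_per_bit):            # several hops within this single bit
              f = hop_freqs[h % len(hop_freqs)]
              chunk = symbol[h * chips_per_hop:(h + 1) * chips_per_hop]
              seg = np.repeat(chunk, samples_per_chip)
              t = np.arange(seg.size) / fs
              signal.append(seg * np.cos(2 * np.pi * f * t))   # BPSK chips on this hop
      signal = np.concatenate(signal)
      print("samples generated:", signal.size)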

  16. Simulation and Comparison Between Slow and Fast FH/BPSK Spread Spectrum Using Matlab

    Directory of Open Access Journals (Sweden)

    Sanaa Said Kadhim

    2018-02-01

    Full Text Available This paper investigates the properties and applications of Frequency Hopping Spread Spectrum (FHSS). FHSS is a radio communication technique in which the sender transmits the data on a radio channel whose transmission frequency changes according to a predetermined code sequence. FHSS has many advantages over traditional modulation methods: it can overcome fading, multipath channels and interference, and interception becomes difficult. This security feature makes FHSS preferable for military applications. At the receiver side, the signal is demodulated by a carrier whose frequency changes according to the same code sequence used by the sender. This paper presents two types of FHSS, slow and fast. The simulation procedures of both types were implemented and applied to a Frequency Hopping/Binary Phase Shift Keying (FH/BPSK) spread spectrum system using MATLAB. The simulations for fast and slow frequency hopping use the same number and frequencies of spreading carriers, and both use traditional BPSK modulation. The comparison results, based on the power spectral density, show that fast frequency hopping is more resistant to noise than slow frequency hopping.

  17. David Barker: the revolution that anticipates existence

    Directory of Open Access Journals (Sweden)

    Italo Farnetani

    2014-01-01

    Full Text Available David Barker is the man who "anticipated" the existence of babies by focusing attention on the importance of the fetus and what takes place during intrauterine life. Barker was one of the physicians who in the last decades brought about the greatest changes in medicine, changes so important as to represent a veritable revolution in medical thought. According to Barker's studies, the embryo obviously has a genetic complement coming from the mother and father, but from the very first stages of development it begins to undergo the influence of the outside environment, just as occurs for adults, whose biological, psychological and pathological aspects are influenced by the environment in a not yet well-established proportion between genetic complement and epigenetics. Much of our future lives as adults is decided in our mothers' wombs. If Barker's discovery was revolutionary from the cultural standpoint, it was even more so from the strictly medical one. Barker's research method was rigorous from the methodological standpoint, but innovative and speculative in its working hypotheses, with a humanistic slant. Barker's idea has another practical corollary: it is evident that the role of obstetricians, perinatologists and neonatologists is more and more relevant in medicine and future prevention. Unquestionably, besides the enormous merits of his clinical research, among the benefits that Barker has contributed is that of having helped us to see things from new points of view. Not only is the neonate (and even more so the fetus) not an adult of reduced proportions, but perhaps the neonate is the "father" of the adult person.

  18. A Chaos-Based Secure Direct-Sequence/Spread-Spectrum Communication System

    Directory of Open Access Journals (Sweden)

    Nguyen Xuan Quyen

    2013-01-01

    Full Text Available This paper proposes a chaos-based secure direct-sequence/spread-spectrum (DS/SS) communication system which is based on a novel combination of the conventional DS/SS and chaos techniques. In the proposed system, bit duration is varied according to a chaotic behavior but is always equal to a multiple of the fixed chip duration in the communication process. Data bits with variable duration are spectrum-spread by multiplying directly with a pseudonoise (PN) sequence and then modulated onto a sinusoidal carrier by means of binary phase-shift keying (BPSK). To recover exactly the data bits, the receiver needs an identical regeneration of not only the PN sequence but also the chaotic behavior, and hence data security is improved significantly. Structure and operation of the proposed system are analyzed in detail. Theoretical evaluation of bit-error rate (BER) performance in the presence of additive white Gaussian noise (AWGN) is provided. Parameter choice for different cases of simulation is also considered. Simulation and theoretical results are shown to verify the reliability and feasibility of the proposed system. Security of the proposed system is also discussed.

  19. Wavelength-Hopping Time-Spreading Optical CDMA With Bipolar Codes

    Science.gov (United States)

    Kwong, Wing C.; Yang, Guu-Chang; Chang, Cheng-Yuan

    2005-01-01

    Two-dimensional wavelength-hopping time-spreading coding schemes have been studied recently for supporting greater numbers of subscribers and simultaneous users than conventional one-dimensional approaches in optical code-division multiple-access (OCDMA) systems. To further improve both numbers without sacrificing performance, a new code design utilizing bipolar codes for both wavelength hopping and time spreading is studied and analyzed in this paper. A rapidly programmable, integratable hardware design for this new coding scheme, based on arrayed-waveguide gratings, is also discussed.

  20. A neutron spectrum unfolding computer code based on artificial neural networks

    International Nuclear Information System (INIS)

    Ortiz-Rodríguez, J.M.; Reyes Alfaro, A.; Reyes Haro, A.; Cervantes Viramontes, J.M.; Vega-Carrillo, H.R.

    2014-01-01

    The Bonner Spheres Spectrometer consists of a thermal neutron sensor placed at the center of a number of moderating polyethylene spheres of different diameters. From the measured readings, information can be derived about the spectrum of the neutron field where the measurements were made. Disadvantages of the Bonner system are the weight associated with each sphere and the need to sequentially irradiate the spheres, requiring long exposure periods. Provided a well-established response matrix and adequate irradiation conditions, the most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Intelligence, mainly Artificial Neural Networks, have been widely investigated. In this work, a neutron spectrum unfolding code based on neural net technology is presented. This code, called the Neutron Spectrometry and Dosimetry with Artificial Neural networks unfolding code, was designed in a graphical interface. The core of the code is an embedded neural network architecture previously optimized using the robust design of artificial neural networks methodology. The main features of the code are that it is easy to use and friendly and intuitive to the user. This code was designed for a Bonner Sphere System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. The main feature of the code is that, as input data for unfolding the neutron spectrum, only seven count rates measured with seven Bonner spheres are required; simultaneously the code calculates 15 dosimetric quantities as well as the total flux for radiation protection purposes. This code generates a full report with all information of the unfolding

  1. Multicarrier Spread Spectrum Modulation Schemes and Efficient FFT Algorithms for Cognitive Radio Systems

    Directory of Open Access Journals (Sweden)

    Mohandass Sundararajan

    2014-07-01

    Full Text Available Spread spectrum (SS) and multicarrier modulation (MCM) techniques are recognized as potential candidates for the design of underlay and interweave cognitive radio (CR) systems, respectively. Direct Sequence Code Division Multiple Access (DS-CDMA) is a spread spectrum technique generally used in underlay CR systems. Orthogonal Frequency Division Multiplexing (OFDM) is the basic MCM technique, primarily used in interweave CR systems. There are other MCM schemes derived from the OFDM technique, like Non-Contiguous OFDM, Spread OFDM, and OFDM-OQAM, which are more suitable for CR systems. Multicarrier Spread Spectrum Modulation (MCSSM) schemes like MC-CDMA, MC-DS-CDMA and SS-MC-CDMA combine DS-CDMA and OFDM techniques in order to improve the CR system performance and adaptability. This article gives a detailed survey of the various spread spectrum and multicarrier modulation schemes proposed in the literature. The Fast Fourier Transform (FFT) plays a vital role in all the multicarrier modulation techniques. The FFT part of the modem can be used for spectrum sensing. The performance of the FFT operator plays a crucial role in the overall performance of the system. Since the cognitive radio is an adaptive system, the FFT operator must also be adaptive for various input/output values, in order to save energy and execution time. This article also includes the various efficient FFT algorithms proposed in the literature which are suitable for CR systems.

  2. Coding-Spreading Tradeoff in CDMA Systems

    National Research Council Canada - National Science Library

    Bolas, Eduardo

    2002-01-01

    .... Comparing different combinations of coding and spreading with a traditional DS-CDMA, as defined in the IS-95 standard, allows the criteria to be defined for the best coding-spreading tradeoff in CDMA systems...

  3. A neutron spectrum unfolding computer code based on artificial neural networks

    Science.gov (United States)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2014-02-01

    The Bonner Spheres Spectrometer consists of a thermal neutron sensor placed at the center of a number of moderating polyethylene spheres of different diameters. From the measured readings, information can be derived about the spectrum of the neutron field where the measurements were made. Disadvantages of the Bonner system are the weight associated with each sphere and the need to sequentially irradiate the spheres, requiring long exposure periods. Provided a well-established response matrix and adequate irradiation conditions, the most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Intelligence, mainly Artificial Neural Networks, have been widely investigated. In this work, a neutron spectrum unfolding code based on neural net technology is presented. This code, called the Neutron Spectrometry and Dosimetry with Artificial Neural networks unfolding code, was designed in a graphical interface. The core of the code is an embedded neural network architecture previously optimized using the robust design of artificial neural networks methodology. The main features of the code are that it is easy to use and friendly and intuitive to the user. This code was designed for a Bonner Sphere System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. The main feature of the code is that, as input data for unfolding the neutron spectrum, only seven count rates measured with seven Bonner spheres are required; simultaneously the code calculates 15 dosimetric quantities as well as the total flux for radiation protection purposes. This code generates a full report with all information of the unfolding in

  4. Design of Spreading-Codes-Assisted Active Imaging System

    Directory of Open Access Journals (Sweden)

    Alexey Volkov

    2015-07-01

    Full Text Available This work discusses an innovative approach to imaging which can improve the robustness of existing active-range measurement methods and potentially enhance their use in a variety of outdoor applications. By merging a proven modulation technique from the domain of spread-spectrum communications with the bleeding-edge CMOS sensor technology, the prototype of the modulated range sensor is designed and evaluated. A suitable set of application-specific spreading codes is proposed, evaluated and tested on the prototype. Experimental results show that the introduced modulation technique significantly reduces the impacts of environmental factors such as sunlight and external light sources, as well as mutual interference of identical devices. The proposed approach can be considered as a promising basis for a new generation of robust and cost-efficient range-sensing solutions for automotive applications, autonomous vehicles or robots.

  5. Rotated Walsh-Hadamard Spreading with Robust Channel Estimation for a Coded MC-CDMA System

    Directory of Open Access Journals (Sweden)

    Raulefs Ronald

    2004-01-01

    Full Text Available We investigate rotated Walsh-Hadamard spreading matrices for a broadband MC-CDMA system with robust channel estimation in the synchronous downlink. The similarities between rotated spreading and signal space diversity are outlined. In a multiuser MC-CDMA system, possible performance improvements are based on the chosen detector, the channel code, and its Hamming distance. By applying rotated spreading in comparison to a standard Walsh-Hadamard spreading code, a higher throughput can be achieved. As combining the channel code and the spreading code forms a concatenated code, the overall minimum Hamming distance of the concatenated code increases. This asymptotically results in an improvement of the bit error rate for high signal-to-noise ratio. Higher convolutional channel code rates are mostly generated by puncturing good low-rate channel codes. The overall Hamming distance decreases significantly for the punctured channel codes. Higher channel code rates are favorable for MC-CDMA, as MC-CDMA utilizes diversity more efficiently compared to pure OFDMA. The application of rotated spreading in an MC-CDMA system allows diversity to be exploited even further. We demonstrate that the rotated spreading gain is still present for a robust pilot-aided channel estimator. In a well-designed system, rotated spreading extends the performance by about 1 dB, using a maximum likelihood detector with robust channel estimation at the receiver.

  6. Design, Implementation, and Evaluation of a Hybrid DS/FFH Spread-Spectrum Radio Transceiver

    Energy Technology Data Exchange (ETDEWEB)

    Olama, Mohammed M. [ORNL]; Killough, Stephen M. [ORNL]; Kuruganti, Teja [ORNL]; Carroll, Thomas E. [Pacific Northwest National Laboratory (PNNL)]

    2014-01-01

    In recent years there has been great interest in using hybrid spread-spectrum (HSS) techniques for commercial applications, particularly in the Smart Grid, in addition to their inherent uses in military communications. This is because HSS can accommodate high data rates with high link integrity, even in the presence of significant multipath effects and interfering signals. A highly useful form of this transmission technique for many types of command, control, and sensing applications is the specific code-related combination of standard direct-sequence modulation with "fast" frequency-hopping, denoted hybrid DS/FFH, wherein multiple frequency hops occur within a single data-bit time. In this paper, we present the efforts carried out at Oak Ridge National Laboratory toward exploring the design, implementation, and evaluation of a hybrid DS/FFH spread-spectrum radio transceiver using a single Field Programmable Gate Array (FPGA). The FPGA allows the various subsystems to quickly communicate with each other and thereby maintain tight synchronization. We also investigate the robustness of various hopping sequences to interference and jamming. Experimental results are presented that show the receiver sensitivity, radio data-rate/bit-error evaluations, and jamming and interference rejection capabilities of the implemented hybrid DS/FFH spread-spectrum system under widely varying design parameters.

  7. Spread Spectrum Techniques and their Applications to Wireless Communications

    DEFF Research Database (Denmark)

    Prasad, Ramjee; Cianca, E.

    2005-01-01

    Spread Spectrum (SS) radio communications is on the verge of potentially explosive commercial development. An SS-based multiple access scheme, such as CDMA, has been chosen for 3G wireless communications. Other current applications of SS techniques are in Wireless LANs and Satellite Navigation Systems...

  8. Spread Spectrum Receiver Electromagnetic Interference (EMI) Test Guide

    Science.gov (United States)

    Wheeler, M. L.

    1998-01-01

    The objective of this test guide is to document appropriate unit level test methods and techniques for the performance of EMI testing of Direct Sequence (DS) spread spectrum receivers. Consideration of EMI test methods tailored for spread spectrum receivers utilizing frequency-spreading techniques other than direct sequence (such as frequency hopping, frequency chirping, and various hybrid methods) is beyond the scope of this test guide development program and is not addressed as part of this document. EMI test requirements for NASA programs are primarily developed based on the requirements contained in MIL-STD-461D (or earlier revisions of MIL-STD-461). The corresponding test method guidelines for the MIL-STD-461D tests are provided in MIL-STD-462D. These test methods are well documented with the exception of the receiver antenna port susceptibility tests (intermodulation, cross modulation, and rejection of undesired signals), which must be tailored to the specific type of receiver being tested. Thus, the test methods addressed in this guide consist only of antenna port tests designed to evaluate receiver susceptibility characteristics. MIL-STD-462D should be referred to for guidance pertaining to test methods for EMI tests other than the antenna port tests. The scope of this test guide includes: (1) a discussion of generic DS receiver performance characteristics; (2) a summary of S-band TDRSS receiver operation; (3) a discussion of DS receiver EMI susceptibility mechanisms and characteristics; (4) a summary of military standard test guidelines; (5) recommended test approach and methods; and (6) general conclusions and recommendations for future studies in the area of spread spectrum receiver testing.

  9. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    International Nuclear Information System (INIS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-01-01

    In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meter responses. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), is a code designed using neural net technology. The artificial intelligence approach of the neural net does not solve mathematical equations. By using the knowledge stored in the synaptic weights of a properly trained neural net, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, needing as input data only the count rates measured with a Bonner spheres system. Similarities of the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed in a graphical interface under the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. Differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum in order to initiate the iterative procedure. In NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meter responses using the fluence-dose conversion coefficients. The NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in neural

  10. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    Science.gov (United States)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-07-01

    In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meter responses. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), is a code designed using neural net technology. The artificial intelligence approach of the neural net does not solve mathematical equations. By using the knowledge stored in the synaptic weights of a properly trained neural net, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, needing as input data only the count rates measured with a Bonner spheres system. Similarities of the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed in a graphical interface under the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. Differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum in order to initiate the iterative procedure. In NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meter responses using the fluence-dose conversion coefficients. The NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in neural

  11. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solis Sanches, L. O.; Miranda, R. Castaneda; Cervantes Viramontes, J. M. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica. Av. Ramon Lopez Velarde 801. Col. Centro Zacatecas, Zac (Mexico); Vega-Carrillo, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica. Av. Ramon Lopez Velarde 801. Col. Centro Zacatecas, Zac., Mexico. and Unidad Academica de Estudios Nucleares. C. Cip (Mexico)

    2013-07-03

    In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meter responses. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), is a code designed using neural net technology. The artificial intelligence approach of the neural net does not solve mathematical equations. By using the knowledge stored in the synaptic weights of a properly trained neural net, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, needing as input data only the count rates measured with a Bonner spheres system. Similarities of the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed in a graphical interface under the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. Differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum in order to initiate the iterative procedure. In NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meter responses using the fluence-dose conversion coefficients. The NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in

  12. The Eco-Behavioral Approach to Surveys and Social Accounts for Rural Communities: Exploratory Analyses and Interpretations of Roger G. Barker's Microdata from the Behavior Setting Survey of Midwest, Kansas in 1963-64.

    Science.gov (United States)

    Fox, Karl A.

    The concept of behavior settings--the environments shaping individual behavior--was originated by Roger Barker in 1950 in connection with his community surveys in a small Kansas town, code-named Midwest. This book seeks to provide rural social scientists with an understanding of Barker's eco-behavioral approach and proposed adaptations of it to…

  13. Assessment of THEMA code against spreading experiments

    International Nuclear Information System (INIS)

    Spindler, B.; Veteau, J.M.; Cecco, L. de; Montanelli, P.; Pineau, D.

    2000-01-01

    In the framework of severe accident research, the spreading code THEMA, developed at CEA/DRN, aims at predicting the spreading extent of molten core material after a vessel melt-through. The code solves fluid balance equations integrated over the fluid depth for oxidic and/or metallic phases under the shallow water assumption, using a finite difference scheme. Solidification is taken into account through crust formation on the substrate and at contact with the surroundings, as well as through the increase of fluid viscosity with the solid fraction in the melt. A separate energy equation is solved for the solid substrate, including possible ablation. The assessment of the THEMA code against the spreading experiments performed in the framework of the corium spreading and coolability project of the European Union is presented. These experiments use either simulant materials at medium (RIT) or high temperature (KATS), or corium (VULCANO, FARO), conducted at different mass flow rates and with large or small solidification intervals. THEMA appears to be able to simulate the whole set of experiments investigated. Comparisons between experimental and computed spreading lengths and substrate temperatures are quite satisfactory. The results show a rather large sensitivity to mass flow rate and inlet temperature, indicating that, generally, efforts should be made to improve the accuracy of the measurements of such parameters in the experiments. (orig.)

  14. The choice: Lewellys F. Barker and the full-time plan.

    Science.gov (United States)

    Bryan, Charles S; Stinson, M Shawn

    2002-09-17

    In 1914, Lewellys F. Barker, William Osler's successor as Professor of Medicine and physician-in-chief at Johns Hopkins University School of Medicine, resigned to enter private practice rather than accept the terms of a full-time plan, whereby professors in clinical departments would be salaried like other professors in the university. Barker had been an early proponent of the full-time plan. His decision reflected not only a personal desire for a larger income but also contradictions inherent in the Flexnerian ideal of clinical medicine as a research-oriented university discipline devoid of financial incentives to see patients. In private practice, Barker maintained a high profile as a teacher, writer, supporter of the Johns Hopkins medical institutions, and public figure. The issues raised by his difficult decision remain relevant and have not been satisfactorily resolved.

  15. The VEGA Assembly Spectrum Code

    International Nuclear Information System (INIS)

    Milosevic, M.

    1997-01-01

    VEGA is an assembly spectrum code, developed as a design tool for producing few-group averaged cross section data for a wide range of reactor types including both thermal and fast reactors. It belongs to a class of codes which may be characterized by separate stages for micro-group, spectrum and macro-group assembly calculations. The theoretical foundation for the development of the VEGA code is integral transport theory in the first-flight collision probability formulation. Two versions of VEGA are now in use: VEGA-1, based on standard equivalence theory, and VEGA-2, based on a new subgroup method applicable to any geometry for which a flux solution is possible. This paper describes the features which are unique to the VEGA codes, concentrating on the basic principles and algorithms used in the proposed subgroup method. The presented validation of this method comprises results for a homogeneous uranium-plutonium mixture and a PWR cell containing recycled uranium-plutonium oxide. An example application to a realistic fuel dissolver benchmark problem, which was extensively analyzed in international calculations, is also included. (author)

  16. Interference management using direct sequence spread spectrum ...

    African Journals Online (AJOL)

    Interference management using direct sequence spread spectrum (DSSS) technique ... Journal of Fundamental and Applied Sciences ... Keywords: DSSS, LTE network; Wi-Fi network; SINR; interference management and interference power.

  17. Melt spreading code assessment, modifications, and application to the EPR core catcher design

    International Nuclear Information System (INIS)

    Farmer, M.T.

    2009-01-01

    The Evolutionary Power Reactor (EPR) is under consideration by various utilities in the United States to provide base load electrical production, and as a result the design is undergoing a certification review by the U.S. Nuclear Regulatory Commission (NRC). The severe accident design philosophy for this reactor is based upon the fact that the projected power rating results in a narrow margin for in-vessel melt retention by external cooling of the reactor vessel. As a result, the design addresses ex-vessel core melt stabilization using a mitigation strategy that includes: (1) an external core melt retention system to temporarily hold core melt released from the vessel; (2) a layer of 'sacrificial' material that is admixed with the melt while in the core melt retention system; (3) a melt plug in the lower part of the retention system that, when failed, provides a pathway for the mixture to spread to a large core spreading chamber; and finally, (4) cooling and stabilization of the spread melt by controlled top and bottom flooding. The overall concept is illustrated in Figure 1.1. The melt spreading process relies heavily on inertial flow of a low-viscosity admixed melt to a segmented spreading chamber, and assumes that the melt mass will be distributed to a uniform height in the chamber. The spreading phenomenon thus needs to be modeled properly in order to adequately assess the EPR design. The MELTSPREAD code, developed at Argonne National Laboratory, can model segmented, and both uniform and nonuniform spreading. The NRC is thus utilizing MELTSPREAD to evaluate melt spreading in the EPR design. MELTSPREAD was originally developed to support resolution of the Mark I containment shell vulnerability issue. Following closure of this issue, development of MELTSPREAD ceased in the early 1990's, at which time the melt spreading database upon which the code had been validated was rather limited. In particular, the database that was utilized for initial validation consisted

  18. Principles of spread-spectrum communication systems

    CERN Document Server

    Torrieri, Don

    2015-01-01

    This book provides a concise but lucid explanation of the fundamentals of spread-spectrum systems with an emphasis on theoretical principles. The choice of specific topics is tempered by the author’s judgment of their practical significance and interest to both researchers and system designers. The book contains many improved derivations of the classical theory and presents the latest research results that bring the reader to the frontier of the field. This third edition includes new coverage of topics such as CDMA networks, acquisition and synchronization in DS-CDMA cellular networks, hopsets for FH-CDMA ad hoc networks, implications of information theory, the central limit theorem, the power spectral density of FH/CPM complex envelopes, adaptive filters, and adaptive arrays.   ·         Focuses on the fundamentals of spread-spectrum communication systems and provides current examples of their applications ·         Includes problem sets at the end of each chapter to assist readers in co...

  19. Synchronization in spread spectrum laser radar systems based on PMD-DLL

    Science.gov (United States)

    Buxbaum, Bernd; Schwarte, Rudolf; Ringbeck, Thorsten; Luan, Xuming; Zhang, Zhigang; Xu, Zhanping; Hess, H.

    2000-09-01

    ...(e.g., heterodyne techniques); in this contribution only so-called quasi-heterodyne techniques -- also known as phase shifting methods -- are discussed and used for the implementation. The light modulation schemes described in this contribution are square-wave as well as pseudo-noise modulation. The latter approach, inspired by its widespread use in communication as well as in position detection (e.g., IS-95 and GPS), offers essential advantages and is the most promising modulation method for the ranging approach. So-called CDMA (code division multiple access) systems form a major task in communication technology investigations, since the third generation mobile phone standard is also partly based on this principle. Fast and reliable synchronization in direct sequence spread spectrum communication systems (DSSS) differs hardly from the already mentioned ranging approach and will also be discussed. The possibility to integrate all components in a monolithic PMD-based DLL design is also presented and discussed. This method might offer the feature to integrate complete lines or matrices of PMD-based DLLs for highly parallel, multidimensional ranging. Finally, an outlook is given with regard to further optimized PMD front ends. An estimation of the expected characteristics concerning accuracy and speed of the distance measurement is given in conclusion.

  20. Investigating Trauma in Narrating World War I: A Psychoanalytical Reading of Pat Barker's "Regeneration"

    Science.gov (United States)

    Sadjadi, Bakhtiar; Esmkhani, Farnaz

    2016-01-01

    The present paper seeks to critically read Pat Barker's "Regeneration" in terms of Cathy Caruth's psychoanalytic study of trauma. This analysis attempts to trace the concepts of latency, post-traumatic stress disorders, traumatic memory, and trauma in Barker's novel in order to explore how trauma and history are interrelated in the…

  1. A neutron spectrum unfolding code based on generalized regression artificial neural networks

    International Nuclear Information System (INIS)

    Rosario Martinez-Blanco, Ma. del

    2016-01-01

    The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. Novel methods based on Artificial Neural Networks have been widely investigated. In prior works, back propagation neural networks (BPNN) have been used to solve the neutron spectrometry problem; however, some drawbacks still exist with this kind of neural net, i.e. the optimum selection of the network topology and the long training time. Compared to a BPNN, it is usually much faster to train a generalized regression neural network (GRNN). That is mainly because the spread constant is the only parameter used in a GRNN. Another feature is that the network will converge to a global minimum, provided that the optimal value of the spread has been determined and that the dataset adequately represents the problem space. In addition, GRNNs are often more accurate than BPNNs in prediction. These characteristics make GRNNs of great interest in the neutron spectrometry domain. This work presents a computational tool based on a GRNN capable of solving the neutron spectrometry problem. This computational code automates the pre-processing, training and testing stages using a k-fold cross validation with 3 folds, the statistical analysis and the post-processing of the information, using 7 Bonner sphere count rates as the only input data. The code was designed for a Bonner Spheres System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. - Highlights: • The main drawback of neutron spectrometry with BPNN is network topology optimization. • Compared to a BPNN, it is usually much faster to train a GRNN. • GRNNs are often more accurate than BPNNs in prediction; these characteristics make GRNNs of great interest. • This computational code automates the pre-processing, training
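
    As a hedged sketch of the GRNN prediction rule the abstract refers to (not the unfolding code itself), the snippet below implements the standard Gaussian-kernel GRNN estimate, in which the spread constant is indeed the only free parameter.

      import numpy as np

      def grnn_predict(X_train, Y_train, x, spread=0.5):
          """Generalized regression neural network (GRNN) prediction: a
          Gaussian-kernel weighted average of the training targets."""
          d2 = np.sum((X_train - x)**2, axis=1)     # squared distances to training inputs
          w = np.exp(-d2 / (2.0 * spread**2))       # pattern-layer activations
          return (w @ Y_train) / np.sum(w)          # summation / output layer

      # Toy example: learn y = sin(x) from a few samples
      X = np.linspace(0, 3, 20).reshape(-1, 1)
      Y = np.sin(X).ravel()
      print(grnn_predict(X, Y, np.array([1.5]), spread=0.3), "vs", np.sin(1.5))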

  2. The APOLLO assembly spectrum code

    International Nuclear Information System (INIS)

    Kavenoky, A.; Sanchez, R.

    1987-04-01

    The APOLLO code was originally developed as a design tool for HTRs; later it was aimed at the calculation of PWR lattices. APOLLO is a general purpose assembly spectrum code based on the multigroup integral transport equation; refined collision probability modules allow the computation of 1D geometries with linearly anisotropic scattering and a two-term flux expansion. In 2D geometries, modules based on the substructure method provide fast and accurate design calculations, and a module based on a direct discretization is devoted to reference calculations. The SPH homogenization technique provides corrected cross sections by performing an equivalence between coarse and refined calculations. The post-processing module of APOLLO generates either an APOLLIB library to be used by APOLLO or a NEPLIB library for reactor diffusion calculations. The cross section library of APOLLO contains data and self-shielding data for more than 400 isotopes. APOLLO is able to compute the depletion of any medium, accounting for any heavy isotope or fission product chain. 21 refs

  3. Spread Spectrum Based Energy Efficient Collaborative Communication in Wireless Sensor Networks.

    Science.gov (United States)

    Ghani, Anwar; Naqvi, Husnain; Sher, Muhammad; Khan, Muazzam Ali; Khan, Imran; Irshad, Azeem

    2016-01-01

    Wireless sensor networks consist of resource-limited devices. The most crucial of these resources is battery life, since in most applications, such as battlefield or volcanic area monitoring, it is often impossible to replace or recharge the power source. This article presents an energy efficient collaborative communication system based on spread spectrum to achieve energy efficiency as well as immunity against jamming, natural interference, noise suppression and universal frequency reuse. Performance of the proposed system is evaluated using the received signal power, bit error rate (BER) and energy consumption. The results show a direct proportionality between the power gain and the number of collaborative nodes, as well as between BER and signal-to-noise ratio (Eb/N0). The analytical and simulation results of the proposed system are compared with a SISO system. The comparison reveals that SISO performs better than collaborative communication for small distances, whereas collaborative communication performs better than SISO for long distances. On the basis of these results it is safe to conclude that collaborative communication in wireless sensor networks using wideband systems improves the lifetime of nodes in the network, thereby prolonging the network's lifetime.

  4. THE BARKER HYPOTHESIS: IMPLICATIONS FOR FUTURE DIRECTIONS IN TOXICOLOGY RESEARCH

    Science.gov (United States)

    This review covers the past year’s papers germane to the Barker hypothesis. While much of the literature has centered on maternal and developmental nutrition, new findings have emerged on the ability of toxic exposures during development to impact fetal/developmental programming....

  5. Suppression of narrow-band interference in a PN spread-spectrum receiver using a CTD-based adaptive filter

    Science.gov (United States)

    Saulnier, G. J.; Das, P.; Milstein, L. B.

    1984-11-01

    Analytical results have shown that adaptive filtering can be a powerful tool for the rejection of narrow-band interference in a spread-spectrum receiver. However, the complexity of adaptive filtering hardware has hindered the experimental verification of these results. This paper describes a new adaptive filter architecture for implementing the Widrow-Hoff LMS algorithm while using only two multipliers regardless of filter order. This hardware simplification is achieved through the use of a burst processing technique. A 16-tap version of this adaptive filter constructed using charge-transfer devices (CTDs) is used to suppress a single tone jammer in a direct sequence spread-spectrum receiver. Probability of error measurements demonstrating the effectiveness of the adaptive filter for suppressing the single tone jammer, along with simulation results for the optimal Wiener-Hopf filter, are presented and discussed.
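
    As context for the adaptive suppression described above, the sketch below is a minimal software model of a Widrow-Hoff LMS transversal filter acting as a one-step predictor: the narrow-band jammer is predictable from past samples and is cancelled, while the wideband PN chips pass through in the prediction error. It is not the CTD/burst-processing architecture of the paper, and the step size and signal parameters are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 5000
        chips = 2 * rng.integers(0, 2, n) - 1                      # wideband PN (DSSS) signal
        jammer = 3.0 * np.cos(2 * np.pi * 0.12 * np.arange(n))     # narrow-band tone jammer
        received = chips + jammer

        taps = 16        # filter order (16 taps, as in the paper's experiment)
        mu = 1e-3        # LMS step size (illustrative)
        w = np.zeros(taps)
        out = np.zeros(n)

        for k in range(taps, n):
            x = received[k - taps:k][::-1]     # delayed reference samples
            pred = w @ x                       # prediction of the narrow-band part
            e = received[k] - pred             # error = estimate of the wideband signal
            w += 2 * mu * e * x                # Widrow-Hoff LMS update
            out[k] = e

        # The predictable jammer is removed; the PN chips remain in `out`.
        print(np.std(received - chips), np.std(out[taps:] - chips[taps:]))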

  6. Steganography on multiple MP3 files using spread spectrum and Shamir's secret sharing

    Science.gov (United States)

    Yoeseph, N. M.; Purnomo, F. A.; Riasti, B. K.; Safiie, M. A.; Hidayat, T. N.

    2016-11-01

    The purpose of steganography is to hide data inside another medium. In order to increase the security of the data, the steganography technique is often combined with cryptography. The weakness of this combined technique is that the data remain centralized. Therefore, a steganography technique was developed using a combination of spread spectrum and a secret sharing technique. In steganography with secret sharing, shares of the data are created and hidden in several media. The media used to conceal the shares were MP3 files. The hiding technique used was spread spectrum, and the secret sharing scheme used was Shamir's Secret Sharing. The results showed that steganography with spread spectrum combined with Shamir's Secret Sharing, using MP3 files as the medium, produces a technique that can hide data in several covers. To extract and reconstruct the data hidden in the stego objects, a number of stego objects greater than or equal to the threshold is needed. Furthermore, the stego objects were imperceptible and robust.
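
    For readers unfamiliar with the threshold property used above, here is a minimal sketch of Shamir's (k, n) secret sharing over a prime field; the prime modulus, threshold and share count are illustrative assumptions, and in the scheme described by the abstract each share would then be hidden in a separate MP3 cover by spread spectrum embedding.

        import random

        P = 2087             # illustrative prime modulus (must exceed the secret)

        def make_shares(secret, k, n):
            """Split `secret` into n shares; any k of them reconstruct it."""
            coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
            return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                    for x in range(1, n + 1)]

        def reconstruct(shares):
            """Lagrange interpolation at x = 0 over GF(P)."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num, den = 1, 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, P - 2, P)) % P
            return secret

        shares = make_shares(secret=1234, k=3, n=5)
        print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover 1234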

  7. Status of the assessment of the spreading code Thema against the Corine experiments

    International Nuclear Information System (INIS)

    Spindler, B.; Veteau, J.M.

    1999-01-01

    In the framework of severe accident research on PWRs, the Thema code aims at predicting the spreading extent of corium under given conditions of pouring rate, initial corium composition and temperature, and considers phenomena as complex as top and bottom freezing and melting of the substrate. This paper presents the current status of the assessment of the code against the Corine experimental program, which consists of separate-effect tests using non-freezing and low-melting-point simulant materials to validate some essential models present in spreading codes. Isothermal tests using water-glycerol mixtures are first considered to investigate the validity of the friction law and the extent of surface tension effects at the front. Non-isothermal spreading with bottom freezing is then considered. Comparison of the results of the code with known solutions of different problems related to the solidification of a moving warm liquid, thermal shock and conduction in the bottom plate appears to be a very useful tool to verify the relevance of the models and to adjust numerical parameters. Finally, first spreading calculations with bottom freezing are compared with Corine experiments using the eutectic bismuth-tin alloy as working material. (author)

  8. Defining a methodology for benchmarking spectrum unfolding codes

    International Nuclear Information System (INIS)

    Meyer, W.; Kirmser, P.G.; Miller, W.H.; Hu, K.K.

    1976-01-01

    It has long been recognized that different neutron spectrum unfolding codes will produce significantly different results when unfolding the same measured data. In reviewing the results of such analyses it has been difficult to determine which result if any is the best representation of what was measured by the spectrometer detector. A proposal to develop a benchmarking procedure for spectrum unfolding codes is presented. The objective of the procedure will be to begin to develop a methodology and a set of data with a well established and documented result that could be used to benchmark and standardize the various unfolding methods and codes. It is further recognized that development of such a benchmark must involve a consensus of the technical community interested in neutron spectrum unfolding

  9. A novel neutron energy spectrum unfolding code using particle swarm optimization

    International Nuclear Information System (INIS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-01-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse height distribution and a response matrix. Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with those of the standard spectra and the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code had previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate than those codes. The results of the SDPSO code have been demonstrated to match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO code has been shown to be nearly two times faster than the TGASU code. - Highlights: • Introducing a novel method for neutron spectrum unfolding. • Implementation of a particle swarm optimization code for neutron unfolding. • Comparing results of the PSO code with those of the recently published TGASU code. • Results of the PSO code match those of the TGASU code. • Greater convergence rate of the implemented PSO code than the TGASU code.
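
    For readers unfamiliar with PSO, the following bare-bones sketch minimizes a least-squares unfolding objective ||R s - m||^2 for a small made-up response matrix R and measurement vector m; the swarm size, inertia and acceleration constants are illustrative assumptions and do not reproduce the SDPSO code itself.

        import numpy as np

        rng = np.random.default_rng(2)

        # Made-up unfolding problem: measurements m = R @ s_true.
        R = rng.random((7, 20))                 # response matrix (7 detectors x 20 bins)
        s_true = rng.random(20)
        m = R @ s_true

        def cost(s):
            return np.sum((R @ s - m) ** 2)

        n_particles, dim = 40, 20
        w, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration constants (illustrative)
        pos = rng.random((n_particles, dim))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.array([cost(p) for p in pos])
        gbest = pbest[np.argmin(pbest_val)].copy()

        for _ in range(500):
            r1, r2 = rng.random((2, n_particles, dim))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0.0, None)  # keep candidate spectra non-negative
            vals = np.array([cost(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()

        print(cost(gbest))                       # prints a small residual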

  10. A Synchronisation Method For Informed Spread-Spectrum Audiowatermarking

    OpenAIRE

    Pierre-Yves Fulchiron; Barry O'Donovan; Guenole Silvestre; Neil Hurley

    2003-01-01

    Under perfect synchronisation conditions, watermarking schemes employing asymmetric spread-spectrum techniques are suitable for copy-protection of audio signals. This paper proposes to combine the use of a robust psychoacoustic projection for the extraction of a watermark feature vector along with non-linear detection functions optimised with side-information. The new proposed scheme benefits from an increased level of security through the use of asymmetric detectors. We apply this scheme to ...

  11. Wavelet versus DCT-based spread spectrum watermarking of image databases

    Science.gov (United States)

    Mitrea, Mihai P.; Zaharia, Titus B.; Preteux, Francoise J.; Vlad, Adriana

    2004-05-01

    This paper addresses the issue of oblivious robust watermarking within the framework of colour still image database protection. We present an original method which complies with all the requirements nowadays imposed on watermarking applications: robustness (e.g. low-pass filtering, print & scan, StirMark), transparency (both quality and fidelity), low probability of false alarm, obliviousness and multiple bit recovering. The mark is generated from a 64 bit message (be it a logo, a serial number, etc.) by means of a Spread Spectrum technique and is embedded into the DWT (Discrete Wavelet Transform) domain, into certain low frequency coefficients selected according to the hierarchy of their absolute values. The best results were provided by the (9,7) bi-orthogonal transform. The experiments were carried out on 1200 image sequences, each of them of 32 images. Note that these sequences represented several types of images: natural, synthetic, medical, etc., and each time we obtained the same good results. These results are compared with those we already obtained for the DCT domain, the differences being pointed out and discussed.
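
    The sketch below illustrates the general spread-spectrum embedding idea described above in a much simplified form: a binary message is spread with a pseudo-random sequence and added, with a small strength factor, to selected low-frequency DWT coefficients. It assumes the PyWavelets package and uses the Haar wavelet, a random placeholder image and an informed (non-oblivious) detector for brevity, rather than the (9,7) bi-orthogonal transform and the oblivious detection of the paper.

        import numpy as np
        import pywt

        rng = np.random.default_rng(3)
        image = rng.random((256, 256))                  # placeholder for a cover image

        # Spread a 64-bit message with a +/-1 pseudo-random sequence (8 chips per bit).
        bits = rng.integers(0, 2, 64) * 2 - 1
        pn = rng.integers(0, 2, 64 * 8) * 2 - 1
        mark = np.repeat(bits, 8) * pn                  # spread-spectrum watermark

        # Embed into the largest-magnitude low-frequency (approximation) coefficients.
        coeffs = pywt.wavedec2(image, 'haar', level=2)
        flat = coeffs[0].copy().ravel()
        idx = np.argsort(np.abs(flat))[::-1][:mark.size]   # strongest coefficients
        alpha = 0.05                                       # embedding strength (illustrative)
        marked = flat.copy()
        marked[idx] = flat[idx] + alpha * mark
        coeffs[0] = marked.reshape(coeffs[0].shape)
        watermarked = pywt.waverec2(coeffs, 'haar')

        # Recovery (informed, for illustration): despread the coefficient differences.
        cA_w = pywt.wavedec2(watermarked, 'haar', level=2)[0].ravel()
        diff = (cA_w[idx] - flat[idx]) * pn                 # remove the PN spreading
        recovered = np.sign(diff.reshape(64, 8).sum(axis=1))
        print(np.array_equal(recovered, bits))              # True if all bits recovered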

  12. Wavelet based mobile video watermarking: spread spectrum vs. informed embedding

    Science.gov (United States)

    Mitrea, M.; Prêteux, F.; Duţă, S.; Petrescu, M.

    2005-11-01

    The cell phone expansion provides an additional direction for digital video content distribution: music clips, news and sport events are more and more transmitted toward mobile users. Consequently, from the watermarking point of view, a new challenge should be taken up: very low bitrate contents (e.g. as low as 64 kbit/s) are now to be protected. Within this framework, the paper approaches for the first time the mathematical models for two random processes, namely the original video to be protected and a very harmful attack that any watermarking method should face, the StirMark attack. By applying an advanced statistical investigation (combining the Chi-square, Rho, Fisher and Student tests) in the discrete wavelet domain, it is established that the popular Gaussian assumption can be used only very restrictively when describing the former process and has nothing to do with the latter. As these results can a priori determine the performances of several watermarking methods, both of spread spectrum and informed embedding types, they should be considered in the design stage.

  13. Interference Excision in Spread Spectrum Communications Using Adaptive Positive Time-Frequency Analysis

    Directory of Open Access Journals (Sweden)

    Krishnan Sridhar

    2007-01-01

    This paper introduces a novel algorithm to excise single and multicomponent chirp-like interferences in direct sequence spread spectrum (DSSS) communications. The excision algorithm consists of two stages: an adaptive signal decomposition stage and a directional element detection stage based on the Hough-Radon transform (HRT). Initially, the received spread spectrum signal is decomposed into its time-frequency (TF) functions using an adaptive signal decomposition algorithm, and the resulting TF functions are mapped onto the TF plane. We then use a line detection algorithm based on the HRT that operates on the image of the TF plane and detects energy-varying directional elements that satisfy a parametric constraint. Interference is modeled by reconstructing the corresponding TF functions detected by the HRT, and subtracted from the received signal. The proposed technique has two main advantages: (i) it localizes the interferences on the TF plane with no cross-terms, thus facilitating simple filtering techniques based on thresholding of the TF functions, and is an efficient way to excise the interference; (ii) it can be used for the detection of any directional interferences that can be parameterized. Simulation results with synthetic models have shown successful performance with linear and quadratic chirp interferences for single and multicomponent interference cases. The proposed method excises the interference even under very low SNR conditions of −10 dB, and the technique could be easily extended to any interferences that could be represented by a parametric equation in the TF plane.

  14. A neutron spectrum unfolding code based on generalized regression artificial neural networks

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Martinez B, M. R.; Castaneda M, R.; Solis S, L. O.; Vega C, H. R.

    2015-10-01

    The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. Novel methods based on Artificial Neural Networks have been widely investigated. In prior works, back-propagation neural networks (BPNN) have been used to solve the neutron spectrometry problem; however, some drawbacks still exist with this kind of neural net, such as the optimum selection of the network topology and the long training time. Compared to a BPNN, a generalized regression neural network (GRNN) is usually much faster to train, mainly because the spread constant is the only parameter used in a GRNN. Another feature is that the network will converge to a global minimum. In addition, GRNNs are often more accurate than BPNNs in prediction. These characteristics make GRNNs of great interest in the neutron spectrometry domain. In this work, a computational tool based on a GRNN capable of solving the neutron spectrometry problem is presented. This computational code automates the pre-processing, training and testing stages, the statistical analysis and the post-processing of the information, using 7 Bonner sphere rate counts as the only input data. The code was designed for a Bonner Spheres System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. (Author)

  15. A neutron spectrum unfolding code based on generalized regression artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M.; Martinez B, M. R.; Castaneda M, R.; Solis S, L. O. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica, Av. Ramon Lopez Velarde 801, Col. Centro, 98000 Zacatecas, Zac. (Mexico); Vega C, H. R., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)

    2015-10-15

    The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. Novel methods based on Artificial Neural Networks have been widely investigated. In prior works, back-propagation neural networks (BPNN) have been used to solve the neutron spectrometry problem; however, some drawbacks still exist with this kind of neural net, such as the optimum selection of the network topology and the long training time. Compared to a BPNN, a generalized regression neural network (GRNN) is usually much faster to train, mainly because the spread constant is the only parameter used in a GRNN. Another feature is that the network will converge to a global minimum. In addition, GRNNs are often more accurate than BPNNs in prediction. These characteristics make GRNNs of great interest in the neutron spectrometry domain. In this work, a computational tool based on a GRNN capable of solving the neutron spectrometry problem is presented. This computational code automates the pre-processing, training and testing stages, the statistical analysis and the post-processing of the information, using 7 Bonner sphere rate counts as the only input data. The code was designed for a Bonner Spheres System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. (Author)

  16. The MELTSPREAD Code for Modeling of Ex-Vessel Core Debris Spreading Behavior, Code Manual – Version3-beta

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, M. T. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-09-01

    MELTSPREAD3 is a transient one-dimensional computer code that has been developed to predict the gravity-driven flow and freezing behavior of molten reactor core materials (corium) in containment geometries. Predictions can be made for corium flowing across surfaces under either dry or wet cavity conditions. The spreading surfaces that can be selected are steel, concrete, a user-specified material (e.g., a ceramic), or an arbitrary combination thereof. The corium can have a wide range of compositions of reactor core materials that includes distinct oxide phases (predominantly Zr, and steel oxides) plus metallic phases (predominantly Zr and steel). The code requires input that describes the containment geometry, melt “pour” conditions, and cavity atmospheric conditions (i.e., pressure, temperature, and cavity flooding information). For cases in which the cavity contains a preexisting water layer at the time of RPV failure, melt jet breakup and particle bed formation can be calculated mechanistically given the time-dependent melt pour conditions (input data) as well as the heatup and boiloff of water in the melt impingement zone (calculated). For core debris impacting either the containment floor or previously spread material, the code calculates the transient hydrodynamics and heat transfer which determine the spreading and freezing behavior of the melt. The code predicts conditions at the end of the spreading stage, including melt relocation distance, depth and material composition profiles, substrate ablation profile, and wall heatup. Code output can be used as input to other models such as CORQUENCH that evaluate long term core-concrete interaction behavior following the transient spreading stage. MELTSPREAD3 was originally developed to investigate BWR Mark I liner vulnerability, but has been substantially upgraded and applied to other reactor designs (e.g., the EPR), and more recently to the plant accidents at Fukushima Daiichi. The most recent round of

  17. HF band filter bank multi-carrier spread spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Laraway, Stephen Andrew; Moradi, Hussein; Farhang-Boroujeny, Behrouz

    2015-10-01

    This paper describes modifications to the filter bank multicarrier spread spectrum (FB-MC-SS) system, that was presented in [1] and [2], to enable transmission of this waveform in the HF skywave channel. FB-MC-SS is well suited for the HF channel because it performs well in channels with frequency selective fading and interference. This paper describes new algorithms for packet detection, timing recovery and equalization that are suitable for the HF channel. Also, an algorithm for optimizing the peak to average power ratio (PAPR) of the FB-MC-SS waveform is presented. Application of this algorithm results in a waveform with low PAPR. Simulation results using a wide band HF channel model demonstrate the robustness of this system over a wide range of delay and Doppler spreads.

  18. Phase-coded microwave signal generation based on a single electro-optical modulator and its application in accurate distance measurement.

    Science.gov (United States)

    Zhang, Fangzheng; Ge, Xiaozhong; Gao, Bindong; Pan, Shilong

    2015-08-24

    A novel scheme for photonic generation of a phase-coded microwave signal is proposed and its application in one-dimensional distance measurement is demonstrated. The proposed signal generator has a simple and compact structure based on a single dual-polarization modulator. Besides, the generated phase-coded signal is stable and free from DC and low-frequency backgrounds. An experiment is carried out. A 2 Gb/s phase-coded signal at 20 GHz is successfully generated, and the recovered phase information agrees well with the input 13-bit Barker code. To further investigate the performance of the proposed signal generator, its application in one-dimensional distance measurement is demonstrated. The measurement error is less than 1.7 centimeters within a measurement range of ~2 meters. The experimental results verify the feasibility of the proposed phase-coded microwave signal generator and also provide strong evidence to support its practical applications.
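
    As a simplified baseband illustration of how a phase code supports ranging, the sketch below correlates a 13-bit Barker sequence against a delayed, noisy echo and reads the distance off the correlation peak; the sample rate, delay and noise level are illustrative assumptions and none of the photonic hardware of the paper is modeled.

        import numpy as np

        barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

        fs = 1e9                      # sample rate: 1 GSa/s (illustrative)
        sps = 10                      # samples per code chip
        tx = np.repeat(barker13, sps).astype(float)

        # Simulated echo: the transmitted code delayed by the round-trip time, plus noise.
        true_distance = 1.83          # metres (one-way), illustrative
        c = 3e8
        delay_samples = int(round(2 * true_distance / c * fs))
        rng = np.random.default_rng(4)
        rx = np.concatenate([np.zeros(delay_samples), tx, np.zeros(200)])
        rx += 0.5 * rng.standard_normal(rx.size)

        # Matched filtering: cross-correlate the echo with the known code.
        corr = np.correlate(rx, tx, mode='full')
        lag = np.argmax(corr) - (tx.size - 1)          # lag of the correlation peak
        measured = lag / fs * c / 2
        print(f"estimated distance: {measured:.3f} m")  # close to 1.83 m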

  19. Melt spreading code assessment, modifications, and initial application to the EPR core catcher design

    International Nuclear Information System (INIS)

    Farmer, M.T.; Basu, S.

    2009-01-01

    The Evolutionary Power Reactor (EPR) is a 1,600-MWe Pressurized Water Reactor (PWR) that is undergoing a design certification review by the U.S. Nuclear Regulatory Commission (NRC). The EPR severe accident design philosophy is predicated upon the fact that the projected power rating results in a narrow margin for in-vessel melt retention by external flooding. As a result, the design addresses ex-vessel core melt stabilization using a mitigation strategy that includes: 1) an external core melt retention system to temporarily hold core melt released from the vessel; 2) a layer of 'sacrificial' material that is admixed with the melt while in the core melt retention system; 3) a melt plug that, when failed, provides a pathway for the mixture to spread to a large core spreading chamber; and finally, 4) cooling and stabilization of the spread melt by controlled top and bottom flooding. The melt spreading process relies heavily on inertial flow of a low-viscosity admixed melt to a segmented spreading chamber, and assumes that the melt mass will be distributed to a uniform height in the chamber. The spreading phenomenon thus needs to be modeled properly in order to adequately assess the EPR design. The MELTSPREAD code, developed at Argonne National Laboratory, can model segmented, and both uniform and non-uniform, spreading. The NRC is using MELTSPREAD to evaluate melt spreading in the EPR design. The development of MELTSPREAD ceased in the early 1990s, and so the code was first assessed against the more contemporary spreading database and code modifications, as warranted, were carried out before performing confirmatory plant calculations. This paper provides the principal findings from the MELTSPREAD assessment activities and resulting code modifications, and also summarizes the results of initial scoping calculations for the EPR plant design and preliminary plant analyses, along with the plan for performing the final set of plant calculations including sensitivity studies

  20. Differential signaling spread-spectrum modulation of the LED visible light wireless communications using a mobile-phone camera

    Science.gov (United States)

    Chen, Shih-Hao; Chow, Chi-Wai

    2015-02-01

    Visible light communication (VLC) using spread spectrum modulation (SSM) and differential signaling (DS), detected by a mobile-phone camera, is proposed and demonstrated for the first time to provide high immunity to background ambient light interference. The SSM signal provides the coding gain, while the DS scheme enhances the clock recovery, particularly under high background ambient light. Experimental results confirm the feasibility of the proposed scheme, showing that the proposed system has a 6-dB gain compared with traditional on-off keying (OOK) modulation under background ambient light of 3000 lux. The direct incident ambient light on the mobile-phone camera is 520 lux.

  1. New PN Even Balanced Sequences for Spread-Spectrum Systems

    Directory of Open Access Journals (Sweden)

    Inácio JAL

    2005-01-01

    A new class of pseudonoise even balanced (PN-EB) binary spreading sequences is derived from existing classical odd-length families of maximum-length sequences, such as those proposed by Gold, by appending or inserting one extra zero element (chip) into the original sequences. The incentive to generate large families of PN-EB spreading sequences is motivated by analyzing the spreading effect of these sequences from a natural sampling point of view. From this analysis a new definition for PG is established, from which it becomes clear that very high processing gains (PGs) can be achieved in band-limited direct-sequence spread-spectrum (DSSS) applications by using spreading sequences with zero mean, given that certain conditions regarding spectral aliasing are met. To obtain large families of even balanced (i.e., with an equal number of ones and zeros) sequences, two design criteria are proposed, namely the ranging criterion (RC) and the generating ranging criterion (GRC). PN-EB sequences in the polynomial range are derived using these criteria, and it is shown that they exhibit secondary autocorrelation and cross-correlation peaks comparable to the sequences they are derived from. The methods proposed not only facilitate the generation of large numbers of new PN-EB spreading sequences required for CDMA applications, but simultaneously offer high processing gains and good despreading characteristics in multiuser SS scenarios with band-limited noise and interference spectra. Simulation results are presented to confirm the respective claims made.
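
    As background for the construction above, the sketch below generates a length-31 maximum-length sequence with a simple Fibonacci LFSR and appends one extra zero chip so that the numbers of ones and zeros become equal; the feedback taps and register length are illustrative assumptions, not the particular families treated in the paper.

        import numpy as np

        def m_sequence(taps, nbits, length):
            """Generate a maximum-length (m-) sequence from a Fibonacci LFSR.

            `taps` are the feedback stage positions (1-indexed); [5, 3] gives a
            maximal-length register of degree 5, i.e. period 31.
            """
            state = [1] * nbits
            seq = []
            for _ in range(length):
                seq.append(state[-1])                 # output the last stage
                fb = 0
                for t in taps:
                    fb ^= state[t - 1]                # XOR of the tapped stages
                state = [fb] + state[:-1]             # shift and feed back
            return np.array(seq)

        m = m_sequence(taps=[5, 3], nbits=5, length=31)
        print(m.sum())                                 # 16 ones versus 15 zeros

        # Even balancing: append one extra zero chip, as in the PN-EB construction.
        pn_eb = np.concatenate([m, [0]])
        print(pn_eb.size, pn_eb.sum())                 # 32 chips, 16 ones and 16 zeros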

  2. Interference Excision in Spread Spectrum Communications Using Adaptive Positive Time-Frequency Analysis

    Directory of Open Access Journals (Sweden)

    Sridhar Krishnan

    2007-07-01

    This paper introduces a novel algorithm to excise single and multicomponent chirp-like interferences in direct sequence spread spectrum (DSSS) communications. The excision algorithm consists of two stages: an adaptive signal decomposition stage and a directional element detection stage based on the Hough-Radon transform (HRT). Initially, the received spread spectrum signal is decomposed into its time-frequency (TF) functions using an adaptive signal decomposition algorithm, and the resulting TF functions are mapped onto the TF plane. We then use a line detection algorithm based on the HRT that operates on the image of the TF plane and detects energy-varying directional elements that satisfy a parametric constraint. Interference is modeled by reconstructing the corresponding TF functions detected by the HRT, and subtracted from the received signal. The proposed technique has two main advantages: (i) it localizes the interferences on the TF plane with no cross-terms, thus facilitating simple filtering techniques based on thresholding of the TF functions, and is an efficient way to excise the interference; (ii) it can be used for the detection of any directional interferences that can be parameterized. Simulation results with synthetic models have shown successful performance with linear and quadratic chirp interferences for single and multicomponent interference cases. The proposed method excises the interference even under very low SNR conditions of −10 dB, and the technique could be easily extended to any interferences that could be represented by a parametric equation in the TF plane.

  3. Composite Binary Sequences with a Large Ensemble and Zero Correlation Zone

    Directory of Open Access Journals (Sweden)

    S. S. Yudachev

    2015-01-01

    The article considers a proposed class of derived signals, composite binary sequences, for application in advanced spread spectrum radio systems of various purposes that use signals based on spectrum spreading by the direct sequence method. The considered composite sequences, having a representative set of lengths and unique correlation properties, compare favorably with the large ensembles formed on a single algorithmic basis that are widely used at present. To evaluate the properties of the composite sequences generated on the basis of two components - the Barker code and Kerdock sequences - expressions for the periodic and aperiodic correlation functions are given. An algorithm for generating practical ensembles of composite sequences is presented. On the basis of the algorithm and its software implementation in C#, samples of sequence ensembles of various lengths were obtained and their periodic and aperiodic correlation functions and statistical characteristics were studied in detail. As an illustration, some of the most typical correlation functions are presented. The most remarkable characteristics, allowing an assessment of the feasibility of using this type of sequence in the design of specific types of radio systems, are considered. On the basis of the proposed program and the performed calculations, conclusions can be drawn about the possibility of using sequences of these classes with the aim of reducing intra-system interference in the projected spread spectrum CDMA.
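
    The exact composite construction is not given in the abstract, so the sketch below shows one common way of building a composite sequence - the Kronecker product of the 13-chip Barker code with a second ±1 component - and computes its periodic autocorrelation; both the construction and the chosen inner sequence are illustrative assumptions rather than the authors' method.

        import numpy as np

        barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
        inner = np.array([1, -1, 1, 1, -1, -1, -1, 1])   # arbitrary second component

        # One possible composite construction: Kronecker product (outer chips x inner chips).
        composite = np.kron(barker13, inner)              # length 13 * 8 = 104

        def periodic_autocorr(seq):
            """Periodic (cyclic) autocorrelation for all shifts."""
            return np.array([np.dot(seq, np.roll(seq, k)) for k in range(seq.size)])

        pac = periodic_autocorr(composite)
        print(pac[0], np.abs(pac[1:]).max())              # main peak vs. largest sidelobe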

  4. Evaluation of a new neutron energy spectrum unfolding code based on an Adaptive Neuro-Fuzzy Inference System (ANFIS).

    Science.gov (United States)

    Hosseini, Seyed Abolfazl; Esmaili Paeen Afrakoti, Iman

    2018-01-17

    The purpose of the present study was to reconstruct the energy spectrum of a poly-energetic neutron source using an algorithm developed based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS is a kind of artificial neural network based on the Takagi-Sugeno fuzzy inference system. The ANFIS algorithm uses the advantages of both fuzzy inference systems and artificial neural networks to improve the effectiveness of algorithms in various applications such as modeling, control and classification. The neutron pulse height distributions used as input data in the training procedure for the ANFIS algorithm were obtained from simulations performed with the MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Taking into account the normalization condition of each energy spectrum, 4300 neutron energy spectra were generated randomly. (The value in each bin was generated randomly, and finally a normalization of each generated energy spectrum was performed.) The randomly generated neutron energy spectra were considered as output data of the developed ANFIS computational code in the training step. To calculate the neutron energy spectrum using conventional methods, an inverse problem with an approximately singular response matrix (with the determinant of the matrix close to zero) should be solved. The solution of the inverse problem using conventional methods unfolds the neutron energy spectrum with low accuracy. Application of iterative algorithms in the solution of such a problem, or utilizing intelligent algorithms (in which there is no need to solve the problem), is usually preferred for unfolding of the energy spectrum. Therefore, the main reason for the development of intelligent algorithms like ANFIS for unfolding of neutron energy spectra is to avoid solving the inverse problem. In the present study, the unfolded neutron energy spectra of 252Cf and 241Am-9Be neutron sources using the developed computational code were

  5. Efficiently Synchronized Spread-Spectrum Audio Watermarking with Improved Psychoacoustic Model

    Directory of Open Access Journals (Sweden)

    Xing He

    2008-01-01

    This paper presents an audio watermarking scheme which is based on an efficiently synchronized spread-spectrum technique and a new psychoacoustic model computed using the discrete wavelet packet transform. The psychoacoustic model takes advantage of the multiresolution analysis of a wavelet transform, which closely approximates the standard critical band partition. The goal of this model is to include an accurate time-frequency analysis and to calculate both the frequency and temporal masking thresholds directly in the wavelet domain. Experimental results show that this watermarking scheme can successfully embed watermarks into digital audio without introducing audible distortion. Several common watermark attacks were applied and the results indicate that the method is very robust to those attacks.

  6. A Synchronisation Method For Informed Spread-Spectrum Audiowatermarking

    Directory of Open Access Journals (Sweden)

    Pierre-Yves Fulchiron

    2003-12-01

    Under perfect synchronisation conditions, watermarking schemes employing asymmetric spread-spectrum techniques are suitable for copy-protection of audio signals. This paper proposes to combine the use of a robust psychoacoustic projection for the extraction of a watermark feature vector along with non-linear detection functions optimised with side-information. The new proposed scheme benefits from an increased level of security through the use of asymmetric detectors. We apply this scheme to real audio signals and experimental results show an increased robustness to desynchronisation attacks such as random cropping.

  7. NSDann2BS, a neutron spectrum unfolding code based on neural networks technology and two bonner spheres

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solis Sanches, L. O.; Miranda, R. Castaneda; Cervantes Viramontes, J. M. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica. Av. Ramon Lopez Velarde 801. Col. Centro Zacatecas, Zac (Mexico); Vega-Carrillo, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica. Av. Ramon Lopez Velarde 801. Col. Centro Zacatecas, Zac., Mexico. and Unidad Academica de Estudios Nucleares. C. Cip (Mexico)

    2013-07-03

    In this work a neutron spectrum unfolding code based on artificial intelligence technology is presented. The code, called "Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres" (NSDann2BS), was designed in a graphical user interface under the LabVIEW programming environment. The main features of this code are the use of an embedded artificial neural network architecture optimized with the "Robust design of artificial neural networks" methodology and the use of two Bonner spheres as the only piece of information. In order to build the code presented here, once the net topology was optimized and properly trained, the knowledge stored in the synaptic weights was extracted and, using a graphical framework built on the LabVIEW programming environment, the NSDann2BS code was designed. This code is friendly, intuitive and easy to use for the end user. The code is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the rate counts of 252Cf, 241AmBe and 239PuBe neutron sources measured with a Bonner spheres system were used.

  8. NSDann2BS, a neutron spectrum unfolding code based on neural networks technology and two bonner spheres

    International Nuclear Information System (INIS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-01-01

    In this work a neutron spectrum unfolding code based on artificial intelligence technology is presented. The code, called "Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres" (NSDann2BS), was designed in a graphical user interface under the LabVIEW programming environment. The main features of this code are the use of an embedded artificial neural network architecture optimized with the "Robust design of artificial neural networks" methodology and the use of two Bonner spheres as the only piece of information. In order to build the code presented here, once the net topology was optimized and properly trained, the knowledge stored in the synaptic weights was extracted and, using a graphical framework built on the LabVIEW programming environment, the NSDann2BS code was designed. This code is friendly, intuitive and easy to use for the end user. The code is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the rate counts of 252Cf, 241AmBe and 239PuBe neutron sources measured with a Bonner spheres system were used.

  9. NSDann2BS, a neutron spectrum unfolding code based on neural networks technology and two bonner spheres

    Science.gov (United States)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-07-01

    In this work a neutron spectrum unfolding code based on artificial intelligence technology is presented. The code, called "Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres" (NSDann2BS), was designed in a graphical user interface under the LabVIEW programming environment. The main features of this code are the use of an embedded artificial neural network architecture optimized with the "Robust design of artificial neural networks" methodology and the use of two Bonner spheres as the only piece of information. In order to build the code presented here, once the net topology was optimized and properly trained, the knowledge stored in the synaptic weights was extracted and, using a graphical framework built on the LabVIEW programming environment, the NSDann2BS code was designed. This code is friendly, intuitive and easy to use for the end user. The code is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the rate counts of 252Cf, 241AmBe and 239PuBe neutron sources measured with a Bonner spheres system were used.

  10. Status of the MELTSPREAD-1 computer code for the analysis of transient spreading of core debris melts

    International Nuclear Information System (INIS)

    Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.; Chu, C.C.

    1992-01-01

    A transient, one dimensional, finite difference computer code (MELTSPREAD-1) has been developed to predict spreading behavior of high temperature melts flowing over concrete and/or steel surfaces submerged in water, or without the effects of water if the surface is initially dry. This paper provides a summary overview of models and correlations currently implemented in the code, code validation activities completed thus far, LWR spreading-related safety issues for which the code has been applied, and the status of documentation for the code

  11. Coded excitation for infrared non-destructive testing of carbon fiber reinforced plastics.

    Science.gov (United States)

    Mulaveesala, Ravibabu; Venkata Ghali, Subbarao

    2011-05-01

    This paper proposes a Barker coded excitation for defect detection using infrared non-destructive testing. The capability of the proposed excitation scheme is highlighted with a recently introduced correlation-based post-processing approach and compared with the existing phase-based analysis by taking the signal to noise ratio into consideration. The applicability of the proposed scheme has been experimentally validated on a carbon fiber reinforced plastic specimen containing flat bottom holes located at different depths.

  12. A comparative study of pseudorandom sequences used in a c-VEP based BCI for online wheelchair control

    DEFF Research Database (Denmark)

    Isaksen, Jonas L.; Mohebbi, Ali; Puthusserypady, Sadasivan

    2016-01-01

    In this study, a c-VEP based BCI system was developed to run on three distinctive pseudorandom sequences, namely the m-code, the Gold-code, and the Barker-code. The Visual Evoked Potentials (VEPs) were provoked using these codes. In the online session, subjects controlled a LEGO® Mindstorms® robot...

  13. Comparison of neutron spectrum unfolding codes

    International Nuclear Information System (INIS)

    Zijp, W.

    1979-02-01

    This final report contains a set of four ECN-reports. The first is dealing with the comparison of the neutron spectrum unfolding codes CRYSTAL BALL, RFSP-JUL, SAND II and STAY'SL. The other three present the results of calculations about the influence of statistical weights in CRYSTAL BALL, SAND II and RFSP-JUL

  14. Secure DS-CDMA spreading codes using fully digital multidimensional multiscroll chaos

    KAUST Repository

    Mansingka, Abhinav S.

    2014-06-18

    This paper introduces a generalized fully digital hardware implementation of 1-D, 2-D and 3-D multiscroll chaos through sawtooth nonlinearities in a 3rd order ODE with the Euler approximation, wherein the low-significance bits pass all NIST SP 800-22 tests. The low-significance bits show good performance as spreading codes for multiple-access DS-CDMA in AWGN and multipath environments, equivalent to Gold codes. This system capitalizes on the complex nonlinear dynamics afforded by multiscroll chaos to provide higher security than conventional codes with the same BER performance, demonstrated experimentally on a Xilinx Virtex 4 FPGA with logic utilization less than 1.25% and throughput up to 10.92 Gbits/s.
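
    The multiscroll hardware generator above is not reproduced here; instead, the sketch below conveys the underlying idea - deriving binary spreading chips from the low-significance bits of a chaotic iteration - using a fixed-point logistic map as a deliberately simple stand-in for the 3rd-order multiscroll ODE, and checks that the resulting chips are roughly balanced and weakly correlated.

        import numpy as np

        def chaotic_chips(n, x0=0.123456, frac_bits=32):
            """Spreading chips from the low-significance bits of a logistic-map orbit.

            A simple stand-in for the multiscroll ODE generator of the abstract:
            the state is quantized to `frac_bits` fractional bits and the
            least-significant bit is used as the chip.
            """
            x = x0
            chips = np.empty(n, dtype=int)
            for i in range(n):
                x = 3.99 * x * (1.0 - x)                 # chaotic logistic map
                q = int(x * (1 << frac_bits)) & 1        # least-significant bit
                chips[i] = 2 * q - 1                     # map {0, 1} -> {-1, +1}
            return chips

        chips = chaotic_chips(10000)
        print(chips.mean())                              # close to 0 (balanced)
        print(np.mean(chips[:-1] * chips[1:]))           # close to 0 (weak lag-1 correlation)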

  15. Method and device for fast code acquisition in spread spectrum receivers

    NARCIS (Netherlands)

    Coenen, A.J.R.M.

    1993-01-01

    Abstract of NL 9101155 (A) Method for code acquisition in a satellite receiver. The biphase-modulated high-frequency carrier transmitted by a satellite is converted via a fixed local oscillator frequency down to the baseband, whereafter the baseband signal is fed via a bandpass filter, which has an

  16. Fault Detection of Aircraft Cable via Spread Spectrum Time Domain Reflectometry

    Directory of Open Access Journals (Sweden)

    Xudong SHI

    2014-03-01

    Airplane cable fault detection based on TDR (time domain reflectometry) is easily affected by various noise signals, which attenuate and distort the reflected signal heavily and make it fail to locate the fault. In order to solve these problems, a method of spread spectrum time domain reflectometry (SSTDR) is introduced in this paper, taking advantage of the sharp peak of the correlation function. The test signal is generated from an ML sequence (MLS) modulated by a sine wave of the same frequency. Theoretically, the test signal has very high noise immunity, so it can be applied with excellent precision to fault location on aircraft cable. In this paper, the SSTDR method was first simulated in MATLAB. Then, an experimental setup based on LabVIEW was organized to detect and locate faults on aircraft cable. It has been demonstrated that SSTDR has high noise immunity, effectively reducing detection errors.
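
    A minimal baseband simulation of the SSTDR idea is sketched below: a PN sequence is modulated by a sine wave at the chip rate, a delayed and attenuated reflection is added to the line signal, and the fault distance is read from the peak of the cross-correlation. The propagation velocity, sample rate and reflection coefficient are illustrative assumptions, and a true MLS generator is replaced by random ±1 chips for brevity.

        import numpy as np

        rng = np.random.default_rng(5)

        chips = 2 * rng.integers(0, 2, 511) - 1           # +/-1 chips (stand-in for an MLS)
        sps = 8                                           # samples per chip
        fs = 200e6                                        # sample rate (illustrative)
        t = np.arange(chips.size * sps) / fs
        carrier = np.sin(2 * np.pi * (fs / sps) * t)      # sine at the chip rate
        tx = np.repeat(chips, sps) * carrier              # SSTDR test signal

        # Fault reflection: delayed, attenuated copy of the test signal plus noise.
        v = 0.7 * 3e8                                     # propagation velocity in the cable
        fault_distance = 12.0                             # metres (illustrative)
        delay = int(round(2 * fault_distance / v * fs))
        rx = np.zeros(tx.size + delay)
        rx[:tx.size] += tx                                # incident wave on the line
        rx[delay:delay + tx.size] += 0.4 * tx             # reflection from the fault
        rx += 0.2 * rng.standard_normal(rx.size)

        corr = np.correlate(rx, tx, mode='full')[tx.size - 1:]   # non-negative lags only
        peak = np.argmax(corr[sps:]) + sps     # skip the incident-wave mainlobe at lag 0
        print(f"estimated fault distance: {peak / fs * v / 2:.2f} m")   # close to 12 m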

  17. Regeneration by Pat Barker (1991): the psychiatric institution's handling of the ills and words (maux/mots) of the Great War

    Directory of Open Access Journals (Sweden)

    Isabelle Gérardin

    2011-04-01

    This article deals with a novel, entitled Regeneration, published in 1991 by the English female author Pat Barker. This novel, although set against the backdrop of World War One, depicts neither battles nor scenes of everyday life in the trenches. Instead, the agonistic dimension of this world conflict is tackled by Pat Barker through the prism of the way the psychiatric institution dealt with war traumas endured by soldiers, traumas of the body and of the mind. These traumas, causing a variety of psychosomatic symptoms, reveal a powerful tension between a deep individual aversion to the extreme violence of combat and an appeal to conform to some collective culture of war rooted in a strong ideal of soldierly and patriotic manliness. Wartime psychiatry, as Barker highlights, often turns out to

  18. Channel coding for underwater acoustic single-carrier CDMA communication system

    Science.gov (United States)

    Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong

    2017-01-01

    CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolutional, RA, Turbo and LDPC coding respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and minimum-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for Turbo coding, and the sum-product algorithm for LDPC coding are given. A UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that the UWA/SCCDMA systems based on RA, Turbo and LDPC coding all have good performance, with a communication BER of less than 10^-6 in an underwater acoustic channel with a low signal to noise ratio (SNR) from -12 dB to -10 dB, which is about 2 orders of magnitude lower than that of convolutional coding. The system based on Turbo coding with the Log-MAP algorithm has the best performance.

  19. Hybrid PAPR reduction scheme with Huffman coding and DFT-spread technique for direct-detection optical OFDM systems

    Science.gov (United States)

    Peng, Miao; Chen, Ming; Zhou, Hui; Wan, Qiuzhen; Jiang, LeYong; Yang, Lin; Zheng, Zhiwei; Chen, Lin

    2018-01-01

    High peak-to-average power ratio (PAPR) of the transmitted signal is a major drawback in optical orthogonal frequency division multiplexing (OOFDM) systems. In this paper, we propose and experimentally demonstrate a novel hybrid scheme, combining Huffman coding and Discrete Fourier Transform spreading (DFT-spread), in order to reduce the high PAPR in a 16-QAM short-reach intensity-modulated and direct-detection OOFDM (IMDD-OOFDM) system. The experimental results demonstrate that the hybrid scheme can reduce the PAPR by about 1.5, 2, 3 and 6 dB, and achieve 1.5, 1, 2.5 and 3 dB receiver sensitivity improvement compared to clipping, DFT-spread, Huffman coding and original OFDM signals, respectively, at an error vector magnitude (EVM) of -10 dB after transmission over 20 km of standard single-mode fiber (SSMF). Furthermore, the throughput gain can be of the order of 30% by using the hybrid scheme compared with the case without applying Huffman coding.
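
    To make the PAPR argument concrete, the sketch below compares the PAPR of a plain OFDM symbol with that of a DFT-spread symbol for random QPSK data; the subcarrier count, oversampling factor and modulation are illustrative assumptions and the Huffman-coding stage of the hybrid scheme is not modeled.

        import numpy as np

        rng = np.random.default_rng(6)

        def papr_db(x):
            p = np.abs(x) ** 2
            return 10 * np.log10(p.max() / p.mean())

        n_sc, n_sym, oversample = 256, 1000, 4
        papr_ofdm, papr_dfts = [], []

        for _ in range(n_sym):
            qpsk = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)

            # Plain OFDM: map QPSK symbols straight onto the subcarriers.
            spec = np.zeros(n_sc * oversample, dtype=complex)
            spec[:n_sc] = qpsk
            papr_ofdm.append(papr_db(np.fft.ifft(spec)))

            # DFT-spread OFDM: pre-spread the symbols with a DFT before mapping.
            spec[:n_sc] = np.fft.fft(qpsk) / np.sqrt(n_sc)
            papr_dfts.append(papr_db(np.fft.ifft(spec)))

        print(f"mean PAPR, plain OFDM:  {np.mean(papr_ofdm):.1f} dB")
        print(f"mean PAPR, DFT-spread:  {np.mean(papr_dfts):.1f} dB")   # noticeably lower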

  20. New Kalman Filtering Algorithm for Narrowband Interference Suppression in Spread Spectrum Systems

    Institute of Scientific and Technical Information of China (English)

    许光辉; 胡光锐

    2005-01-01

    A new Kalman filtering algorithm for spread spectrum systems is presented, based on estimating the spread spectrum signal before suppressing the narrowband interference (NBI) and exploiting the dependence of the autoregressive (AR) interference. Compared with the ACM nonlinear filtering algorithm, simulation results show that the proposed algorithm has preferable performance, with an SNR improvement of about 5 dB on average.

  1. Design of a TDOA location engine and development of a location system based on chirp spread spectrum.

    Science.gov (United States)

    Wang, Rui-Rong; Yu, Xiao-Qing; Zheng, Shu-Wang; Ye, Yang

    2016-01-01

    Location based services (LBS) provided by wireless sensor networks have garnered a great deal of attention from researchers and developers in recent years. Chirp spread spectrum (CSS) signal formatting with time difference of arrival (TDOA) ranging technology is an effective LBS technique with regard to positioning accuracy, cost, and power consumption. The design and implementation of the location engine and location management based on TDOA location algorithms were the focus of this study; as the core of the system, the location engine was designed as a series of location algorithms and smoothing algorithms. To enhance the location accuracy, a Kalman filter algorithm and a moving weighted average technique were respectively applied to smooth the TDOA range measurements and the location results, which are calculated by the cooperation of a Kalman TDOA algorithm and a Taylor TDOA algorithm. The location management server, the information center of the system, was designed with Data Server and Mclient. To evaluate the performance of the location algorithms and the stability of the system software, we used a Nanotron nanoLOC Development Kit 3.0 to conduct indoor and outdoor location experiments. The results indicated that the location system runs stably with high accuracy, with absolute error below 0.6 m.
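
    The abstract mentions smoothing the TDOA range measurements with a Kalman filter; the sketch below shows a minimal scalar Kalman filter of that kind applied to a noisy sequence of range estimates. The process and measurement noise variances are illustrative assumptions and the Taylor-series TDOA position solver is not reproduced.

        import numpy as np

        rng = np.random.default_rng(7)

        true_range = 8.0                                   # metres, illustrative
        z = true_range + 0.5 * rng.standard_normal(200)    # noisy TDOA range measurements

        q, r = 1e-4, 0.25        # process and measurement noise variances (illustrative)
        x, p = z[0], 1.0         # initial state estimate and covariance
        smoothed = []

        for zk in z:
            p += q                          # predict (nearly static range, identity model)
            k = p / (p + r)                 # Kalman gain
            x += k * (zk - x)               # update with the new measurement
            p *= (1 - k)
            smoothed.append(x)

        # Residual scatter before and after smoothing.
        print(np.std(z - true_range), np.std(np.array(smoothed[20:]) - true_range))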

  2. Fixed capacity and variable member grouping assignment of orthogonal variable spreading factor code tree for code division multiple access networks

    Directory of Open Access Journals (Sweden)

    Vipin Balyan

    2014-08-01

    Orthogonal variable spreading factor codes are used in the downlink to maintain the orthogonality between different channels and are used to handle new calls arriving in the system. A period of operation leads to fragmentation of vacant codes, which leads to the code blocking problem. The assignment scheme proposed in this paper is not affected by fragmentation, as the fragmentation is generated by the scheme itself. In this scheme, the code tree is divided into groups whose capacity is fixed and whose number of members (codes) is variable. A group with the maximum number of busy members is used for assignment; this leads to fragmentation of busy groups around the code tree and compactness within a group. The proposed scheme is thoroughly evaluated and compared with other schemes using parameters like code blocking probability and call establishment delay. Through simulations it has been demonstrated that the proposed scheme not only adequately reduces the code blocking probability, but also requires significantly less time to locate a vacant code for assignment, which makes it suitable for real-time calls.
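
    For background, the sketch below builds the OVSF code tree up to a given spreading factor by the standard recursive construction, in which each code c spawns the children (c, c) and (c, -c), and verifies the orthogonality of codes at the same level; this is generic OVSF machinery, not the grouping and assignment scheme proposed in the paper.

        import numpy as np

        def ovsf_tree(max_sf):
            """OVSF codes per spreading factor: each code c has children (c, c) and (c, -c)."""
            tree = {1: [np.array([1])]}
            sf = 1
            while sf < max_sf:
                tree[2 * sf] = []
                for c in tree[sf]:
                    tree[2 * sf].append(np.concatenate([c, c]))
                    tree[2 * sf].append(np.concatenate([c, -c]))
                sf *= 2
            return tree

        tree = ovsf_tree(8)
        codes_sf8 = np.array(tree[8])            # 8 codes of length 8
        print(codes_sf8 @ codes_sf8.T)           # 8*I: codes at the same level are orthogonal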

  3. BINGO: a code for the efficient computation of the scalar bi-spectrum

    Science.gov (United States)

    Hazra, Dhiraj Kumar; Sriramkumar, L.; Martin, Jérôme

    2013-05-01

    We present a new and accurate Fortran code, the BI-spectra and Non-Gaussianity Operator (BINGO), for the efficient numerical computation of the scalar bi-spectrum and the non-Gaussianity parameter fNL in single field inflationary models involving the canonical scalar field. The code can calculate all the different contributions to the bi-spectrum and the parameter fNL for an arbitrary triangular configuration of the wavevectors. Focusing firstly on the equilateral limit, we illustrate the accuracy of BINGO by comparing the results from the code with the spectral dependence of the bi-spectrum expected in power law inflation. Then, considering an arbitrary triangular configuration, we contrast the numerical results with the analytical expression available in the slow roll limit, for, say, the case of the conventional quadratic potential. Considering a non-trivial scenario involving deviations from slow roll, we compare the results from the code with the analytical results that have recently been obtained in the case of the Starobinsky model in the equilateral limit. As an immediate application, we utilize BINGO to examine the power of the non-Gaussianity parameter fNL to discriminate between various inflationary models that admit departures from slow roll and lead to similar features in the scalar power spectrum. We close with a summary and discussion on the implications of the results we obtain.

  4. An adaptive digital suppression filter for direct-sequence spread-spectrum communications

    Science.gov (United States)

    Saulnier, G. J.; Das, P. K.; Milstein, L. B.

    1985-09-01

    This paper describes the structure of a digital implementation of the Widrow-Hoff LMS algorithm which uses a burst processing technique to obtain some hardware simplification. This adaptive system is used to suppress narrow-band interference in a direct-sequence spread-spectrum communication system. Several different narrow-band interferers are considered, and probability of error results are presented for all cases. While, in general, the results show significant improvement in performance when the LMS algorithm is used, certain disadvantages are also present and are discussed in this paper.

  5. Neutron spectrum unfolding using computer code SAIPS

    International Nuclear Information System (INIS)

    Karim, S.

    1999-01-01

    The main objective of this project was to study the neutron energy spectrum at rabbit station-1 in the Pakistan Research Reactor (PARR-I). To do so, the multiple foil activation method was used to obtain the saturated activities. The computer code SAIPS was used to unfold the neutron spectra from the measured reaction rates. Of the three built-in codes in SAIPS, only SANDII and WINDOWS were used. The contribution of the thermal part of the spectra was observed to be higher than that of the fast part. It was found that WINDOWS gave smooth spectra, while the SANDII spectra showed violent oscillations in the resonance region. The uncertainties in the WINDOWS results are higher than those of SANDII. The results show reasonable agreement with the published results. (author)

  6. Unfolding code for neutron spectrometry based on neural nets technology

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Vega C, H. R.

    2012-10-01

    The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Neural Networks have been widely investigated. In this work, a neutron spectrum unfolding code based on neural nets technology is presented. This unfolding code, called Neutron Spectrometry and Dosimetry by means of Artificial Neural Networks, was designed in a graphical interface under the LabVIEW programming environment. The core of the code is an embedded neural network architecture, previously optimized by the "Robust Design of Artificial Neural Networks" methodology. The main features of the code are that it is easy to use, friendly and intuitive for the user. This code was designed for a Bonner Sphere System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. The main feature of the code is that, as input data, only seven rate count measurements with a Bonner spheres spectrometer are required to simultaneously unfold the 60 energy bins of the neutron spectrum and to calculate 15 dosimetric quantities for radiation protection purposes. This code generates a full report in html format with all relevant information. (Author)

  7. Evaluation of an Acoustic Charge Transport (ACT) device for adaptive interference suppression in spread spectrum communications systems

    Science.gov (United States)

    Mills, Michael S.

    1993-12-01

    Analytical results have shown that adaptive filtering can be a powerful tool for the rejection of narrowband interference in a direct sequence spread spectrum receiver. However, the complexity of adaptive filtering hardware has hindered the experimental validation of these results. This thesis describes a unique adaptive filter architecture for implementing the Widrow-Hoff least mean square (LMS) algorithm using two state of the art acoustic charge transport (ACT) programmable transversal filters (PTF's). Signal to noise ratio improvement measurements demonstrate the effectiveness of the adaptive filter for suppressing single- and dual-tone jammers at jammer to signal ratios (JSR's) of up to 30 dB. It is shown that the ACT adaptive interference rejection system can consistently produce 55 dB notch depths with 3-dB bandwidths as low as 300 kHz with minimal degradation to the spread spectrum signal. It is also shown that the adaptive system can eliminate single tone jammers at any frequency within the spread spectrum bandwidth at any of 10, 20, or 30 dB JSRs within 10 to 15 iterations of the adaptive algorithm. The only drawback with the adaptive system as tested is the amount of time taken to perform an iteration because of the requirement to update the PTF tap weights sequentially. Suggestions are given as to how this particular parameter of the adaptive interference system could be optimized.

  8. Program Aplikasi Steganografi Menggunakan Metode Spread Spectrum pada Perangkat Mobile Berbasis Android

    Directory of Open Access Journals (Sweden)

    Rojali Rojali

    2012-12-01

    Full Text Available The traffic of information exchanged in cyberspace is growing fast. People in all areas of life use technology to exchange information. One of the media owned by many people is the mobile device, such as the mobile phone and the tablet computer. Many people already use mobile devices for information exchange and expect information to be transmitted quickly, accurately, and safely. The security of the transmitted information becomes very important when the information is confidential. One way to secure transmitted information is to conceal it in a carrier medium so that the hidden information cannot be recognized by the human senses, a technique commonly referred to as steganography. This research studied and implemented steganography using the spread spectrum method on Android-based mobile devices. The results showed that the cover image before and after the message was inserted is visually indistinguishable, with a PSNR value of about 75.

  9. A Remote Direct Sequence Spread Spectrum Communications Lab Utilising the Emona DATEx

    Directory of Open Access Journals (Sweden)

    Cosmas Mwikirize

    2012-12-01

    Full Text Available Remote labs have become popular learning aids due to their versatility and considerable ease of utilisation as compared to their physical counterparts. At Makerere University, the remote labs are based on the standard Massachusetts Institute of Technology (MIT iLabs Shared Architecture (ISA - a scalable and generic platform. Presented in this paper is such a lab, addressing the key practical aspects of Direct Sequence Spread Spectrum (DSSS communication. The lab is built on the National Instruments Educational Laboratory Virtual Instrumentation Suite (NI ELVIS with the Emona Digital and Analog Telecommunications Experimenter (DATEx add-on board. It also incorporates switching hardware. The lab facilitates real-time control of the equipment, with users able to set, manipulate and observe signal parameters in both the frequency and the time domains. Simulation and data Acquisition modes of the experiment are supported to provide a richer learning experience.

  10. Novel power saving architecture for FBG based OCDMA code generation

    Science.gov (United States)

    Osadola, Tolulope B.; Idris, Siti K.; Glesk, Ivan

    2013-10-01

    A novel architecture for generating incoherent, 2-dimensional wavelength hopping-time spreading optical CDMA codes is presented. The architecture is designed to facilitate the reuse of optical source signal that is unused after an OCDMA code has been generated using fiber Bragg grating based encoders. Effective utilization of available optical power is therefore achieved by cascading several OCDMA encoders thereby enabling 3dB savings in optical power.

  11. Spread spectrum image data hiding in the encrypted discrete cosine transform coefficients

    Science.gov (United States)

    Zhang, Xiaoqiang; Wang, Z. Jane

    2013-10-01

    Digital watermarking and data hiding are important tools for digital rights protection of media data. Spread spectrum (SS)-based watermarking and data-hiding approaches are popular due to their outstanding robustness, but their security might not be sufficient. To improve the security of SS, an SS-based image data-hiding approach is proposed in which the discrete cosine transform coefficients of the host image are encrypted with the piecewise linear chaotic map before the watermark embedding operation. To evaluate the performance of the proposed approach, simulations and analyses of its robustness and security are carried out. The average bit-error-rate values on 100 real images from the Berkeley segmentation dataset under JPEG compression, additive Gaussian noise, salt and pepper noise, and cropping attacks are reported. Experimental results show that the proposed approach can maintain the high robustness of traditional SS schemes and, meanwhile, also improve the security. The proposed approach considerably extends the key space of traditional SS schemes and can thus resist brute-force and unauthorized-detection watermark attacks.
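    To make the two ingredients concrete, the sketch below generates a keyed sequence with the standard piecewise linear chaotic map (PWLCM) and uses it both to permute a set of stand-in DCT coefficients and to carry one additively embedded spread-spectrum bit. The map parameters, the embedding strength and the permutation step are illustrative assumptions; they are not the exact encryption construction of the paper.

```python
import numpy as np

def pwlcm(x, p):
    """One iteration of the piecewise linear chaotic map with control parameter 0 < p < 0.5."""
    if x < p:
        return x / p
    if x <= 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)           # the map is symmetric about x = 0.5

def chaotic_sequence(n, x0, p=0.29):
    seq, x = np.empty(n), x0
    for i in range(n):
        x = pwlcm(x, p)
        seq[i] = x
    return seq

rng = np.random.default_rng(2)
coeffs = rng.standard_normal(1024)     # stand-in mid-band DCT coefficients of a host image

# chaotic "encryption" of the coefficient order, keyed by the initial condition
perm = np.argsort(chaotic_sequence(coeffs.size, x0=0.37))
encrypted = coeffs[perm]

# classic additive spread-spectrum embedding of a single watermark bit
pn = np.where(chaotic_sequence(coeffs.size, x0=0.11) > 0.5, 1.0, -1.0)
bit, alpha = 1, 0.1                    # watermark bit and embedding strength (assumed)
marked = encrypted + alpha * bit * pn

# blind correlation detector
print("detected bit:", 1 if np.dot(marked, pn) > 0 else -1)
```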

  12. TPASS: a gamma-ray spectrum analysis and isotope identification computer code

    International Nuclear Information System (INIS)

    Dickens, J.K.

    1981-03-01

    The gamma-ray spectral data-reduction and analysis computer code TPASS is described. This computer code is used to analyze complex Ge(Li) gamma-ray spectra to obtain peak areas corrected for detector efficiencies, from which are determined gamma-ray yields. These yields are compared with an isotope gamma-ray data file to determine the contributions to the observed spectrum from decay of specific radionuclides. A complete FORTRAN listing of the code and a complex test case are given

  13. Interference Cancellation Technique Based on Discovery of Spreading Codes of Interference Signals and Maximum Correlation Detection for DS-CDMA System

    Science.gov (United States)

    Hettiarachchi, Ranga; Yokoyama, Mitsuo; Uehara, Hideyuki

    This paper presents a novel interference cancellation (IC) scheme for both synchronous and asynchronous direct-sequence code-division multiple-access (DS-CDMA) wireless channels. In the DS-CDMA system, multiple access interference (MAI) and the near-far problem (NFP) are the two factors that reduce the capacity of the system. In this paper, we propose a new algorithm that is able to detect all interference signals individually by maximum correlation detection. It is based on the discovery of the unknown spreading codes of the interference signals. All possible MAI patterns, so-called replicas, are then generated as summations of interference signals, and the true MAI pattern is found by taking the correlation between the received signal and the replicas. Moreover, the receiver executes MAI cancellation in a successive manner, removing all interference signals in a single stage. Numerical results show that the proposed IC strategy, which alleviates the detrimental effect of the MAI and the near-far problem, can significantly improve the system performance. In particular, for the asynchronous system we obtain almost the same receiving characteristics as in the absence of interference when the received powers are equal, and the same performance is seen for the synchronous system under any received-power condition.
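    The core replica idea can be caricatured in a few lines: given a set of candidate spreading codes, build candidate interference replicas, keep the one that correlates best with the received chip vector, and subtract it before despreading the desired user. The codes, amplitudes and the single-interferer replica search below are simplifying assumptions, not the paper's full enumeration of summed MAI patterns.

```python
import numpy as np

rng = np.random.default_rng(3)
G = 31                                                  # spreading gain (illustrative)
codes = np.sign(rng.standard_normal((8, G)))            # candidate spreading codes (random stand-ins)

desired_code, data_bit = codes[0], 1.0
interferers = [(codes[3], -1.0, 1.8), (codes[5], 1.0, 2.5)]   # (code, bit, amplitude)

received = data_bit * desired_code + 0.2 * rng.standard_normal(G)
for c, b, a in interferers:
    received = received + a * b * c

# maximum correlation detection over single-interferer replicas
best_replica, best_corr = None, -np.inf
for c in codes[1:]:
    for b in (-1.0, 1.0):
        corr = np.dot(received, b * c)
        if corr > best_corr:
            best_replica, best_corr = b * c, corr

# successive cancellation of the strongest estimated interferer, then despread the desired user
cleaned = received - (best_corr / G) * best_replica
print("decision statistic for the desired user:", round(np.dot(cleaned, desired_code) / G, 2))
```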

  14. Study of multiplication factor sensitivity to the spread of WWER spent fuel isotopics calculated by different codes

    International Nuclear Information System (INIS)

    Markova, L.

    2001-01-01

    As a sensitivity study, the impact on system reactivity of using different calculational methodologies for WWER spent fuel isotopic inventory computations was examined. The sets of isotopic concentrations obtained by calculations with different codes and libraries in the CB2 international benchmark, focused on WWER-440 burnup credit, were used to show the spread of the calculated spent fuel system reactivity. Using the MCNP 4B code and changing the isotopic input data, the multiplication factor of an infinite array of WWER-440 fuel pin cells was calculated. The evaluation of the results shows the sensitivity of the calculated reactivity to the different calculational methodologies used for the spent fuel inventory computation. In the studied cases of the CB2 benchmark, the spread of the reference k-results relative to the mean was found to be less than or about ±1%, in spite of the fact that the isotopic concentration data were spread much more. (author)

  15. Unfolding code for neutron spectrometry based on neural nets technology

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M.; Vega C, H. R., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica, Apdo. Postal 336, 98000 Zacatecas (Mexico)

    2012-10-15

    The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Neural Networks have been widely investigated. In this work, a neutron spectrum unfolding code based on neural nets technology is presented. This unfolding code, called Neutron Spectrometry and Dosimetry by means of Artificial Neural Networks, was designed in a graphical interface under the LabVIEW programming environment. The core of the code is an embedded neural network architecture, previously optimized by the Robust Design of Artificial Neural Networks Methodology. The code is easy to use, friendly and intuitive to the user. It was designed for a Bonner Sphere System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. The main feature of the code is that, as input data, only seven count rates measured with a Bonner sphere spectrometer are required to simultaneously unfold the 60 energy bins of the neutron spectrum and to calculate 15 dosimetric quantities for radiation protection purposes. The code generates a full report in html format with all relevant information. (Author)

  16. NULIF: neutron spectrum generator, few-group constant calculator, and fuel depletion code

    International Nuclear Information System (INIS)

    Wittkopf, W.A.; Tilford, J.M.; Andrews, J.B. II; Kirschner, G.; Hassan, N.M.; Colpo, P.N.

    1977-02-01

    The NULIF code generates a microgroup neutron spectrum and calculates spectrum-weighted few-group parameters for use in a spatial diffusion code. A wide variety of fuel cells, non-fuel cells, and fuel lattices, typical of PWR (or BWR) lattices, are treated. A fuel depletion routine and change card capability allow a broad range of problems to be studied. Coefficient variation with fuel burnup, fuel temperature change, moderator temperature change, soluble boron concentration change, burnable poison variation, and control rod insertion are readily obtained. Heterogeneous effects, including resonance shielding and thermal flux depressions, are treated. Coefficients are obtained for one thermal group and up to three epithermal groups. A special output routine writes the few-group coefficient data in specified format on an output tape for automated fitting in the PDQ07-HARMONY system of spatial diffusion-depletion codes

  17. Facial expression coding in children and adolescents with autism: Reduced adaptability but intact norm-based coding.

    Science.gov (United States)

    Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise

    2018-05-01

    Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.

  18. High-SNR spectrum measurement based on Hadamard encoding and sparse reconstruction

    Science.gov (United States)

    Wang, Zhaoxin; Yue, Jiang; Han, Jing; Li, Long; Jin, Yong; Gao, Yuan; Li, Baoming

    2017-12-01

    The denoising capabilities of the H-matrix and the cyclic S-matrix based on sparse reconstruction, employed in the Pixel of Focal Plane Coded Visible Spectrometer for spectrum measurement, are investigated for the case where the spectrum is sparse in a known basis. In the measurement process, the digital micromirror device plays an important role, implementing the Hadamard coding. In contrast with Hadamard transform spectrometry, and thanks to shift invariance, this spectrometer may have the advantage of high efficiency. Simulations and experiments show that the nonlinear solution with sparse reconstruction has a better signal-to-noise ratio than the linear solution, and that the H-matrix outperforms the cyclic S-matrix whether the reconstruction method is nonlinear or linear.
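    A toy version of the comparison reads as follows: encode a sparse spectrum with Hadamard masks, then reconstruct it either linearly (matrix inverse) or with a simple iterative soft-thresholding loop standing in for the paper's nonlinear sparse reconstruction. The spectrum, noise level and regularization weight are invented for the example.

```python
import numpy as np
from scipy.linalg import hadamard

n = 64
H = hadamard(n).astype(float)              # Hadamard (H-) matrix used as the set of encoding masks

x_true = np.zeros(n)                       # spectrum that is sparse in the measurement basis
x_true[[5, 17, 40]] = [1.0, 0.6, 0.3]

rng = np.random.default_rng(4)
y = H @ x_true + 0.05 * rng.standard_normal(n)     # encoded, noisy measurements

# linear reconstruction: for a Hadamard matrix, H^-1 = H^T / n
x_lin = H.T @ y / n

# iterative soft-thresholding (ISTA) as a stand-in for nonlinear sparse reconstruction
lam, lipschitz = 1.0, float(n)             # l1 weight and Lipschitz constant of H^T H (assumed values)
x = np.zeros(n)
for _ in range(200):
    z = x - H.T @ (H @ x - y) / lipschitz
    x = np.sign(z) * np.maximum(np.abs(z) - lam / lipschitz, 0.0)

print("linear reconstruction error:", round(np.linalg.norm(x_lin - x_true), 3))
print("sparse reconstruction error:", round(np.linalg.norm(x - x_true), 3))
```

    With enough regularization, the thresholded solution typically suppresses the noise spread across the empty bins while the linear inverse keeps it, which is the qualitative behaviour the abstract reports.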

  19. Coded Ultrasound for Blood Flow Estimation Using Subband Processing

    DEFF Research Database (Denmark)

    Gran, Fredrik; Udesen, Jesper; Nielsen, Michael Bachmann

    2008-01-01

    This paper investigates the use of coded excitation for blood flow estimation in medical ultrasound. Traditional autocorrelation estimators use narrow-band excitation signals to provide sufficient signal-to-noise ratio (SNR) and velocity estimation performance. In this paper, broadband coded signals are used to increase SNR, followed by subband processing. The received broadband signal is filtered using a set of narrow-band filters; estimating the velocity in each of the bands and averaging the results yields better performance compared with what would be possible when transmitting a narrow-band pulse. Furthermore, the excitation signal is broadband and has good spatial resolution after pulse compression, which means that time can be saved by using the same data for B-mode imaging and blood flow estimation. Two different coding schemes are used in this paper, Barker codes and Golay codes, and the performance of the codes is compared.

  20. The Stories We Hear, the Stories We Tell What Can the Life of Jane Barker (1652-1732) Tell Us about Women's Leadership in Higher Education in the Twenty-First Century?

    Science.gov (United States)

    Wilson, Carol Shiner

    2009-01-01

    Jane Barker--poet, novelist, farm manager, student and practitioner of medical arts--was not allowed to attend university because she was a woman. Yet she was Oxford-educated in the most modern of medical theories of her time. By the end of her life, unmarried by choice, Barker was writing for pay under her own name in an emerging genre--the…

  1. Simulation of the spectrum (Co-60), Theratron Equinox, using the code Penelope

    International Nuclear Information System (INIS)

    Quispe V, N. Y.; Ballon P, C. I.; Vega R, J. L. J.; Santos F, C.

    2017-10-01

    Using the code Penelope (Penetration and Energy Loss of Positrons and Electrons) V. 2008, the spectrum of the Theratron Equinox cobalt unit currently used in the radiotherapy service of the Goyeneche Hospital in Arequipa (Peru) was obtained. The Penmain program was used to obtain the spectrum; together with the PENGEOM package included in the Penelope code, it allowed complex structures to be built, in this case the cobalt unit head, essentially comprising the cobalt source and its collimators. Percentage depth dose curves were also obtained for irradiated field sizes of 5 x 5, 10 x 10 and 15 x 15 cm2 with the cobalt spectrum obtained; greater dispersion was observed for larger fields, which required more simulation time, and the simulation results agreed with the dose data measured experimentally with an ionization chamber in a water tank. The spectrum obtained was thus validated against the ionization chamber data through the percentage depth dose curves, and it can be used as a reference to optimize the radiotherapy planning system in simulations with body-equivalent materials. (Author)

  2. Wavelength-encoding/temporal-spreading optical code division multiple-access system with in-fiber chirped moiré gratings.

    Science.gov (United States)

    Chen, L R; Smith, P W; de Sterke, C M

    1999-07-20

    We propose an optical code division multiple-access (OCDMA) system that uses in-fiber chirped moiré gratings (CMG's) for encoding and decoding of broadband pulses. In reflection the wavelength-selective and dispersive nature of CMG's can be used to implement wavelength-encoding/temporal-spreading OCDMA. We give examples of codes designed around the constraints imposed by the encoding devices and present numerical simulations that demonstrate the proposed concept.

  3. Crystal mosaic spread determination by slow neutron scattering

    International Nuclear Information System (INIS)

    Adib, M.; Naguib, K.; Abdel Kawy, A.; Ashry, A.; Abbas, Y.; Wahba, M.; Maayouf, M.A.

    1988-01-01

    A method has been established for determination of the crystal mosaic spread. The method is based on recording all neutrons reflected, under the Bragg condition, from a certain crystal plane. A computer code was developed specifically in order to fit the measured wavelength distribution of the reflected neutrons with the calculated one, assuming that the crystal mosaic spread has a Gaussian shape. The code accounts for the parameters of the time-of-flight spectrometer used during the present measurements, as well as the divergence of the incident neutron beam. The developed method has been applied to the determination of the mosaic spread of both zinc and pyrolytic graphite (P.G.) crystals. The mosaic spread values deduced from the present measurements are 10'±6' and 3.60°±0.16°, respectively, for the Zn and P.G. crystals.

  4. Telescoping the origins of obesity to women's bodies: how gender inequalities are being squeezed out of Barker's hypothesis.

    Science.gov (United States)

    Warin, Megan; Moore, Vivienne; Zivkovic, Tanya; Davies, Michael

    2011-07-01

    This paper traces the genealogy of the Barker hypothesis and its intersections with popular representations of scientific discourses about pregnancy and maternal obesity. Drawing on Foucault's genealogical method, this study examines the historical 'descent' of the developmental origins of adult disease and its initial grounding in structural factors of gender inequality and low socioeconomic status. In the more recent reproductive medicine literature, Barker's hypothesis has been used to understand the causes and consequences of foetal over-nutrition and has shifted its focus from social determinants to individual, gendered bodies. The print media has gainfully employed this conceptualization of obesity and, in doing so, placed women, and mothers in particular, as causal agents in the reproduction of obesity across generations. Such a 'common sense' understanding of obesity production and reproduction means that both the scientific literature and the public understanding of science has inadvertently assisted in putting women forward as the transmitters of obesity across generations. This powerful telescoping of the origins of obesity to women's bodies and their appetites is in stark contrast to earlier foci on gender inequalities and changing women's circumstances.

  5. LPI Radar Waveform Recognition Based on Time-Frequency Distribution

    Directory of Open Access Journals (Sweden)

    Ming Zhang

    2016-10-01

    Full Text Available In this paper, an automatic radar waveform recognition system for high-noise environments is proposed. Signal waveform recognition techniques are widely applied in the fields of cognitive radio, spectrum management and radar applications, etc. We devise a system to classify the modulating signals widely used in low probability of intercept (LPI) radar detection systems. The radar signals are divided into eight classes, including linear frequency modulation (LFM), BPSK (Barker code modulation), Costas codes and polyphase codes (comprising Frank, P1, P2, P3 and P4). The classifier is an Elman neural network (ENN), performing supervised classification based on features extracted by the system. Through the techniques of image filtering, image opening operation, skeleton extraction, principal component analysis (PCA), image binarization and Pseudo-Zernike moments, etc., the features are extracted from the Choi-Williams time-frequency distribution (CWD) image of the received data. In order to reduce redundant features and simplify calculation, a feature selection algorithm based on the mutual information between classes and feature vectors is applied. The superiority of the proposed classification system is demonstrated by simulations and analysis. Simulation results show that the overall ratio of successful recognition (RSR) is 94.7% at a signal-to-noise ratio (SNR) of -2 dB.

  6. Obtaining the primary X-ray spectrum with the Penelope and MCNP5 codes; Obtencion del espectro primario de Rayos X con los codigos Penelope y MCNP5

    Energy Technology Data Exchange (ETDEWEB)

    Pozuelo, F.; Querol, A.; Gallardo, S.; Rodenas, J.; Verdu, G.

    2012-07-01

    In this work, the PENELOPE and MCNP5 codes, both based on the Monte Carlo method, were used to obtain the X-ray spectrum taking into account the characteristics of the X-ray tube. In order to achieve a better fit of the simulated spectrum to the theoretical one, a sensitivity analysis of the parameters available in both codes was carried out. Obtaining the simulated spectrum could lead to an improvement in the quality control of the X-ray tube, by incorporating it as a method complementary to existing techniques.

  7. A design of a wavelength-hopping time-spreading incoherent optical code division multiple access system

    International Nuclear Information System (INIS)

    Glesk, I.; Baby, V.

    2005-01-01

    We present the architecture and code design for a highly scalable, 2.5 Gb/s per user optical code division multiple access (OCDMA) system. The system is scalable to 100 potential and more than 10 simultaneous users, each with a bit error rate (BER) of less than 10^-9. The system architecture uses fast wavelength-hopping, time-spreading codes. Unlike frequency- and phase-sensitive coherent OCDMA systems, this architecture utilizes standard on-off keyed optical pulses allocated in the time and wavelength dimensions. This incoherent OCDMA approach is compatible with existing WDM optical networks and utilizes off-the-shelf components. We discuss the novel optical subsystem design for encoders and decoders that enables the realization of a highly scalable incoherent OCDMA system with rapid reconfigurability. A detailed analysis of the scalability of the two-dimensional code is presented and selected network deployment architectures for OCDMA are discussed (Authors)

  8. Assessing the Watson-Barker Listening Test (WBLT)-Form C in Measuring Listening Comprehension of Post-Secondary Hispanic-American Students

    Science.gov (United States)

    Worthington, Debra L.; Keaton, Shaughan; Cook, John; Fitch-Hauser, Margaret; Powers, William G.

    2014-01-01

    The Watson-Barker Listening Test (WBLT) is one of the most popular measures of listening comprehension. However, participants in studies utilizing this scale have been almost exclusively Anglo-American. At the same time, previous research questions the psychometric properties of the test. This study addressed both of these issues by testing the…

  9. Iterative code for the reconstruction of the neutrons spectrum using the Bonner spheres

    International Nuclear Information System (INIS)

    Reyes H, A.; Ortiz R, J. M.; Vega C, H. R.

    2012-10-01

    Neutrons are among the most difficult particles to detect because of their intrinsic nature: the absence of electric charge makes them interact with matter in a different way. The term radiation spectrometry can be used to describe the measurement of the intensity of a radiation field with respect to energy; the intensity distribution as a function of energy is commonly known as the spectrum. One method to determine the neutron spectrum of the radiation fields to which people are exposed is the Bonner sphere spectrometry system, the most widely used system for radiological protection purposes. The current interest in neutron spectrometry has stimulated the development of several procedures to carry out the reconstruction of the spectra. During the last decades new codes have been developed, such as BUNKIUT, Bums, Fruit, UMG, etc.; however, these methods still present several inconveniences, such as complexity of use, the need for an expert user, and the need for an initial spectrum very close to the spectrum to be obtained. To solve these problems, the program NSDUAZ (Neutron Spectrometry and Dosimetry from Autonomous University of Zacatecas) was developed. The objective of the present work is to test and validate the aforementioned code by analyzing its similarities and differences, as well as its advantages and disadvantages, with respect to the codes currently in use. (Author)

  10. Simulation of spreading with solidification: assessment synthesis of Thema code

    Energy Technology Data Exchange (ETDEWEB)

    Spindler, B.; Veteau, J.M. [CEA Grenoble, Direction de l' Energie Nucleaire, Dept. de Technologie Nucleaire, Service d' Etudes Thermohydrauliques et Technologiques, 38 (France)

    2004-07-01

    After a presentation of the models included in THEMA code, which simulates the spreading of a fluid with solidification, the whole assessment calculations are presented. The first series concerns the comparison with analytical or numerical solutions: dam break, conduction for the heat transfer in the substrate, crust growth. The second series concerns the comparison with the CORINE isothermal tests (simulating fluid at low temperature). The third series concerns the CORINE tests with heat transfer. The fourth series concerns the tests with simulating materials at medium or high temperature (RIT, KATS). The fifth series concerns the tests with prototypical materials (COMAS, FARO, VULCANO). Finally the blind simulations of the ECOKATS tests are presented. All the calculations are performed with the same physical models (THEMA version 2.5), without any variable tuning parameter according to the test under consideration. Sensitivity studies concern the influence of the viscosity model in the solidification interval, and for the tests with prototypical materials the inlet temperature and the solid fraction. The relative difference between the calculated and measured spreading areas is generally less than 20 % except for the test with prototypical materials, for which the assessment is not easy due to the large experimental uncertainties. The level of validation of THEMA is considered as satisfactory, taking into account the required accuracy. (authors)

  11. Simulation of spreading with solidification: assessment synthesis of Thema code

    International Nuclear Information System (INIS)

    Spindler, B.; Veteau, J.M.

    2004-01-01

    After a presentation of the models included in THEMA code, which simulates the spreading of a fluid with solidification, the whole assessment calculations are presented. The first series concerns the comparison with analytical or numerical solutions: dam break, conduction for the heat transfer in the substrate, crust growth. The second series concerns the comparison with the CORINE isothermal tests (simulating fluid at low temperature). The third series concerns the CORINE tests with heat transfer. The fourth series concerns the tests with simulating materials at medium or high temperature (RIT, KATS). The fifth series concerns the tests with prototypical materials (COMAS, FARO, VULCANO). Finally the blind simulations of the ECOKATS tests are presented. All the calculations are performed with the same physical models (THEMA version 2.5), without any variable tuning parameter according to the test under consideration. Sensitivity studies concern the influence of the viscosity model in the solidification interval, and for the tests with prototypical materials the inlet temperature and the solid fraction. The relative difference between the calculated and measured spreading areas is generally less than 20 % except for the test with prototypical materials, for which the assessment is not easy due to the large experimental uncertainties. The level of validation of THEMA is considered as satisfactory, taking into account the required accuracy. (authors)

  12. Frequency spectrum analysis of 252Cf neutron source based on LabVIEW

    International Nuclear Information System (INIS)

    Mi Deling; Li Pengcheng

    2011-01-01

    The frequency spectrum analysis of the 252Cf neutron source is an extremely important method in nuclear stochastic signal processing. Focusing on the special '0' and '1' structure of the neutron pulse series, this paper proposes a fast-correlation algorithm to improve the computational rate of the spectrum analysis system. Multi-core processor technology and the multi-threaded programming techniques of LabVIEW are employed to construct a frequency spectrum analysis system for the 252Cf neutron source based on LabVIEW. It obtains not only the auto-correlation and cross-correlation results, but also the auto-power spectrum, cross-power spectrum and the ratio of spectral densities. The results show that the LabVIEW-based analysis tools improve the operating efficiency of the fast auto-correlation and cross-correlation code by about 25% to 35%, and verify the feasibility of using LabVIEW for spectrum analysis. (authors)
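    The quantities named above can be sketched in a few lines of NumPy; the pulse trains, the sparse-event correlation trick and the FFT-based spectra below are illustrative stand-ins for the LabVIEW implementation, not its actual code.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4096
# two detector pulse trains, mostly zeros with occasional ones (the '0'/'1' structure)
a = (rng.random(n) < 0.02).astype(float)
b = np.roll(a, 3) * (rng.random(n) < 0.8)     # correlated channel with an assumed 3-sample lag

# exploit sparsity: correlate using only the indices of the '1' events
idx_a = np.flatnonzero(a)
idx_b = np.flatnonzero(b)
lags = (idx_b[None, :] - idx_a[:, None]).ravel()
max_lag = 64
cross = np.bincount(lags[(lags >= 0) & (lags < max_lag)], minlength=max_lag)

# auto- and cross-power spectra via FFT of the full trains (mirrors the quantities listed above)
A, B = np.fft.rfft(a - a.mean()), np.fft.rfft(b - b.mean())
auto_power = np.abs(A) ** 2
cross_power = A * np.conj(B)
ratio = np.abs(cross_power) / (auto_power + 1e-12)    # ratio of spectral densities

print("peak cross-correlation lag:", int(np.argmax(cross)))   # expected near the 3-sample offset
```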

  13. An imaging method of wavefront coding system based on phase plate rotation

    Science.gov (United States)

    Yi, Rigui; Chen, Xi; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2018-01-01

    Wave-front coding has great prospects for extending the depth of field of an optical imaging system and reducing optical aberrations, but image quality and noise performance are inevitably reduced. Based on the theoretical analysis of the wave-front coding system and the phase function expression of the cubic phase plate, this paper analyzes and exploits the fact that the phase function expression is invariant in the new coordinate system when the phase plate rotates by different angles around the z-axis, and proposes a method based on rotation of the phase plate and image fusion. First, the phase plate is rotated by a certain angle around the z-axis; the shape and distribution of the PSF obtained at the image plane remain unchanged, while its orientation rotates by the same angle and in the same direction as the phase plate. Then, each intermediate blurred image is restored by filtering with the correspondingly rotated point spread function. Finally, the restored images are fused using the Laplacian pyramid image fusion method and the Fourier transform spectrum fusion method, and the results are evaluated subjectively and objectively. The images were simulated in Matlab. Using the Laplacian pyramid image fusion method, the signal-to-noise ratio of the image is increased by 19%-27%, the clarity is increased by 11%-15%, and the average gradient is increased by 4%-9%. Using the Fourier transform spectrum fusion method, the signal-to-noise ratio of the image is increased by 14%-23%, the clarity is increased by 6%-11%, and the average gradient is improved by 2%-6%. The experimental results show that processing images with the above method improves the quality and clarity of the restored image and effectively preserves the image information.

  14. QR code optical encryption using spatially incoherent illumination

    Science.gov (United States)

    Cheremkhin, P. A.; Krasnov, V. V.; Rodin, V. G.; Starikov, R. S.

    2017-02-01

    Optical encryption is an actively developing field of science. The majority of encryption techniques use coherent illumination and suffer from speckle noise, which severely limits their applicability. The spatially incoherent encryption technique does not have this drawback, but its effectiveness is dependent on the Fourier spectrum properties of the image to be encrypted. The application of a quick response (QR) code in the capacity of a data container solves this problem, and the embedded error correction code also enables errorless decryption. The optical encryption of digital information in the form of QR codes using spatially incoherent illumination was implemented experimentally. The encryption is based on the optical convolution of the image to be encrypted with the kinoform point spread function, which serves as an encryption key. Two liquid crystal spatial light modulators were used in the experimental setup for the QR code and the kinoform imaging, respectively. The quality of the encryption and decryption was analyzed in relation to the QR code size. Decryption was conducted digitally. The successful decryption of encrypted QR codes of up to 129  ×  129 pixels was demonstrated. A comparison with the coherent QR code encryption technique showed that the proposed technique has a signal-to-noise ratio that is at least two times higher.
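    A purely digital analogue of the scheme is easy to write down: treat the kinoform's intensity PSF as the key, encrypt by convolving a binary pattern with it, and decrypt with a regularized inverse filter. The random pattern and PSF below are stand-ins, the convolution is circular for brevity, and no optical noise is modelled.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 64
qr = (rng.random((n, n)) > 0.5).astype(float)       # stand-in binary pattern (a real QR code would go here)
psf = rng.random((n, n)); psf /= psf.sum()          # random non-negative intensity PSF acting as the key

# spatially incoherent imaging adds intensities, so encryption is a convolution with the key PSF
QR, PSF = np.fft.fft2(qr), np.fft.fft2(psf)
encrypted = np.real(np.fft.ifft2(QR * PSF))

# digital decryption: regularized inverse filter using the known key
eps = 1e-6
decrypted = np.real(np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(PSF) / (np.abs(PSF) ** 2 + eps)))

errors = np.mean((decrypted > 0.5) != (qr > 0.5))
print("fraction of wrong cells after decryption:", errors)
```

    In the noiseless sketch the inverse filter recovers the pattern almost exactly; the error-correction capability of a real QR code is what absorbs the residual errors in the actual optical experiment.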

  15. A Predictive Coding Account of Psychotic Symptoms in Autism Spectrum Disorder

    Science.gov (United States)

    van Schalkwyk, Gerrit I.; Volkmar, Fred R.; Corlett, Philip R.

    2017-01-01

    The co-occurrence of psychotic and autism spectrum disorder (ASD) symptoms represents an important clinical challenge. Here we consider this problem in the context of a computational psychiatry approach that has been applied to both conditions--predictive coding. Some symptoms of schizophrenia have been explained in terms of a failure of top-down…

  16. The VULCANO spreading programme

    Energy Technology Data Exchange (ETDEWEB)

    Cognet, G.; Laffont, G.; Jegou, C.; Journeau, C.; Sudreau, F.; Pierre, J.; Ramacciotti, M. [CEA (Atomic Energy Commission), DRN/DER - Bat. 212, CEA Cadarache, 13108 St. Paul Lez Durance (France)

    1999-07-01

    Among the currently studied core-catcher projects, some of them suppose corium spreading before cooling, in particular the EPR (European Pressurized Reactor) core-catcher concept is based on mixing the corium with a special concrete, spreading the molten mixture on a large multi-layer surface cooled from the bottom and subsequently cooling by flooding with water. Therefore, melt spreading deserves intensive investigation in order to determine and quantify key phenomena which govern the stopping of spreading. In France, for some years, the Nuclear Reactor Division of the Atomic Energy Commission (CEA/DRN) has undertaken a large program to improve knowledge on corium behaviour and coolability. This program is based on experimental and theoretical investigations which are finally gathered in scenario and mechanistic computer codes. In this framework, the real material experimental programme, VULCANO, conducted within an European frame, is currently devoted to the study of corium spreading. In 1997 and 1998, several tests have been performed on dry corium spreading with various composition of melts. Although all the observed phenomena, in particular the differences between simulant and real material melts have not been yet totally explained, these tests have already provided a lot of information about: The behaviour of complex mixtures including refractory oxides, silica, iron oxides and in one case iron metal; Spreading progression, which was never stopped in any of these tests by a crust formation at the front; The structure of spread melts (porosity, crusts,...); Physico-chemical interaction between melt and the refractory substratum which was composed of zirconia bricks. (authors)

  17. The VULCANO spreading programme

    International Nuclear Information System (INIS)

    Cognet, G.; Laffont, G.; Jegou, C.; Journeau, C.; Sudreau, F.; Pierre, J.; Ramacciotti, M.

    1999-01-01

    Among the currently studied core-catcher projects, some of them suppose corium spreading before cooling, in particular the EPR (European Pressurized Reactor) core-catcher concept is based on mixing the corium with a special concrete, spreading the molten mixture on a large multi-layer surface cooled from the bottom and subsequently cooling by flooding with water. Therefore, melt spreading deserves intensive investigation in order to determine and quantify key phenomena which govern the stopping of spreading. In France, for some years, the Nuclear Reactor Division of the Atomic Energy Commission (CEA/DRN) has undertaken a large program to improve knowledge on corium behaviour and coolability. This program is based on experimental and theoretical investigations which are finally gathered in scenario and mechanistic computer codes. In this framework, the real material experimental programme, VULCANO, conducted within an European frame, is currently devoted to the study of corium spreading. In 1997 and 1998, several tests have been performed on dry corium spreading with various composition of melts. Although all the observed phenomena, in particular the differences between simulant and real material melts have not been yet totally explained, these tests have already provided a lot of information about: The behaviour of complex mixtures including refractory oxides, silica, iron oxides and in one case iron metal; Spreading progression, which was never stopped in any of these tests by a crust formation at the front; The structure of spread melts (porosity, crusts,...); Physico-chemical interaction between melt and the refractory substratum which was composed of zirconia bricks. (authors)

  18. Multiple optical code-label processing using multi-wavelength frequency comb generator and multi-port optical spectrum synthesizer.

    Science.gov (United States)

    Moritsuka, Fumi; Wada, Naoya; Sakamoto, Takahide; Kawanishi, Tetsuya; Komai, Yuki; Anzai, Shimako; Izutsu, Masayuki; Kodate, Kashiko

    2007-06-11

    In optical packet switching (OPS) and optical code division multiple access (OCDMA) systems, label generation and processing are key technologies. Recently, several label processors have been proposed and demonstrated. However, in order to recognize N different labels, N separate devices are required. Here, we propose and experimentally demonstrate a large-scale, multiple optical code (OC)-label generation and processing technology based on multi-port, a fully tunable optical spectrum synthesizer (OSS) and a multi-wavelength electro-optic frequency comb generator. The OSS can generate 80 different OC-labels simultaneously and can perform 80-parallel matched filtering. We also demonstrated its application to OCDMA.

  19. Combating QR-Code-Based Compromised Accounts in Mobile Social Networks.

    Science.gov (United States)

    Guo, Dong; Cao, Jian; Wang, Xiaoqi; Fu, Qiang; Li, Qiang

    2016-09-20

    Cyber Physical Social Sensing makes mobile social networks (MSNs) popular with users. However, attacks are rampant in which malicious URLs are spread covertly through quick response (QR) codes to control compromised accounts in MSNs and propagate malicious messages. Currently, there are generally two types of methods to identify compromised accounts in MSNs: one is to analyze the potential threats on wireless access points and on handheld devices' operating systems so as to stop compromised accounts from spreading malicious messages; the other is to apply methods for detecting compromised accounts in online social networks to MSNs. These types of methods focus neither on the problems of MSNs themselves nor on the interaction of sensors' messages, which leads to platform restrictiveness and oversimplified methods. In order to stop the spreading of compromised accounts in MSNs effectively, the attacks have to be traced to their sources first. Through sensors, users exchange information in MSNs and acquire information by scanning QR codes. Therefore, analyzing the traces of sensor-related information helps to identify the compromised accounts in MSNs. This paper analyzes the diversity of the information-sending modes of compromised accounts and normal accounts, analyzes the regularity of GPS (Global Positioning System)-based location information, and introduces the concepts of entropy and conditional entropy so as to construct an entropy-based model based on machine learning strategies. To achieve this goal, about 500,000 accounts of Sina Weibo and about 100 million corresponding messages were collected. Through validation, the accuracy rate of the model is shown to be as high as 87.6%, with a false positive rate of only 3.7%. Meanwhile, comparative experiments on the feature sets prove that sensor-based location information can be applied to detect compromised accounts in MSNs.
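    The entropy features at the heart of the model are simple to compute. The sketch below evaluates the entropy of posting times and the conditional entropy of location given posting time for two invented behaviour traces; the features, the traces and the implied decision rule are illustrative assumptions, not the paper's trained model.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a sequence of discrete observations."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def conditional_entropy(xs, ys):
    """H(Y|X) for paired discrete observations, e.g. location cell given hour of day."""
    total = len(xs)
    h = 0.0
    for x in set(xs):
        ys_given_x = [y for xi, y in zip(xs, ys) if xi == x]
        h += len(ys_given_x) / total * entropy(ys_given_x)
    return h

# toy behaviour traces: (hour of posting, coarse GPS cell) per message
normal_user = [(8, "home"), (9, "office"), (12, "office"), (19, "home"), (22, "home")]
bot_like    = [(3, "cellA"), (3, "cellB"), (3, "cellC"), (3, "cellD"), (3, "cellE")]

for name, trace in [("normal", normal_user), ("bot-like", bot_like)]:
    hours, cells = zip(*trace)
    print(name,
          "H(hour)=", round(entropy(hours), 2),
          "H(cell|hour)=", round(conditional_entropy(hours, cells), 2))
```

    An account that posts from many locations at the same hour shows high conditional entropy of location given time, which is the kind of separation an entropy-based feature set can expose to the downstream classifier.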

  20. Combating QR-Code-Based Compromised Accounts in Mobile Social Networks

    Directory of Open Access Journals (Sweden)

    Dong Guo

    2016-09-01

    Full Text Available Cyber Physical Social Sensing makes mobile social networks (MSNs) popular with users. However, attacks are rampant in which malicious URLs are spread covertly through quick response (QR) codes to control compromised accounts in MSNs and propagate malicious messages. Currently, there are generally two types of methods to identify compromised accounts in MSNs: one is to analyze the potential threats on wireless access points and on handheld devices' operating systems so as to stop compromised accounts from spreading malicious messages; the other is to apply methods for detecting compromised accounts in online social networks to MSNs. These types of methods focus neither on the problems of MSNs themselves nor on the interaction of sensors' messages, which leads to platform restrictiveness and oversimplified methods. In order to stop the spreading of compromised accounts in MSNs effectively, the attacks have to be traced to their sources first. Through sensors, users exchange information in MSNs and acquire information by scanning QR codes. Therefore, analyzing the traces of sensor-related information helps to identify the compromised accounts in MSNs. This paper analyzes the diversity of the information-sending modes of compromised accounts and normal accounts, analyzes the regularity of GPS (Global Positioning System)-based location information, and introduces the concepts of entropy and conditional entropy so as to construct an entropy-based model based on machine learning strategies. To achieve this goal, about 500,000 accounts of Sina Weibo and about 100 million corresponding messages were collected. Through validation, the accuracy rate of the model is shown to be as high as 87.6%, with a false positive rate of only 3.7%. Meanwhile, comparative experiments on the feature sets prove that sensor-based location information can be applied to detect compromised accounts in MSNs.

  1. Effect of beat noise on the performance of two-dimensional time-spreading/wavelength-hopping optical code-division multiple-access systems

    Science.gov (United States)

    Bazan, T.; Harle, D.; Andonovic, I.; Meenakshi, M.

    2005-03-01

    The effect of beat noise on optical code-division multiple-access (OCDMA) systems using a range of two-dimensional (2-D) time-spreading/wavelength-hopping (TW) code families is presented. A derivation of a general formula for the error probability of the system is given. The properties of the 2-D codes--namely, the structure, length, and cross-correlation characteristics--are found to have a great influence on system performance. Improved performance can be obtained by use of real-time dynamic thresholding.

  2. Topology-selective jamming of fully-connected, code-division random-access networks

    Science.gov (United States)

    Polydoros, Andreas; Cheng, Unjeng

    1990-01-01

    The purpose is to introduce certain models of topology selective stochastic jamming and examine its impact on a class of fully-connected, spread-spectrum, slotted ALOHA-type random access networks. The theory covers dedicated as well as half-duplex units. The dominant role of the spatial duty factor is established, and connections with the dual concept of time selective jamming are discussed. The optimal choices of coding rate and link access parameters (from the users' side) and the jamming spatial fraction are numerically established for DS and FH spreading.

  3. MIMO Based Eigen-Space Spreading

    National Research Council Canada - National Science Library

    Eltawil, Ahmed

    2004-01-01

    .... Combination of this powerful technique with orthogonal frequency division multiplexing (OFDM) based modulation and traditional time and frequency spreading techniques results in a highly secure mode of communications...

  4. Flexible digital signal processing architecture for narrowband and spread-spectrum lock-in detection in multiphoton microscopy and time-resolved spectroscopy.

    Science.gov (United States)

    Wilson, Jesse W; Park, Jong Kang; Warren, Warren S; Fischer, Martin C

    2015-03-01

    The lock-in amplifier is a critical component in many different types of experiments, because of its ability to reduce spurious or environmental noise components by restricting detection to a single frequency and phase. One example application is pump-probe microscopy, a multiphoton technique that leverages excited-state dynamics for imaging contrast. With this application in mind, we present here the design and implementation of a high-speed lock-in amplifier on the field-programmable gate array (FPGA) coprocessor of a data acquisition board. The most important advantage is the inherent ability to filter signals based on more complex modulation patterns. As an example, we use the flexibility of the FPGA approach to enable a novel pump-probe detection scheme based on spread-spectrum communications techniques.
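    For orientation, the following is a minimal software model of what a lock-in detector does, first with a single-tone reference and then with a pseudo-random spreading pattern of the kind a flexible digital filter can lock to. The sample rate, modulation frequency, noise level and chip pattern are all invented numbers; this is not the FPGA implementation described in the paper.

```python
import numpy as np

fs, f_mod, N = 1_000_000, 10_000, 200_000     # sample rate (Hz), modulation frequency (Hz), samples (assumed)
t = np.arange(N) / fs
rng = np.random.default_rng(7)

amp = 1e-2                                    # weak modulated signal buried in noise
detector = amp * np.sin(2 * np.pi * f_mod * t) + 0.2 * rng.standard_normal(N)

# narrow-band lock-in: mix with in-phase/quadrature references, then average (ideal low-pass)
ref_i = np.sin(2 * np.pi * f_mod * t)
ref_q = np.cos(2 * np.pi * f_mod * t)
X, Y = 2 * np.mean(detector * ref_i), 2 * np.mean(detector * ref_q)
print("single-tone lock-in estimate:", round(np.hypot(X, Y), 4), "(true amplitude 0.01)")

# spread-spectrum variant: the reference is a pseudo-random chip pattern instead of a tone
chips = rng.choice([-1.0, 1.0], size=N)
detector_ss = amp * chips + 0.2 * rng.standard_normal(N)
print("spread-spectrum lock-in estimate:", round(np.mean(detector_ss * chips), 4))
```

    Restricting detection to whatever pattern the reference encodes is exactly what lets the filter reject noise components that do not share that modulation.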

  5. Design of a Multi-Spectrum CANDU-based Reactor, MSCR, with 37-element fuel bundles using SERPENT code

    International Nuclear Information System (INIS)

    Hussein, M.S.; Bonin, H.W.; Lewis, B.J.; Chan, P.

    2015-01-01

    The burning of highly enriched uranium and plutonium from dismantled nuclear warhead material in new nuclear power plant designs represents an important step towards nonproliferation. Blending this highly enriched uranium and plutonium with uranium dioxide from the spent fuel of CANDU reactors, or mixing it with depleted uranium, would take a very long time to dispose of this material. Consequently, considering that more efficient transmutation of actinides occurs in fast neutron reactors, a novel Multi-Spectrum CANDU Reactor has been designed on the basis of the CANDU6 reactor, with two concentric regions. The simulations of the MSCR were carried out using the SERPENT code. The inner, fast neutron spectrum core is fuelled with different levels of enriched uranium oxides, and helium is used as the coolant in this core. The outer, thermal neutron spectrum core is fuelled with natural uranium, with heavy water as both moderator and coolant. Both cores use 37-element fuel bundles. The size of the two cores and the enrichment level of the fresh fuel in the fast core were optimized according to the criticality safety of the whole reactor. The excess reactivity, the regeneration factor, and the radial and axial flux shapes of the MSCR were calculated for different concentrations of the fissile isotope 235U in the uranium fuel of the fast neutron spectrum core. The effect of the variation of the fissile isotope concentration on the fluxes in both cores in each energy bin has been studied. (author)

  6. Design of a Multi-Spectrum CANDU-based Reactor, MSCR, with 37-element fuel bundles using SERPENT code

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, M.S.; Bonin, H.W.; Lewis, B.J.; Chan, P., E-mail: mohamed.hussein@rmc.ca, E-mail: bonin-h@rmc.ca, E-mail: lewis-b@rmc.ca, E-mail: Paul.Chan@rmc.ca [Royal Military College of Canada, Dept. of Chemistry and Chemical Engineering, Kingston, ON (Canada)

    2015-07-01

    The burning of highly enriched uranium and plutonium from dismantled nuclear warhead material in new nuclear power plant designs represents an important step towards nonproliferation. Blending this highly enriched uranium and plutonium with uranium dioxide from the spent fuel of CANDU reactors, or mixing it with depleted uranium, would take a very long time to dispose of this material. Consequently, considering that more efficient transmutation of actinides occurs in fast neutron reactors, a novel Multi-Spectrum CANDU Reactor has been designed on the basis of the CANDU6 reactor, with two concentric regions. The simulations of the MSCR were carried out using the SERPENT code. The inner, fast neutron spectrum core is fuelled with different levels of enriched uranium oxides, and helium is used as the coolant in this core. The outer, thermal neutron spectrum core is fuelled with natural uranium, with heavy water as both moderator and coolant. Both cores use 37-element fuel bundles. The size of the two cores and the enrichment level of the fresh fuel in the fast core were optimized according to the criticality safety of the whole reactor. The excess reactivity, the regeneration factor, and the radial and axial flux shapes of the MSCR were calculated for different concentrations of the fissile isotope 235U in the uranium fuel of the fast neutron spectrum core. The effect of the variation of the fissile isotope concentration on the fluxes in both cores in each energy bin has been studied. (author)

  7. Frequency spectrum might act as communication code between retina and visual cortex I.

    Science.gov (United States)

    Yang, Xu; Gong, Bo; Lu, Jian-Wei

    2015-01-01

    To explore changes in, and the possible communication relationship between, local potential signals recorded simultaneously from the retina and visual cortex I (V1), fourteen C57BL/6J mice were measured with pattern electroretinogram (PERG) and pattern visually evoked potential (PVEP), and fast Fourier transform was used to analyze the frequency components of those signals. The amplitudes of PERG and PVEP were about 36.7 µV and 112.5 µV respectively; their dominant frequencies, however, stayed unchanged, and neither signal showed second or higher harmonic generation. The results suggested that the retina encodes visual information in the form of a frequency spectrum and then transfers it to the primary visual cortex, which accepts and deciphers the coded input visual information. The frequency spectrum may act as a communication code between the retina and V1.

  8. Digital Watermarks Using Discrete Wavelet Transformation and Spectrum Spreading

    Directory of Open Access Journals (Sweden)

    Ryousuke Takai

    2003-12-01

    Full Text Available In recent years, digital media has made rapid progress through the development of digital technology. Digital media normally assures fairly high quality; nevertheless, it can be easily reproduced in a perfect form. This perfect reproducibility is an advantage from a certain point of view, while it also produces an essential disadvantage, since digital media is frequently copied illegally. Thus the problem of copyright protection becomes a very important issue. A solution to this problem is to embed digital watermarks that are not clearly perceived by ordinary viewers but represent the ownership rights of the original product. In our method, the image data are transformed to the frequency domain by the Discrete Wavelet Transform and analyzed by the multi-resolution approximation [1]. Further, spectrum spreading is executed by using PN-sequences. Choi and Aizawa [7] embed watermarks by using block correlation of DCT coefficients; thus, we apply the Discrete Cosine Transformation, abbreviated to DCT, instead of the Fourier transform in order to embed watermarks. If the value of this variance is high, we decide that the block has a larger magnitude of visual fluctuation; henceforth, we may embed stronger watermarks, which gives resistance to image processing such as attacks and/or compression.
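    As a concrete, if simplified, picture of spread-spectrum embedding in a wavelet domain, the sketch below runs a single-level Haar DWT, adds a keyed PN sequence (scaled by one watermark bit) to a detail subband, and detects the bit later by correlation. The Haar transform, the choice of subband, the PN key and the embedding strength are all assumptions of the sketch, not the block-variance-adaptive DCT scheme the abstract describes.

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar DWT (orthonormal); returns LL, LH, HL, HH subbands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return (a + b + c + d) / 2, (a - b + c - d) / 2, (a + b - c - d) / 2, (a - b - c + d) / 2

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = (LL + LH + HL + HH) / 2
    img[0::2, 1::2] = (LL - LH + HL - HH) / 2
    img[1::2, 0::2] = (LL + LH - HL - HH) / 2
    img[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return img

rng = np.random.default_rng(8)
host = rng.random((64, 64)) * 255              # stand-in host image (a real image would go here)
LL, LH, HL, HH = haar2d(host)

# spread-spectrum embedding: add a keyed PN sequence, scaled by one watermark bit, to a detail subband
pn = rng.choice([-1.0, 1.0], size=HH.shape)    # PN sequence generated from a secret key (assumed)
bit, alpha = 1, 10.0                           # bit and strength (large here because the stand-in host is pure noise)
marked = ihaar2d(LL, LH, HL, HH + alpha * bit * pn)

# blind detection: correlate the HH subband of the received image with the same PN sequence
_, _, _, HH_rx = haar2d(marked)
stat = np.sum(HH_rx * pn) / pn.size
print("detected bit:", 1 if stat > 0 else -1, " correlation statistic:", round(stat, 3))
```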

  9. An Auto sequence Code to Integrate a Neutron Unfolding Code with thePC-MCA Accuspec

    International Nuclear Information System (INIS)

    Darsono

    2000-01-01

    In neutron spectrometry using the proton recoil method, a neutron unfolding code is needed to unfold the measured proton spectrum into the neutron spectrum. In the existing neutron spectrometry system, which was successfully installed last year, the unfolding process was done separately. This manuscript reports that an auto sequence code to integrate the neutron unfolding code UNFSPEC.EXE with the software facility of the PC-MCA Accuspec has been made and run successfully, so that the new neutron spectrometry system becomes compact. The auto sequence code was written based on the rules of the application program facility of the PC-MCA Accuspec and then compiled using AC-EXE. Tests of the auto sequence code showed that binning widths of 20, 30, and 40 give slightly different spectrum shapes. A binning width of around 30 gives a better spectrum in the sense of a smaller error compared to the others. (author)

  10. Burnup code for fuel assembly by Monte Carlo code. MKENO-BURN

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Suyama, Kenya; Masukawa, Fumihiro; Matsumoto, Kiyoshi; Kurosawa, Masayoshi; Kaneko, Toshiyuki.

    1996-12-01

    The evaluation of the neutron spectrum is very important for burnup calculations of heterogeneous geometries such as recent BWR fuel assemblies. MKENO-BURN is a multi-dimensional burnup code based on the three-dimensional Monte Carlo neutron transport code 'MULTI-KENO' and the burnup calculation routine of the one-dimensional burnup code 'UNITBURN'. MKENO-BURN analyzes the burnup of arbitrary regions after evaluating the neutron spectrum and generating one-group cross sections in three-dimensional geometry with MULTI-KENO. It enables three-dimensional burnup calculations. This report consists of a general description of MKENO-BURN and the input data. (author)

  11. Burnup code for fuel assembly by Monte Carlo code. MKENO-BURN

    Energy Technology Data Exchange (ETDEWEB)

    Naito, Yoshitaka; Suyama, Kenya; Masukawa, Fumihiro; Matsumoto, Kiyoshi; Kurosawa, Masayoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kaneko, Toshiyuki

    1996-12-01

    The evaluation of the neutron spectrum is very important for burnup calculations of heterogeneous geometries like recent BWR fuel assemblies. MKENO-BURN is a multi-dimensional burnup code based on the three-dimensional Monte Carlo neutron transport code `MULTI-KENO` and the burnup routine of the one-dimensional burnup code `UNITBURN`. MKENO-BURN analyzes the burnup problem of arbitrary regions after evaluating the neutron spectrum and generating one-group cross sections in three-dimensional geometry with MULTI-KENO. It enables us to perform three-dimensional burnup calculations. This report consists of a general description of MKENO-BURN and its input data. (author)

  12. Spectrum estimation method based on marginal spectrum

    International Nuclear Information System (INIS)

    Cai Jianhua; Hu Weiwen; Wang Xianchun

    2011-01-01

    The FFT method cannot meet the basic requirements of power spectrum estimation for non-stationary and short signals. A new spectrum estimation method based on the marginal spectrum from the Hilbert-Huang transform (HHT) was proposed. The procedure for obtaining the marginal spectrum in the HHT method is given and the linear property of the marginal spectrum is demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of the marginal spectrum are further analyzed. The Hilbert spectrum estimation algorithm is then discussed in detail, and simulation results are given. Theory and simulation show that, for short and non-stationary signals, the frequency resolution and estimation precision of the HHT method are better than those of the FFT method. (authors)
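
    As a rough illustration of the idea, the sketch below compares an FFT magnitude spectrum with a Hilbert marginal spectrum for a short non-stationary signal. It assumes a mono-component signal so the EMD step of the full HHT can be skipped; the bin width and test chirp are arbitrary choices.

```python
# Minimal sketch of a Hilbert marginal spectrum vs. an FFT spectrum, assuming a
# mono-component signal (the EMD decomposition of the full HHT is skipped).
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 0.5, 1 / fs)                      # short, non-stationary record
x = np.sin(2 * np.pi * (50 * t + 40 * t ** 2))     # chirp sweeping 50 -> 90 Hz

analytic = hilbert(x)
amplitude = np.abs(analytic)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)      # instantaneous frequency

# Marginal spectrum: accumulate instantaneous amplitude into frequency bins
bins = np.arange(0.0, 200.0, 2.0)
marginal, _ = np.histogram(inst_freq, bins=bins, weights=amplitude[:-1])

# FFT spectrum of the same short record, for comparison
fft_freq = np.fft.rfftfreq(len(x), 1 / fs)
fft_mag = np.abs(np.fft.rfft(x))

support = bins[:-1][marginal > 0.5 * marginal.max()]
print("marginal spectrum occupies roughly", support.min(), "-", support.max(), "Hz")
print("FFT peak (single bin):", fft_freq[np.argmax(fft_mag)], "Hz")
```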

  13. Frequency spectrum might act as communication code between retina and visual cortex I

    Directory of Open Access Journals (Sweden)

    Xu Yang

    2015-12-01

    Full Text Available AIM: To explore changes and the possible communication relationship of local potential signals recorded simultaneously from the retina and visual cortex I (V1). METHODS: Fourteen C57BL/6J mice were measured with pattern electroretinogram (PERG) and pattern visually evoked potential (PVEP), and the fast Fourier transform was used to analyze the frequency components of those signals. RESULTS: The amplitudes of PERG and PVEP were about 36.7 µV and 112.5 µV respectively; the dominant frequencies of PERG and PVEP stayed unchanged, and neither signal showed second or higher harmonic generation. CONCLUSION: The results suggest that the retina encodes visual information in the form of a frequency spectrum and then transfers it to the primary visual cortex. The primary visual cortex accepts and deciphers the input visual information coded by the retina. The frequency spectrum may act as a communication code between the retina and V1.

  14. Data Entry Skills in a Computer-based Spread Sheet Amongst Postgraduate Medical Students: A Simulation Based Descriptive Assessment.

    Science.gov (United States)

    Khan, Amir Maroof; Shah, Dheeraj; Chatterjee, Pranab

    2014-07-01

    In India, research work in the form of a thesis is a mandatory requirement for postgraduate (PG) medical students. Data entry in a computer-based spread sheet is one of the important basic skills for research, which has not yet been studied. This study was conducted to assess the data entry skills of the 2nd year PG medical students of a medical college of North India. A cross-sectional, descriptive study was conducted among 111 second year PG students by using four simulated filled case record forms and a computer-based spread sheet in which data entry was to be carried out. On a scale of 0-10, only 17.1% of the students scored more than seven. The specific sub-skills that were found to be lacking in more than half of the respondents were as follows: Inappropriate coding (93.7%), long variable names (51.4%), coding not being done for all the variables (76.6%), missing values entered in a non-uniform manner (84.7%) and two variables entered in the same column in the case of blood pressure reading (80.2%). PG medical students were not found to be proficient in data entry skills, and this can act as a barrier to doing research. As this is a first-of-its-kind study in India, more research is needed to understand this issue and then include this yet neglected aspect in teaching research methodology to the medical students.

  15. Novel water-based antiseptic lotion demonstrates rapid, broad-spectrum kill compared with alcohol antiseptic.

    Science.gov (United States)

    Czerwinski, Steven E; Cozean, Jesse; Cozean, Colette

    2014-01-01

    A novel alcohol-based antiseptic and a novel water-based antiseptic lotion, both with a synergistic combination of antimicrobial ingredients containing 0.2% benzethonium chloride, were evaluated using the standard time-kill method against 25 FDA-specified challenge microorganisms. The purpose of the testing was to determine whether a non-alcohol product could have equivalent rapid and broad-spectrum kill to a traditional alcohol sanitizer. Both the alcohol- and water-based products showed rapid and broad-spectrum antimicrobial activity. The average 15-s kill was 99.999% of the challenge organism for the alcohol-based antiseptic and 99.971% for the water-based antiseptic. The alcohol-based product demonstrated 100% of peak efficacy (60s) within the first 15s, whereas the water-based product showed 99.97%. The novel alcohol-based antiseptic reduced concentrations of 100% of organisms by 99.999%, whereas the water-based antiseptic lotion showed the same reduction for 96% of organisms. A novel water-based antiseptic product demonstrated equivalent rapid, broad-spectrum antimicrobial activity to an alcohol-based sanitizer and provided additional benefits of reduced irritation, persistent effect, and greater efficacy against common viruses. The combination of rapid, broad-spectrum immediate kill and persistent efficacy against pathogens may have significant clinical benefit in limiting the spread of disease. Copyright © 2014 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.

  16. SPEXTRA: Optimal extraction code for long-slit spectra in crowded fields

    Science.gov (United States)

    Sarkisyan, A. N.; Vinokurov, A. S.; Solovieva, Yu. N.; Sholukhova, O. N.; Kostenkov, A. E.; Fabrika, S. N.

    2017-10-01

    We present a code for the optimal extraction of long-slit 2D spectra in crowded stellar fields. Its main advantage and difference from existing spectrum extraction codes is the presence of a graphical user interface (GUI) and a convenient visualization system for the data and extraction parameters. On the whole, the package is designed to study stars in crowded fields of nearby galaxies and star clusters in galaxies. Apart from the spectrum extraction for several stars which are closely located or superimposed, it allows the spectra of objects to be extracted with subtraction of superimposed nebulae of different shapes and different degrees of ionization. The package can also be used to study single stars in the case of a strong background. In the current version, optimal extraction of 2D spectra with an aperture and with a Gaussian function as the PSF (point spread function) is provided. In the future, the package will be supplemented with the possibility of building a PSF based on a Moffat function. We present the details of the GUI, illustrate the main features of the package, and show results of extraction of several interesting spectra of objects from different telescopes.
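
    As background for the aperture-plus-Gaussian-PSF extraction mentioned above, the sketch below shows a generic profile-weighted (Horne-style) optimal extraction of one detector column; it is not the SPEXTRA implementation, and the trace position, PSF width, and noise model are assumed values.

```python
# Sketch of Gaussian-profile optimal extraction of one column of a long-slit
# 2-D spectrum (in the spirit of Horne 1986); not the SPEXTRA code itself.
import numpy as np

rng = np.random.default_rng(1)
ny = 41                                    # pixels across the slit
y = np.arange(ny)
center, sigma = 20.0, 2.5                  # assumed trace position and PSF width

# Normalised Gaussian spatial profile P(y)
P = np.exp(-0.5 * ((y - center) / sigma) ** 2)
P /= P.sum()

true_flux, sky = 5000.0, 30.0
data = true_flux * P + sky + rng.normal(0.0, 5.0, ny)   # one detector column
var = np.full(ny, 5.0 ** 2)                             # per-pixel variance

sky_subtracted = data - sky
# Optimal (inverse-variance, profile-weighted) estimate of the flux
flux_opt = np.sum(P * sky_subtracted / var) / np.sum(P ** 2 / var)
flux_sum = sky_subtracted.sum()                         # plain aperture sum

print(f"optimal extraction: {flux_opt:.1f}, simple sum: {flux_sum:.1f}")
```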

  17. The potential role of microbiota for controlling the spread of extended-spectrum beta-lactamase-producing Enterobacteriaceae (ESBL-PE) in the neonatal population [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Thibaud Delerue

    2017-07-01

    Full Text Available The spread of extended-spectrum beta-lactamase-producing Enterobacteriaceae (ESBL-PE) in the hospital and also in the community is worrisome. Neonates particularly are exposed to the risk of ESBL-PE acquisition and, owing to the immaturity of their immune system, to a higher secondary risk of ESBL-PE-related infection. Reducing the risk of acquisition in the hospital is usually based on a bundle of measures, including screening policies at admission, improving hand hygiene compliance, and decreasing antibiotic consumption. However, recent scientific data suggest new prevention opportunities based on microbiota modifications.

  18. Computer code MLCOSP for multiple-correlation and spectrum analysis with a hybrid computer

    International Nuclear Information System (INIS)

    Oguma, Ritsuo; Fujii, Yoshio; Usui, Hozumi; Watanabe, Koichi

    1975-10-01

    Usage of the computer code MLCOSP (Multiple Correlation and Spectrum) is described for the hybrid computer installed at JAERI. Functions of the hybrid computer and its terminal devices are used ingeniously in the code to reduce the complexity of the data handling that occurs in the analysis of multivariable experimental data and to keep the analysis in perspective. Features of the code are as follows: experimental data can be fed to the digital computer through the analog part of the hybrid computer by connecting a data recorder; the computed results are displayed as figures, and hardcopies are taken when necessary; messages from the code are shown on the terminal, so man-machine communication is possible; furthermore, data can be entered through a keyboard, so case studies based on the results of the analysis are possible. (auth.)

  19. Single integrated device for optical CDMA code processing in dual-code environment.

    Science.gov (United States)

    Huang, Yue-Kai; Glesk, Ivan; Greiner, Christoph M; Iazkov, Dmitri; Mossberg, Thomas W; Wang, Ting; Prucnal, Paul R

    2007-06-11

    We report on the design, fabrication and performance of a matching integrated optical CDMA encoder-decoder pair based on holographic Bragg reflector technology. Simultaneous encoding/decoding operation of two multiple wavelength-hopping time-spreading codes was successfully demonstrated and shown to support two error-free OCDMA links at OC-24. A double-pass scheme was employed in the devices to enable the use of longer code length.

  20. Fire-safety engineering and performance-based codes

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt

    Fire-safety Engineering is written as a textbook for Engineering students at universities and other institutions of higher education that teach in the area of fire. The book can also be used as a work of reference for consulting engineers, building product manufacturers, contractors, building project administrators, etc. The book deals with the following topics: • Historical presentation on the subject of fire • Legislation and building project administration • European fire standardization • Passive and active fire protection • Performance-based Codes • Fire-safety Engineering • Fundamental thermodynamics • Heat exchange during the fire process • Skin burns • Burning rate, energy release rate and design fires • Proposal to Risk-based design fires • Proposal to a Fire scale • Material ignition and flame spread • Fire dynamics in buildings • Combustion products and toxic gases • Smoke inhalation...

  1. Data entry skills in a computer-based spread sheet amongst postgraduate medical students: A simulation based descriptive assessment

    Directory of Open Access Journals (Sweden)

    Amir Maroof Khan

    2014-01-01

    Full Text Available Background: In India, research work in the form of a thesis is a mandatory requirement for postgraduate (PG) medical students. Data entry in a computer-based spread sheet is one of the important basic skills for research, which has not yet been studied. This study was conducted to assess the data entry skills of the 2nd year PG medical students of a medical college of North India. Materials and Methods: A cross-sectional, descriptive study was conducted among 111 second year PG students by using four simulated filled case record forms and a computer-based spread sheet in which data entry was to be carried out. Results: On a scale of 0-10, only 17.1% of the students scored more than seven. The specific sub-skills that were found to be lacking in more than half of the respondents were as follows: Inappropriate coding (93.7%), long variable names (51.4%), coding not being done for all the variables (76.6%), missing values entered in a non-uniform manner (84.7%) and two variables entered in the same column in the case of blood pressure reading (80.2%). Conclusion: PG medical students were not found to be proficient in data entry skills, and this can act as a barrier to doing research. As this is a first-of-its-kind study in India, more research is needed to understand this issue and then include this yet neglected aspect in teaching research methodology to the medical students.

  2. Calculation of the fast neutron flux spectrum in the MNSR inner irradiation site using the WIMSD4 code

    International Nuclear Information System (INIS)

    Khattab, K.

    2005-03-01

    The Miniature Neutron Source Reactor (MNSR) in Syria has five inner irradiation sites in the annular beryllium reflector, used to analyze unknown samples with the Neutron Activation Analysis technique and to produce isotopes of medium and short half-life. The fast neutron flux spectrum has special importance in MNSR reactor physics, since this spectrum is required to measure the fast neutron flux in the MNSR inner irradiation sites. Hence, the fast neutron flux spectrum in the MNSR inner irradiation site is calculated in this work using the WIMSD4 code. The energy range in WIMSD4 is divided into 69 energy groups; the first six groups represent fast neutrons ranging from 0.5 to 10 MeV. To calculate the fast neutron flux spectrum in the MNSR inner irradiation site with WIMSD4, the MNSR is modelled as a super unit cell consisting of three regions: the homogenized core, the annular beryllium, and water. The fast neutron spectrum is also calculated using the U-235 fission neutron spectrum approximation. The U-235 fission spectrum agrees very well with the WIMSD4 results when the neutron energy exceeds 1 MeV, but it fails when the neutron energy ranges from 0.5 to 1 MeV. The WIMSD4 code is used as well to calculate the microscopic fission cross sections of U-238 in six energy groups, where a unit cell of U-238 is used since U-238 is usually used to measure the fast neutron flux in the reactor. The macroscopic fission cross sections of U-238 are calculated first, and then the microscopic fission cross sections are obtained using the U-238 atomic density. (Author)
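
    For reference, the U-235 fission neutron spectrum approximation mentioned above is commonly written as a Watt spectrum; the sketch below evaluates it over the 0.5-10 MeV fast range using widely quoted nominal parameters (the exact parameter values used in the paper are not stated in the abstract).

```python
# Watt fission-spectrum approximation for U-235, evaluated over the fast range.
import numpy as np

def watt_spectrum(E, a=0.988, b=2.249):
    """Un-normalised Watt fission spectrum chi(E), E in MeV (nominal U-235 values)."""
    return np.exp(-E / a) * np.sinh(np.sqrt(b * E))

E = np.linspace(0.5, 10.0, 200)          # fast-group range used in the paper
chi = watt_spectrum(E)
chi /= np.trapz(chi, E)                  # normalise over 0.5-10 MeV

print("fraction of fast flux above 1 MeV:",
      round(float(np.trapz(chi[E >= 1.0], E[E >= 1.0])), 3))
```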

  3. Calculation of the fast neutron flux spectrum in the MNSR inner irradiation site using the WIMSD4 code

    International Nuclear Information System (INIS)

    Khattab, K.

    2006-01-01

    The Miniature Neutron Source Reactor (MNSR) in Syria has five inner irradiation sites in the annular beryllium reflector, used to analyze unknown samples with the Neutron Activation Analysis technique and to produce isotopes of medium and short half-life. The fast neutron flux spectrum has special importance in MNSR reactor physics, since this spectrum is required to measure the fast neutron flux in the MNSR inner irradiation sites. Hence, the fast neutron flux spectrum in the MNSR inner irradiation site is calculated in this work using the WIMSD4 code. The energy range in WIMSD4 is divided into 69 energy groups; the first six groups represent fast neutrons ranging from 0.5 to 10 MeV. To calculate the fast neutron flux spectrum in the MNSR inner irradiation site with WIMSD4, the MNSR is modelled as a super unit cell consisting of three regions: the homogenized core, the annular beryllium, and water. The fast neutron spectrum is also calculated using the U-235 fission neutron spectrum approximation. The U-235 fission spectrum agrees very well with the WIMSD4 results when the neutron energy exceeds 1 MeV, but it fails when the neutron energy ranges from 0.5 to 1 MeV. The WIMSD4 code is used as well to calculate the microscopic fission cross sections of U-238 in six energy groups, where a unit cell of U-238 is used since U-238 is usually used to measure the fast neutron flux in the reactor. The macroscopic fission cross sections of U-238 are calculated first, and then the microscopic fission cross sections are obtained using the U-238 atomic density. (Author)

  4. Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Faber, M.H.; Sørensen, John Dalsgaard

    2003-01-01

    The present paper addresses fundamental concepts of reliability based code calibration. First, basic principles of structural reliability theory are introduced and it is shown how the results of FORM based reliability analysis may be related to partial safety factors and characteristic values. Thereafter the code calibration problem is presented in its principal decision theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore, suggested values for acceptable annual failure probabilities are given for ultimate and serviceability limit states. Finally the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability based code calibration of LRFD based design codes.
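
    The sketch below shows, in Eurocode-style form, how a FORM result (reliability index and sensitivity factor) can be turned into a partial safety factor for a normally distributed resistance; the numbers are illustrative assumptions only and not values recommended by the paper or by CodeCal.

```python
# Illustrative link between a FORM result (beta, alpha_R) and a partial safety
# factor for a normally distributed resistance; numbers are examples only.
from scipy.stats import norm

beta = 3.8          # example target reliability index
alpha_R = 0.8       # example FORM sensitivity factor for the resistance
V_R = 0.15          # coefficient of variation of the resistance

k_char = norm.ppf(0.95)                 # 5% fractile defines the characteristic value
x_char = 1.0 - k_char * V_R             # characteristic value / mean
x_design = 1.0 - alpha_R * beta * V_R   # design value / mean from FORM

gamma_M = x_char / x_design             # partial safety factor on the resistance
print(f"gamma_M = {gamma_M:.2f}")
```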

  5. A New Prime Code for Synchronous Optical Code Division Multiple-Access Networks

    Directory of Open Access Journals (Sweden)

    Huda Saleh Abbas

    2018-01-01

    Full Text Available A new spreading code based on a prime code for synchronous optical code-division multiple-access networks that can be used in monitoring applications has been proposed. The new code is referred to as “extended grouped new modified prime code.” This new code has the ability to support more terminal devices than other prime codes. In addition, it patches subsequences with “0s” leading to lower power consumption. The proposed code has an improved cross-correlation resulting in enhanced BER performance. The code construction and parameters are provided. The operating performance, using incoherent on-off keying modulation and incoherent pulse position modulation systems, has been analyzed. The performance of the code was compared with other prime codes. The results demonstrate an improved performance, and a BER floor of 10−9 was achieved.
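
    For background, the sketch below constructs the basic prime code that constructions of this kind build on and checks its in-phase cross-correlation; the "extended grouped new modified prime code" itself is not reproduced here.

```python
# Background sketch: the classical prime code for synchronous OCDMA.
import numpy as np

def prime_code(p):
    """Return the p basic prime-code words of length p*p for a prime p."""
    codes = np.zeros((p, p * p), dtype=int)
    for i in range(p):
        for j in range(p):
            codes[i, j * p + (i * j) % p] = 1   # one pulse per time group
    return codes

p = 5
C = prime_code(p)
# In-phase (synchronous) cross-correlation between codewords
xc = np.array([[np.dot(C[a], C[b]) for b in range(p)] for a in range(p)])
print("code weights:", C.sum(axis=1))            # each word has weight p
print("max off-diagonal cross-correlation:", xc[~np.eye(p, dtype=bool)].max())
```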

  6. Optimal pseudorandom sequence selection for online c-VEP based BCI control applications

    DEFF Research Database (Denmark)

    Isaksen, Jonas L.; Mohebbi, Ali; Puthusserypady, Sadasivan

    2017-01-01

    ... is a laborious process. Aims: This study aimed to suggest an efficient method for choosing the optimal stimulus sequence based on a fast test and simple measures to increase the performance and minimize the time consumption for research trials. Methods: A total of 21 healthy subjects were included in an online wheelchair control task and completed the same task using stimuli based on the m-code, the gold-code, and the Barker-code. Correct/incorrect identification and time consumption were obtained for each identification. Subject-specific templates were characterized and used in a forward-step first-order model to predict the chance of completion and accuracy score. Results: No specific pseudorandom sequence showed superior accuracy on the group basis. When isolating the individual performances with the highest accuracy, time consumption per identification was not significantly increased. The Accuracy Score aids...

  7. Two-Dimensional Optical CDMA System Parameters Limitations for Wavelength Hopping/Time-Spreading Scheme based on Simulation Experiment

    Science.gov (United States)

    Kandouci, Chahinaz; Djebbari, Ali

    2018-04-01

    A new family of two-dimensional optical hybrid codes, which employs zero cross-correlation (ZCC) codes constructed by the balanced incomplete block design (BIBD) as both the time-spreading and the wavelength-hopping patterns, is used in this paper. The obtained codes have off-peak autocorrelation and cross-correlation values equal to zero and unity, respectively. The work in this paper is a computer experiment, performed using the Optisystem 9.0 software as a simulator, to determine the performance limitations of a wavelength hopping/time spreading (WH/TS) OCDMA system. The system parameters considered in this work are the optical fiber length (transmission distance), the bit rate, the chip spacing and the transmitted power. The paper shows over what range of these parameters the system maintains sufficient performance (BER ≤ 10-9, Q ≥ 6).

  8. Optical code-division multiple-access networks

    Science.gov (United States)

    Andonovic, Ivan; Huang, Wei

    1999-04-01

    This review details the approaches adopted to implement classical code division multiple access (CDMA) principles directly in the optical domain, resulting in all optical derivatives of electronic systems. There are a number of ways of realizing all-optical CDMA systems, classified as incoherent and coherent based on spreading in the time and frequency dimensions. The review covers the basic principles of optical CDMA (OCDMA), the nature of the codes used in these approaches and the resultant limitations on system performance with respect to the number of stations (code cardinality), the number of simultaneous users (correlation characteristics of the families of codes), concluding with consideration of network implementation issues. The latest developments will be presented with respect to the integration of conventional time spread codes, used in the bulk of the demonstrations of these networks to date, with wavelength division concepts, commonplace in optical networking. Similarly, implementations based on coherent correlation with the aid of a local oscillator will be detailed and comparisons between approaches will be drawn. Conclusions regarding the viability of these approaches allowing the goal of a large, asynchronous high capacity optical network to be realized will be made.

  9. Optimal, Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2002-01-01

    Reliability based code calibration is considered in this paper. It is described how the results of FORM based reliability analysis may be related to the partial safety factors and characteristic values. The code calibration problem is presented in a decision theoretical form, and it is discussed how acceptable levels of failure probability may be established. The practical implementation of reliability based code calibration of LRFD based design codes is also addressed.

  10. A STUDY ON DETERMINING THE REFERENCE SPREADING SEQUENCES FOR A DS/CDMA COMMUNICATION SYSTEM

    Directory of Open Access Journals (Sweden)

    Cebrail ÇİFTLİKLİ

    2002-02-01

    Full Text Available In a direct sequence/code division multiple access (DS/CDMA) system, the role of the spreading sequences (codes) is crucial, since multiple access interference (MAI) is the main performance limitation. In this study, we propose an accurate criterion which enables the determination of the reference spreading codes that yield lower bit error rates (BERs) in a given code set, for a DS/CDMA system using despreading sequences weighted by stepping chip waveforms. The numerical results show that the spreading codes determined by the proposed criterion are the most suitable codes to use as references.
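
    As a simplified illustration of ranking codes within a set, the sketch below scores each candidate by the mean-square periodic cross-correlation it sees from the other codes; this plain MAI metric is a stand-in and not the paper's weighted stepping-chip-waveform criterion, and the random ±1 codes are placeholders for a real code set.

```python
# Simplified illustration: rank candidate spreading codes by the multiple-access
# interference (MAI) they see from the rest of the set.
import numpy as np

rng = np.random.default_rng(7)
K, N = 8, 63                                   # 8 candidate codes of length 63
codes = rng.choice([-1.0, 1.0], size=(K, N))   # stand-in for a real code set

def mai_metric(codes, k):
    """Mean-square periodic cross-correlation of code k with all other codes."""
    total = 0.0
    for m in range(codes.shape[0]):
        if m == k:
            continue
        for shift in range(codes.shape[1]):
            total += np.dot(codes[k], np.roll(codes[m], shift)) ** 2
    return total / ((codes.shape[0] - 1) * codes.shape[1])

scores = np.array([mai_metric(codes, k) for k in range(K)])
print("best reference code index:", int(np.argmin(scores)))
```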

  11. The spreading of radiolabelled fatty suppository bases in the human rectum

    International Nuclear Information System (INIS)

    Sugito, Keiko; Ogata, Hiroyasu; Noguchi, Masahiro; Kogure, Takahashi; Takano, Masaaki; Maruyama, Yuzo; Sasaki, Yasuhito

    1988-01-01

    The purpose of this study was to develop a radiolabelling method for assessing the spreading of fatty suppository bases (Witepsol H-5, W-35 and S-55), and to apply this technique to the evaluation of suppository disposition in the human rectum. 99mTc was bound chemically to the bases Witepsol H-5 and W-35, and mixed physically with Witepsol S-55. The spreading of each suppository base was monitored by gamma-scintigraphy following rectal administration. The mean radioactivity remaining at the inserted region 4 h after administration was 44.2% of total activity. The mean perpendicular maximum spreading distance from this region was 7.7 cm on the scintigram, near the sigmoid colon. Defecation was suggested to be a factor influencing the spread of suppository bases. However, there was no clear relationship between the type of suppository base used and the extent of its spread within the rectum. 6 refs.; 4 figs.; 1 table

  12. Calculus of the Power Spectral Density of Ultra Wide Band Pulse Position Modulation Signals Coded with Totally Flipped Code

    Directory of Open Access Journals (Sweden)

    DURNEA, T. N.

    2009-02-01

    Full Text Available UWB-PPM systems are known to have a power spectral density (p.s.d.) consisting of a continuous portion and a line spectrum composed of energy components placed at discrete frequencies. These components are the major source of interference to narrowband systems operating in the same frequency interval and prevent harmless coexistence of UWB-PPM and narrowband systems. A new code, denoted the Totally Flipped Code (TFC), is applied to eliminate these discrete spectral components. The coded signal carries the information in the pulse positions and has its amplitudes coded to generate a continuous p.s.d. We have designed the code and calculated the power spectral density of the coded signals. The power spectrum has no discrete components and its envelope is largely flat inside the bandwidth, with a maximum at its center and a null at DC. These characteristics make this code suited for implementation in UWB systems based on PPM-type modulation, as it assures a continuous spectrum while keeping the performance of PPM modulation.
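
    The sketch below demonstrates the underlying principle numerically: pseudo-random polarity coding of a PPM pulse train removes the discrete spectral lines that a unipolar train exhibits. It uses plain random ±1 polarity scrambling as a stand-in, not the TFC construction itself, and the frame layout and pulse positions are arbitrary choices.

```python
# Principle only (TFC itself not reproduced): polarity coding of a PPM pulse
# train suppresses the line spectrum of the uncoded unipolar train.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
fs = 1.0e3
frames, frame_len = 4000, 100
bits = rng.integers(0, 2, frames)

def ppm_train(polarity_coded):
    x = np.zeros(frames * frame_len)
    for k, b in enumerate(bits):
        pos = k * frame_len + (10 if b == 0 else 60)    # PPM: bit sets the position
        amp = rng.choice([-1.0, 1.0]) if polarity_coded else 1.0
        x[pos] = amp
    return x

for coded in (False, True):
    f, psd = welch(ppm_train(coded), fs=fs, nperseg=2048)
    peak_to_median = psd.max() / np.median(psd)
    print(f"polarity coded={coded}: spectral peak / median = {peak_to_median:.1f}")
```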

  13. Double random phase spread spectrum spread space technique for secure parallel optical multiplexing with individual encryption key

    Science.gov (United States)

    Hennelly, B. M.; Javidi, B.; Sheridan, J. T.

    2005-09-01

    A number of methods have been proposed recently in the literature for the encryption of 2-D information using linear optical systems. In particular, the double random phase encoding system has received widespread attention. This system uses two Random Phase Keys (RPKs) positioned in the input spatial domain and the spatial frequency domain, and if these random phases are described by statistically independent white noises then the encrypted image can be shown to be a white noise. Decryption only requires knowledge of the RPK in the frequency domain. The RPKs may be implemented using Spatial Light Modulators (SLMs). In this paper we propose and investigate the use of SLMs for secure optical multiplexing. We show that in this case it is possible to encrypt multiple images in parallel and multiplex them for transmission or storage. The signal energy is effectively spread in the spatial frequency domain. As expected, the number of images that can be multiplexed together and recovered without loss is proportional to the ratio of the input image and the SLM resolution. Many more images may be multiplexed with some loss in recovery. Furthermore, each individual encryption is more robust than traditional double random phase encoding, since decryption requires knowledge of both RPKs and a lowpass filter in order to despread the spectrum and decrypt the image. Numerical simulations are presented and discussed.
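
    The following minimal numerical sketch shows classical double random phase encoding, which is the building block referred to above; the spread-space multiplexing extension of the paper is not reproduced, and the image is a random stand-in.

```python
# Minimal sketch of classical double random phase encoding (DRPE).
import numpy as np

rng = np.random.default_rng(5)
N = 128
img = rng.uniform(0.0, 1.0, (N, N))              # stand-in for an input image

phi1 = np.exp(2j * np.pi * rng.random((N, N)))   # RPK in the input plane
phi2 = np.exp(2j * np.pi * rng.random((N, N)))   # RPK in the Fourier plane

# Encryption: input-plane phase, Fourier transform, Fourier-plane phase, inverse FT
encrypted = np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)

# Decryption only needs the Fourier-plane key phi2
decrypted = np.abs(np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phi2)))

print("max reconstruction error:", np.max(np.abs(decrypted - img)))
```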

  14. Model of fire spread around Krsko Power Plant

    International Nuclear Information System (INIS)

    Vidmar, P.; Petelin, S.

    2001-01-01

    The article addresses how fire behaviour can be defined. The work is based on an analytical study of fire origin, development and spread, grounded in thermodynamics, heat transfer, hydrodynamics and combustion, which together form the basis of fire dynamics. The article presents a practical example of a leak of hazardous chemicals from a tank; because the fluid is flammable, a fire may start. We have modelled fire propagation around the Krsko power plant and show what extended surrounding area could be affected. The model also considers weather conditions, in particular wind speed and direction. For this purpose we have used the computer code Safer Trace, which is based on zone models, meaning that the phenomena are described by physical and empirical equations. A limitation of this computer code is its inability to consider ground topology; however, in the case of the Krsko power plant topology is not so important, as the plant is located in a relatively flat region. Mathematical models are presented which describe the propagation of the hazardous fluid in the environment, taking meteorological data into account. The work also identifies which data are essential to define fire spread and presents the main considerations of Probabilistic Safety Assessment for an external fire event. (author)

  15. A Study on the Security Levels of Spread-Spectrum Embedding Schemes in the WOA Framework.

    Science.gov (United States)

    Wang, Yuan-Gen; Zhu, Guopu; Kwong, Sam; Shi, Yun-Qing

    2017-08-23

    Security analysis is a very important issue for digital watermarking. Several years ago, according to Kerckhoffs' principle, the famous four security levels, namely insecurity, key security, subspace security, and stego-security, were defined for spread-spectrum (SS) embedding schemes in the framework of watermarked-only attack. However, up to now there has been little application of the definition of these security levels to the theoretical analysis of the security of SS embedding schemes, due to the difficulty of the theoretical analysis. In this paper, based on the security definition, we present a theoretical analysis to evaluate the security levels of five typical SS embedding schemes, which are the classical SS, the improved SS (ISS), the circular extension of ISS, the nonrobust and robust natural watermarking, respectively. The theoretical analysis of these typical SS schemes are successfully performed by taking advantage of the convolution of probability distributions to derive the probabilistic models of watermarked signals. Moreover, simulations are conducted to illustrate and validate our theoretical analysis. We believe that the theoretical and practical analysis presented in this paper can bridge the gap between the definition of the four security levels and its application to the theoretical analysis of SS embedding schemes.

  16. A Dual-Channel Acquisition Method Based on Extended Replica Folding Algorithm for Long Pseudo-Noise Code in Inter-Satellite Links.

    Science.gov (United States)

    Zhao, Hongbo; Chen, Yuying; Feng, Wenquan; Zhuang, Chen

    2018-05-25

    Inter-satellite links are an important component of the new generation of satellite navigation systems, characterized by low signal-to-noise ratio (SNR), complex electromagnetic interference and the short time slot of each satellite, which brings difficulties to the acquisition stage. The inter-satellite link in both Global Positioning System (GPS) and BeiDou Navigation Satellite System (BDS) adopt the long code spread spectrum system. However, long code acquisition is a difficult and time-consuming task due to the long code period. Traditional folding methods such as extended replica folding acquisition search technique (XFAST) and direct average are largely restricted because of code Doppler and additional SNR loss caused by replica folding. The dual folding method (DF-XFAST) and dual-channel method have been proposed to achieve long code acquisition in low SNR and high dynamic situations, respectively, but the former is easily affected by code Doppler and the latter is not fast enough. Considering the environment of inter-satellite links and the problems of existing algorithms, this paper proposes a new long code acquisition algorithm named dual-channel acquisition method based on the extended replica folding algorithm (DC-XFAST). This method employs dual channels for verification. Each channel contains an incoming signal block. Local code samples are folded and zero-padded to the length of the incoming signal block. After a circular FFT operation, the correlation results contain two peaks of the same magnitude and specified relative position. The detection process is eased through finding the two largest values. The verification takes all the full and partial peaks into account. Numerical results reveal that the DC-XFAST method can improve acquisition performance while acquisition speed is guaranteed. The method has a significantly higher acquisition probability than folding methods XFAST and DF-XFAST. Moreover, with the advantage of higher detection
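
    To make the folding idea concrete, the sketch below shows a single-channel, XFAST-style replica-folding acquisition via circular FFT correlation; the dual-channel verification of DC-XFAST is not reproduced, code Doppler is ignored, and the code length, block size and noise level are arbitrary assumptions.

```python
# Single-channel sketch of replica-folding (XFAST-style) long-code acquisition.
import numpy as np

rng = np.random.default_rng(11)
L_long = 8192                  # long PN code period (illustrative size)
block = 1024                   # length of the incoming signal block
fold = L_long // block         # number of local-code segments folded together

pn = rng.choice([-1.0, 1.0], L_long)            # stand-in long PN code
true_phase = 3777
incoming = np.roll(pn, -true_phase)[:block] + rng.normal(0, 0.5, block)

# Fold the local replica: sum aligned segments so one short sequence covers
# the whole code period (at the cost of extra self-noise / reduced gain)
folded = pn.reshape(fold, block).sum(axis=0)

# Circular correlation via FFT yields the code phase modulo the block length
corr = np.fft.ifft(np.conj(np.fft.fft(incoming)) * np.fft.fft(folded)).real
est_offset = int(np.argmax(np.abs(corr)))
print("true phase mod block:", true_phase % block, "estimated:", est_offset)
```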

  17. A Dual-Channel Acquisition Method Based on Extended Replica Folding Algorithm for Long Pseudo-Noise Code in Inter-Satellite Links

    Directory of Open Access Journals (Sweden)

    Hongbo Zhao

    2018-05-01

    Full Text Available Inter-satellite links are an important component of the new generation of satellite navigation systems, characterized by low signal-to-noise ratio (SNR), complex electromagnetic interference and the short time slot of each satellite, which brings difficulties to the acquisition stage. The inter-satellite link in both the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS) adopt the long code spread spectrum system. However, long code acquisition is a difficult and time-consuming task due to the long code period. Traditional folding methods such as the extended replica folding acquisition search technique (XFAST) and direct average are largely restricted because of code Doppler and additional SNR loss caused by replica folding. The dual folding method (DF-XFAST) and dual-channel method have been proposed to achieve long code acquisition in low SNR and high dynamic situations, respectively, but the former is easily affected by code Doppler and the latter is not fast enough. Considering the environment of inter-satellite links and the problems of existing algorithms, this paper proposes a new long code acquisition algorithm named dual-channel acquisition method based on the extended replica folding algorithm (DC-XFAST). This method employs dual channels for verification. Each channel contains an incoming signal block. Local code samples are folded and zero-padded to the length of the incoming signal block. After a circular FFT operation, the correlation results contain two peaks of the same magnitude and specified relative position. The detection process is eased through finding the two largest values. The verification takes all the full and partial peaks into account. Numerical results reveal that the DC-XFAST method can improve acquisition performance while acquisition speed is guaranteed. The method has a significantly higher acquisition probability than folding methods XFAST and DF-XFAST. Moreover, with the advantage of higher

  18. Population-based evaluation of a suggested anatomic and clinical classification of congenital heart defects based on the International Paediatric and Congenital Cardiac Code

    Directory of Open Access Journals (Sweden)

    Goffinet François

    2011-10-01

    Full Text Available Abstract Background Classification of the overall spectrum of congenital heart defects (CHD) has always been challenging, in part because of the diversity of the cardiac phenotypes, but also because of the oft-complex associations. The purpose of our study was to establish a comprehensive and easy-to-use classification of CHD for clinical and epidemiological studies based on the long list of the International Paediatric and Congenital Cardiac Code (IPCCC). Methods We coded each individual malformation using six-digit codes from the long list of IPCCC. We then regrouped all lesions into 10 categories and 23 subcategories according to a multi-dimensional approach encompassing anatomic, diagnostic and therapeutic criteria. This anatomic and clinical classification of congenital heart disease (ACC-CHD) was then applied to data acquired from a population-based cohort of patients with CHD in France, made up of 2867 cases (82% live births, 1.8% stillbirths and 16.2% pregnancy terminations). Results The majority of cases (79.5%) could be identified with a single IPCCC code. The category "Heterotaxy, including isomerism and mirror-imagery" was the only one that typically required more than one code for identification of cases. The two largest categories were "ventricular septal defects" (52%) and "anomalies of the outflow tracts and arterial valves" (20% of cases). Conclusion Our proposed classification is not new, but rather a regrouping of the known spectrum of CHD into a manageable number of categories based on anatomic and clinical criteria. The classification is designed to use the code numbers of the long list of IPCCC but can accommodate ICD-10 codes. Its exhaustiveness, simplicity, and anatomic basis make it useful for clinical and epidemiologic studies, including those aimed at assessment of risk factors and outcomes.

  19. Generalized eigenvalue based spectrum sensing

    KAUST Repository

    Shakir, Muhammad

    2012-01-01

    Spectrum sensing is one of the fundamental components in cognitive radio networks. In this chapter, a generalized spectrum sensing framework, referred to as the Generalized Mean Detector (GMD), is introduced. In this context, we generalize the detectors based on the eigenvalues of the received signal covariance matrix and transform the eigenvalue based spectrum sensing detectors, namely (i) the Eigenvalue Ratio Detector (ERD) and two newly proposed detectors, (ii) the GEometric Mean Detector (GEMD) and (iii) the ARithmetic Mean Detector (ARMD), into a unified framework of generalized spectrum sensing. The foundation of the proposed framework is the calculation of exact analytical moments of the random variables in the decision thresholds of the respective detectors. The decision threshold is calculated in closed form based on an approximation of the Cumulative Distribution Functions (CDFs) of the respective test statistics: the analytical moments of the two random variables of the respective test statistics are exchanged with the moments of the Gaussian (or Gamma) distribution function. The performance of the eigenvalue based detectors is compared with several traditional detectors, including the energy detector (ED), to validate the importance of the eigenvalue based detectors and the performance of the GEMD and the ARMD, particularly in realistic cognitive radio networks. Analytical and simulation results show that the newly proposed detectors yield a considerable performance advantage in realistic spectrum sensing scenarios. Moreover, the presented results based on the proposed approximation approaches are in perfect agreement with the empirical results. © 2012 Springer Science+Business Media Dordrecht.
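
    The sketch below computes simple eigenvalue-based test statistics from a sample covariance matrix under both hypotheses. The exact GEMD/ARMD statistics are not spelled out in the abstract, so the ratios used here (largest eigenvalue against the geometric and arithmetic means) are one plausible reading, given as an assumption; thresholds would normally come from the closed-form CDF approximations described in the chapter.

```python
# Illustrative eigenvalue-based sensing statistics from the sample covariance.
import numpy as np

rng = np.random.default_rng(2)
M, N = 4, 500                            # receive branches, samples

def test_statistics(signal_present):
    noise = rng.normal(0, 1, (M, N))
    if signal_present:
        s = rng.normal(0, 1, N)              # common primary-user signal
        gains = rng.normal(0, 1, (M, 1))
        x = gains * s + noise
    else:
        x = noise
    R = x @ x.T / N                          # sample covariance matrix
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]
    erd = lam[0] / lam[-1]                   # eigenvalue ratio detector
    gemd = lam[0] / lam.prod() ** (1 / M)    # vs. geometric mean (assumed form)
    armd = lam[0] / lam.mean()               # vs. arithmetic mean (assumed form)
    return erd, gemd, armd

print("H1 (signal present):", np.round(test_statistics(True), 2))
print("H0 (noise only):    ", np.round(test_statistics(False), 2))
```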

  20. A comparison in the reconstruction of neutron spectrums using classical iterative techniques

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Martinez B, M. R.; Vega C, H. R.; Gallego, E.

    2009-10-01

    One of the key drawbacks of the BUNKI code is that the reconstruction of the spectrum starts from a priori knowledge as close as possible to the solution that is sought: the user has to specify the initial spectrum, or obtain it through a subroutine called MAXIET that calculates a Maxwellian and a 1/E spectrum as the initial spectrum. Because iterative procedures for neutron spectrum reconstruction require an initial spectrum, new proposals for its selection are needed. Based on the experience gained with BUNKI, a widely used reconstruction method, a new computational tool for neutron spectrometry and dosimetry has been developed and is introduced here; it operates by means of an iterative algorithm for the reconstruction of neutron spectra. The main feature of this tool is that, unlike the existing iterative codes, the choice of the initial spectrum is performed automatically by the program, through a catalogue of neutron spectra. To develop the code, the iterative routine SPUNIT was selected as the algorithm used in the computational tool, together with the response matrix UTA4 for 31 energy groups. (author)
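
    As a generic illustration of this class of iterative unfolding (the actual SPUNIT update rule and the UTA4 response matrix are not reproduced here), the sketch below applies a multiplicative iterative correction to an initial spectrum until the predicted detector readings match the measurements; all matrices and spectra are synthetic stand-ins.

```python
# Generic multiplicative iterative unfolding sketch (not the SPUNIT routine).
import numpy as np

rng = np.random.default_rng(4)
n_det, n_E = 7, 31                            # detectors, energy groups

R = rng.uniform(0.0, 1.0, (n_det, n_E))       # stand-in response matrix
true_spec = np.exp(-0.5 * ((np.arange(n_E) - 12) / 4.0) ** 2)
readings = R @ true_spec                      # simulated detector readings

phi = np.ones(n_E)                            # initial spectrum (flat guess)
for _ in range(500):
    predicted = R @ phi
    # Multiplicative correction pushes predicted readings toward measurements
    correction = (R.T @ (readings / predicted)) / R.sum(axis=0)
    phi *= correction

print("max relative residual on readings:",
      float(np.max(np.abs(R @ phi - readings) / readings)))
```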

  1. An alternative approach to spectrum base line estimation

    International Nuclear Information System (INIS)

    Bukvic, S.; Spasojevic, Dj.

    2005-01-01

    We present a new form of merit function which measures the agreement between a large number of data points and a model function with a particular choice of parameters. We demonstrate the efficiency of the proposed merit function on the common problem of finding the base line of a spectrum. When the base line is expected to be a horizontal straight line, the use of minimization algorithms is not necessary, i.e. the solution is achieved in a small number of steps. We discuss the advantages of the proposed merit function in general, when explicit use of a minimization algorithm is necessary. The hardcopy text is accompanied by an electronic archive, stored on the SAE homepage at http://www1.elsevier.com/homepage/saa/sab/content/lower.htm. The archive contains a fully functional demo program with a tutorial, examples and the Visual Basic source code of the key subroutine.

  2. The Joseph Barker, Jr. Home: A Comparative Architectural and Historical Study of a 19th Century Brick and Frame Dwelling in Washington County, Ohio,

    Science.gov (United States)

    1981-02-01

    anti-slavery debates of 1836 (Williams 1881: 430) and was deeply involved in Washington County politics. At the arrival of John Quincy Adams during a...433) continued with Adams up the Ohio River as far as Pittsburgh. Barker was a frequent participant in political discussions at the store 29 of... Ansel Wood and ca. 1819 by Timothy Love. Patton (1936: 30), on the other hand, dated the construction to ca. 1832-1836 and stated that it was built

  3. Multiple Beta Spectrum Analysis Method Based on Spectrum Fitting

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Uk Jae; Jung, Yun Song; Kim, Hee Reyoung [UNIST, Ulsan (Korea, Republic of)

    2016-05-15

    When a sample containing several mixed radioactive nuclides is measured, it is difficult to separate the individual nuclides because their spectra overlap. For this reason, a simple mathematical analysis method for the spectrum of a mixed beta-ray source has been studied; however, the existing work suffered from limited accuracy, so a more accurate spectral analysis method was needed. This study describes methods for separating the components of a mixed beta-ray source through analysis of the beta spectrum slope based on curve fitting, in order to resolve the existing problem. Among the fitting methods considered (Fourier, polynomial, Gaussian, and sum of sines), the sum-of-sines fit was found to be the best for obtaining an equation for the distribution of the mixed beta spectrum, and was shown to be the most appropriate for the analysis of spectra with various ratios of mixed nuclides. It is thought that this method could be applied to rapid spectrum analysis of mixed beta-ray sources.
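
    The sketch below illustrates the kind of sum-of-sines curve fitting referred to above using scipy's curve_fit on a synthetic two-component mixed spectrum; the component shapes, term count, and initial guesses are placeholders, not the values used in the study.

```python
# Hedged sketch: fit a sum-of-sines model to a synthetic mixed beta-like spectrum.
import numpy as np
from scipy.optimize import curve_fit

E = np.linspace(0.01, 1.0, 200)               # energy axis (arbitrary units)

def sum_of_sines(E, a1, b1, c1, a2, b2, c2):
    """Two-term sum-of-sines shape used as a smooth spectrum model."""
    return a1 * np.sin(b1 * E + c1) + a2 * np.sin(b2 * E + c2)

# Synthetic "measured" mixed spectrum: two beta-like humps plus noise
rng = np.random.default_rng(8)
measured = (1.0 * E * (0.6 - E) ** 2 * (E < 0.6)
            + 0.5 * E * (1.0 - E) ** 2) * 1e3
measured += rng.normal(0, 2.0, E.size)

p0 = [50.0, 3.0, 0.0, 50.0, 6.0, 0.0]         # rough initial guess
params, _ = curve_fit(sum_of_sines, E, measured, p0=p0, maxfev=20000)
fit = sum_of_sines(E, *params)
print("RMS residual:", np.sqrt(np.mean((measured - fit) ** 2)))
```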

  4. A spread willingness computing-based information dissemination model.

    Science.gov (United States)

    Huang, Haojing; Cui, Zhiming; Zhang, Shukui

    2014-01-01

    This paper constructs an information dissemination model for social networks based on spread willingness computing. The model takes into account the impact of node degree and the dissemination mechanism, combines complex network theory with the dynamics of infectious diseases, and establishes the dynamical evolution equations. The equations characterize the evolutionary relationship between different types of nodes over time. The spread willingness computation contains three factors which affect a user's spreading behavior: the strength of the relationship between the nodes, identity of views, and frequency of contact. Simulation results show that nodes of different degrees show the same trend in the network, and even if the degree of a node is very small, there is a likelihood of a large area of information dissemination. The weaker the relationship between nodes, the higher the probability of view selection and the higher the frequency of contact with information, so that information spreads rapidly and leads to a wide range of dissemination. As the dissemination probability and the immunization probability change, the speed of information dissemination changes accordingly. These studies match social networking features and can help to characterize user behavior and to understand and analyze the characteristics of information dissemination in social networks.
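
    The abstract does not give the exact evolution equations, so the sketch below is only an illustration of the general idea: a generic SIR-style simulation in which each node's spreading probability is scaled by a "spread willingness" built from the three factors the abstract names; all rates and the mean-field contact assumption are invented for the example.

```python
# Illustration only: SIR-style dissemination with a per-node spread willingness.
import numpy as np

rng = np.random.default_rng(6)
N, steps = 2000, 50
beta0, mu = 0.08, 0.05                    # assumed base spread / immunization rates

tie_strength = rng.uniform(0, 1, N)
view_identity = rng.uniform(0, 1, N)
contact_freq = rng.uniform(0, 1, N)
willingness = (tie_strength + view_identity + contact_freq) / 3.0

state = np.zeros(N, dtype=int)            # 0 susceptible, 1 spreading, 2 immune
state[rng.choice(N, 5, replace=False)] = 1

for _ in range(steps):
    spreading_fraction = np.mean(state == 1)
    p_infect = beta0 * willingness * spreading_fraction * 10   # mean-field contact
    new_spreaders = (state == 0) & (rng.random(N) < p_infect)
    new_immune = (state == 1) & (rng.random(N) < mu)
    state[new_spreaders] = 1
    state[new_immune] = 2

print("final immune fraction:", np.mean(state == 2))
print("still spreading:", np.mean(state == 1))
```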

  5. A genetic algorithm based method for neutron spectrum unfolding

    International Nuclear Information System (INIS)

    Suman, Vitisha; Sarkar, P.K.

    2013-03-01

    An approach to neutron spectrum unfolding based on a stochastic evolutionary search mechanism, the Genetic Algorithm (GA), is presented. It is tested by unfolding a set of simulated spectra; the unfolded spectra are compared to the output of the standard code FERDOR. The method was then applied to a set of measured pulse-height spectra of neutrons from an AmBe source, as well as of neutrons emitted from Li(p,n) and Ag(C,n) nuclear reactions carried out in an accelerator environment. The unfolded spectra, compared to the output of FERDOR, show good agreement in the case of the AmBe and Li(p,n) spectra; in the case of the Ag(C,n) spectra, the GA method results in some fluctuations. The necessity of smoothing the obtained solution is also studied, which leads to an approximation of the solution that finally yields an appropriate result. Several smoothing techniques, such as second-difference smoothing, Monte Carlo averaging, a combination of both, and Gaussian-based smoothing, are also studied. The unfolded results obtained after including the smoothing criteria are in close agreement with the output of the FERDOR code. The present method is also tested on a set of underdetermined problems, whose outputs are compared to the unfolded spectra obtained from FERDOR applied to a completely determined problem, and show a good match. The distribution of the unfolded spectra is also studied. Uncertainty propagation in the unfolded spectra due to the errors present in the measurement as well as in the response function is also carried out. The method appears to be promising for unfolding completely determined as well as underdetermined problems, and it also has provisions for uncertainty analysis. (author)
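
    The compact sketch below shows the general shape of a GA-based unfolding: a population of candidate spectra evolves to reproduce measured data through a response matrix. It is illustrative only; the response matrix, selection scheme, crossover and mutation settings are invented for the example and are not the authors' code or the FERDOR setup.

```python
# Compact GA sketch for spectrum unfolding (illustrative only).
import numpy as np

rng = np.random.default_rng(9)
n_det, n_E = 10, 20
R = rng.uniform(0, 1, (n_det, n_E))                 # stand-in response matrix
true_spec = np.exp(-0.5 * ((np.arange(n_E) - 8) / 3.0) ** 2)
measured = R @ true_spec

def fitness(pop):
    # Higher fitness = smaller mismatch with the measured pulse-height data
    return -np.linalg.norm(pop @ R.T - measured, axis=1)

pop = rng.uniform(0, 1.5, (200, n_E))               # initial random spectra
for gen in range(300):
    f = fitness(pop)
    parents = pop[np.argsort(f)[::-1][:50]]         # truncation selection
    # Blend crossover between random parent pairs, then Gaussian mutation
    idx = rng.integers(0, 50, (200, 2))
    w = rng.random((200, 1))
    pop = w * parents[idx[:, 0]] + (1 - w) * parents[idx[:, 1]]
    pop += rng.normal(0, 0.02, pop.shape)
    pop = np.clip(pop, 0, None)                     # keep spectra non-negative
    pop[0] = parents[0]                             # elitism

best = pop[np.argmax(fitness(pop))]
print("residual norm:", np.linalg.norm(R @ best - measured))
```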

  6. Linear theory of equatorial spread F

    International Nuclear Information System (INIS)

    Hudson, M.K.; Kennel, C.F.

    1975-01-01

    A fluid dispersion relation for the drift and interchange (Rayleigh-Taylor) modes in a collisional plasma forms the basis for a linear theory of equatorial spread F. The collisional drift mode growth rate will exceed the growth rate of the Rayleigh-Taylor mode at short perpendicular wavelengths and density gradient scale lengths, and the drift mode can grow on top side as well as on bottom side density gradients. However, below the F peak, where spread F predominates, it is concluded that both the drift and the Rayleigh-Taylor modes contribute to the total spread F spectrum, the Rayleigh-Taylor mode dominating at long and the drift mode at short perpendicular wavelengths above the ion Larmor radius

  7. QC-LDPC code-based cryptography

    CERN Document Server

    Baldi, Marco

    2014-01-01

    This book describes the fundamentals of cryptographic primitives based on quasi-cyclic low-density parity-check (QC-LDPC) codes, with a special focus on the use of these codes in public-key cryptosystems derived from the McEliece and Niederreiter schemes. In the first part of the book, the main characteristics of QC-LDPC codes are reviewed, and several techniques for their design are presented, while tools for assessing the error correction performance of these codes are also described. Some families of QC-LDPC codes that are best suited for use in cryptography are also presented. The second part of the book focuses on the McEliece and Niederreiter cryptosystems, both in their original forms and in some subsequent variants. The applicability of QC-LDPC codes in these frameworks is investigated by means of theoretical analyses and numerical tools, in order to assess their benefits and drawbacks in terms of system efficiency and security. Several examples of QC-LDPC code-based public key cryptosystems are prese...

  8. Using Multimedia to Reveal the Hidden Code of Everyday Behaviour to Children with Autistic Spectrum Disorders (ASDs)

    Science.gov (United States)

    Doyle, Theresa; Arnedillo-Sanchez, Inmaculada

    2011-01-01

    This paper describes a framework which was developed for carers (teachers and parents) to help them create personalised social stories for children with autistic spectrum disorders (ASDs). It explores the social challenges experienced by individuals with ASDs and outlines an intervention aimed at revealing the hidden code that underpins social…

  9. Development and validation of a physics-based urban fire spread model

    OpenAIRE

    HIMOTO, Keisuke; TANAKA, Takeyoshi

    2008-01-01

    A computational model for fire spread in a densely built urban area is developed. The model is distinct from existing models in that it explicitly describes fire spread phenomena with physics-based knowledge achieved in the field of fire safety engineering. In the model, urban fire is interpreted as an ensemble of multiple building fires; that is, the fire spread is simulated by predicting behaviors of individual building fires under the thermal influence of neighboring building fires. Adopte...

  10. Improved Encrypted-Signals-Based Reversible Data Hiding Using Code Division Multiplexing and Value Expansion

    Directory of Open Access Journals (Sweden)

    Xianyi Chen

    2018-01-01

    Full Text Available Compared to the encrypted-image-based reversible data hiding (EIRDH) method, the encrypted-signals-based reversible data hiding (ESRDH) technique is a novel way to achieve a greater embedding rate and better quality of the decrypted signals. Motivated by ESRDH using signal energy transfer, we propose an improved ESRDH method using code division multiplexing and value expansion. At the beginning, each pixel of the original image is divided into several parts containing a little signal and multiple equal signals. Next, all signals are encrypted by Paillier encryption. And then a large number of secret bits are embedded into the encrypted signals using code division multiplexing and value expansion. Since the sum of elements in any spreading sequence is equal to 0, lossless quality of directly decrypted signals can be achieved using code division multiplexing on the encrypted equal signals. Although the visual quality is reduced, high-capacity data hiding can be accomplished by conducting value expansion on the encrypted little signal. The experimental results show that our method is better than other methods in terms of the embedding rate and average PSNR.
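
    The plaintext-domain sketch below illustrates why a zero-sum spreading sequence allows lossless recovery of the carrier: the Paillier encryption layer and the value-expansion channel of the paper are omitted, and the pixel split and spreading sequence are invented for the example.

```python
# Plaintext-domain sketch of the code-division-multiplexing embedding idea.
import numpy as np

pixel = 137
n_sub = 4
sub_signals = np.full(n_sub, pixel / n_sub, dtype=float)   # equal sub-signals

spreading = np.array([1.0, -1.0, 1.0, -1.0])   # zero-sum spreading sequence
assert spreading.sum() == 0

def embed(sub, bit, strength=1.0):
    # Adding a zero-sum sequence leaves the mean of the sub-signals unchanged
    return sub + strength * (1 if bit else -1) * spreading

def extract(marked):
    bit = int(np.dot(marked, spreading) > 0)               # correlation detector
    recovered_pixel = int(round(marked.mean() * n_sub))    # carrier is lossless
    return bit, recovered_pixel

marked = embed(sub_signals, bit=1)
print(extract(marked))          # -> (1, 137)
```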

  11. A Spread Willingness Computing-Based Information Dissemination Model

    Science.gov (United States)

    Cui, Zhiming; Zhang, Shukui

    2014-01-01

    This paper constructs an information dissemination model for social networks based on spread willingness computing. The model takes into account the impact of node degree and the dissemination mechanism, combines complex network theory with the dynamics of infectious diseases, and establishes the dynamical evolution equations. The equations characterize the evolutionary relationship between different types of nodes over time. The spread willingness computation contains three factors which affect a user's spreading behavior: the strength of the relationship between the nodes, identity of views, and frequency of contact. Simulation results show that nodes of different degrees show the same trend in the network, and even if the degree of a node is very small, there is a likelihood of a large area of information dissemination. The weaker the relationship between nodes, the higher the probability of view selection and the higher the frequency of contact with information, so that information spreads rapidly and leads to a wide range of dissemination. As the dissemination probability and the immunization probability change, the speed of information dissemination changes accordingly. These studies match social networking features and can help to characterize user behavior and to understand and analyze the characteristics of information dissemination in social networks. PMID:25110738

  12. A Spread Willingness Computing-Based Information Dissemination Model

    Directory of Open Access Journals (Sweden)

    Haojing Huang

    2014-01-01

    Full Text Available This paper constructs an information dissemination model for social networks based on spread willingness computing. The model takes into account the impact of node degree and the dissemination mechanism, combines complex network theory with the dynamics of infectious diseases, and establishes the dynamical evolution equations. The equations characterize the evolutionary relationship between different types of nodes over time. The spread willingness computation contains three factors which affect a user's spreading behavior: the strength of the relationship between the nodes, identity of views, and frequency of contact. Simulation results show that nodes of different degrees show the same trend in the network, and even if the degree of a node is very small, there is a likelihood of a large area of information dissemination. The weaker the relationship between nodes, the higher the probability of view selection and the higher the frequency of contact with information, so that information spreads rapidly and leads to a wide range of dissemination. As the dissemination probability and the immunization probability change, the speed of information dissemination changes accordingly. These studies match social networking features and can help to characterize user behavior and to understand and analyze the characteristics of information dissemination in social networks.

  13. Calculations for the intermediate-spectrum cells of Zebra 8 using the MONK Monte-Carlo Code

    International Nuclear Information System (INIS)

    Hanlon, D.; Franklin, B.M.; Stevenson, J.M.

    1987-10-01

    The Monte-Carlo code MONK 6A and its associated point-energy cross-section data have been used to analyse seven zero-leakage, plate-geometry cells from the ZEBRA 8 assemblies. The convergence of the calculations was such that the uncertainties in k-infinity and the more important reaction-rate ratios were generally less than the experimental uncertainties. The MONK 6A predictions have been compared with experiment and with predictions from the MURAL collision-probability code, which uses FGL5 data adjusted on the basis of ZEBRA 8 and other integral experiments. The poor predictions from the MONK calculations, with errors of up to 10% in k-infinity, are attributed to deficiencies in the database for intermediate to fast spectrum systems. (author)

  14. The octopus burnup and criticality code system

    Energy Technology Data Exchange (ETDEWEB)

    Kloosterman, J.L.; Kuijper, J.C. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Leege, P.F.A. de

    1996-09-01

    The OCTOPUS burnup and criticality code system is described. This system links the spectrum codes from the SCALE4.1, WIMS7 and MCNP4A packages to the ORIGEN-S and FISPACT4.2 fuel depletion and activation codes, which enables us to perform very accurate burnup calculations in complicated three-dimensional geometries. The data used by all codes are consistently based on the JEF2.2 evaluated nuclear data file. Some special features of OCTOPUS not available in other codes are described, as well as the validation of the system. (author)

  15. The OCTOPUS burnup and criticality code system

    Energy Technology Data Exchange (ETDEWEB)

    Kloosterman, J.L. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Kuijper, J.C. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Leege, P.F.A. de [Technische Univ. Delft (Netherlands). Interfacultair Reactor Inst.

    1996-06-01

    The OCTOPUS burnup and criticality code system is described. This system links the spectrum codes from the SCALE4.1, WIMS7 and MCNP4A packages to the ORIGEN-S and FISPACT4.2 fuel depletion and activation codes, which enables us to perform very accurate burnup calculations in complicated three-dimensional geometries. The data used by all codes are consistently based on the JEF2.2 evaluated nuclear data file. Some special features of OCTOPUS not available in other codes are described, as well as the validation of the system. (orig.).

  16. The octopus burnup and criticality code system

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Kuijper, J.C.; Leege, P.F.A. de.

    1996-01-01

    The OCTOPUS burnup and criticality code system is described. This system links the spectrum codes from the SCALE4.1, WIMS7 and MCNP4A packages to the ORIGEN-S and FISPACT4.2 fuel depletion and activation codes, which enables us to perform very accurate burnup calculations in complicated three-dimensional geometries. The data used by all codes are consistently based on the JEF2.2 evaluated nuclear data file. Some special features of OCTOPUS not available in other codes are described, as well as the validation of the system. (author)

  17. The OCTOPUS burnup and criticality code system

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Kuijper, J.C.; Leege, P.F.A. de

    1996-06-01

    The OCTOPUS burnup and criticality code system is described. This system links the spectrum codes from the SCALE4.1, WIMS7 and MCNP4A packages to the ORIGEN-S and FISPACT4.2 fuel depletion and activation codes, which enables us to perform very accurate burnup calculations in complicated three-dimensional geometries. The data used by all codes are consistently based on the JEF2.2 evaluated nuclear data file. Some special features of OCTOPUS not available in other codes are described, as well as the validation of the system. (orig.)

  18. Temporal Coding of Volumetric Imagery

    Science.gov (United States)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration

  19. Modeling spreading of oil slicks based on random walk methods and Voronoi diagrams

    International Nuclear Information System (INIS)

    Durgut, İsmail; Reed, Mark

    2017-01-01

    We introduce a methodology for representation of a surface oil slick using a Voronoi diagram updated at each time step. The Voronoi cells scale the Gaussian random walk procedure representing the spreading process by individual particle stepping. The step length of stochastically moving particles is based on a theoretical model of the spreading process, establishing a relationship between the step length of diffusive spreading and the thickness of the slick at the particle locations. The Voronoi tessellation provides the areal extent of the slick particles and in turn the thicknesses of the slick and the diffusive-type spreading length for all particles. The algorithm successfully simulates the spreading process and results show very good agreement with the analytical solution. Moreover, the results are robust for a wide range of values for computational time step and total number of particles. - Highlights: • A methodology for representation of a surface oil slick using a Voronoi diagram • An algorithm simulating the spreading of oil slick with the Voronoi diagram representation • The algorithm employs the Gaussian random walk method through individual particle stepping. • The diffusive spreading is based on a theoretical model of the spreading process. • Algorithm is computationally robust and successfully reproduces analytical solutions to the spreading process.
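    A minimal sketch of the two ingredients the abstract describes, assuming scipy is available: particles take Gaussian random-walk steps whose length grows with the local slick thickness, and the thickness is estimated from the area of each particle's Voronoi cell. The treatment of unbounded boundary cells, the thickness-to-step-length relation and the spreading coefficient are simplifying assumptions, not the authors' exact model.

        import numpy as np
        from scipy.spatial import Voronoi, ConvexHull

        rng = np.random.default_rng(1)

        def cell_areas(points):
            """Area of each particle's Voronoi cell (unbounded cells get the median area)."""
            vor = Voronoi(points)
            areas = np.full(len(points), np.nan)
            for i, region_idx in enumerate(vor.point_region):
                region = vor.regions[region_idx]
                if region and -1 not in region:                  # bounded cell only
                    areas[i] = ConvexHull(vor.vertices[region]).volume  # 2-D hull "volume" = area
            areas[np.isnan(areas)] = np.nanmedian(areas)         # crude fix for edge particles
            return areas

        n_particles = 500
        total_volume = 10.0                                      # m^3 of oil (arbitrary)
        pos = rng.normal(scale=1.0, size=(n_particles, 2))       # initially compact slick
        dt = 10.0                                                # s

        for step in range(200):
            areas = cell_areas(pos)
            thickness = (total_volume / n_particles) / areas     # thicker where cells are small
            # Assumed relation: diffusive step length grows with local thickness.
            sigma = np.sqrt(2.0 * 1e-3 * thickness * dt)         # 1e-3: illustrative spreading coefficient
            pos += rng.normal(size=pos.shape) * sigma[:, None]

        print("slick radius of gyration:", np.sqrt(np.mean(np.sum(pos**2, axis=1))))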

  20. A hybrid path-oriented code assignment CDMA-based MAC protocol for underwater acoustic sensor networks.

    Science.gov (United States)

    Chen, Huifang; Fan, Guangyu; Xie, Lei; Cui, Jun-Hong

    2013-11-04

    Due to the characteristics of underwater acoustic channel, media access control (MAC) protocols designed for underwater acoustic sensor networks (UWASNs) are quite different from those for terrestrial wireless sensor networks. Moreover, in a sink-oriented network with event information generation in a sensor field and message forwarding to the sink hop-by-hop, the sensors near the sink have to transmit more packets than those far from the sink, and then a funneling effect occurs, which leads to packet congestion, collisions and losses, especially in UWASNs with long propagation delays. An improved CDMA-based MAC protocol, named path-oriented code assignment (POCA) CDMA MAC (POCA-CDMA-MAC), is proposed for UWASNs in this paper. In the proposed MAC protocol, both the round-robin method and CDMA technology are adopted to make the sink receive packets from multiple paths simultaneously. Since the number of paths for information gathering is much less than that of nodes, the length of the spreading code used in the POCA-CDMA-MAC protocol is much shorter than that used in the CDMA-based protocols with transmitter-oriented code assignment (TOCA) or receiver-oriented code assignment (ROCA). Simulation results show that the proposed POCA-CDMA-MAC protocol achieves a higher network throughput and a lower end-to-end delay compared to other CDMA-based MAC protocols.

  1. A Hybrid Path-Oriented Code Assignment CDMA-Based MAC Protocol for Underwater Acoustic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Huifang Chen

    2013-11-01

    Full Text Available Due to the characteristics of underwater acoustic channel, media access control (MAC) protocols designed for underwater acoustic sensor networks (UWASNs) are quite different from those for terrestrial wireless sensor networks. Moreover, in a sink-oriented network with event information generation in a sensor field and message forwarding to the sink hop-by-hop, the sensors near the sink have to transmit more packets than those far from the sink, and then a funneling effect occurs, which leads to packet congestion, collisions and losses, especially in UWASNs with long propagation delays. An improved CDMA-based MAC protocol, named path-oriented code assignment (POCA) CDMA MAC (POCA-CDMA-MAC), is proposed for UWASNs in this paper. In the proposed MAC protocol, both the round-robin method and CDMA technology are adopted to make the sink receive packets from multiple paths simultaneously. Since the number of paths for information gathering is much less than that of nodes, the length of the spreading code used in the POCA-CDMA-MAC protocol is much shorter than that used in the CDMA-based protocols with transmitter-oriented code assignment (TOCA) or receiver-oriented code assignment (ROCA). Simulation results show that the proposed POCA-CDMA-MAC protocol achieves a higher network throughput and a lower end-to-end delay compared to other CDMA-based MAC protocols.

  2. Performance of FSO-OFDM based on BCH code

    Directory of Open Access Journals (Sweden)

    Jiao Xiao-lu

    2016-01-01

    Full Text Available As contrasted with the traditional OOK (on-off keying) system, the FSO-OFDM system can resist atmospheric scattering and improve the spectrum utilization rate effectively. Due to the instability of the atmospheric channel, the system will be affected by various factors, resulting in a high BER. The BCH code has good error-correcting ability, particularly for short- and medium-length codes, and its performance is close to the theoretical value. It can not only detect burst errors but also correct random errors. Therefore, the BCH code is applied to the system to reduce the system BER. Finally, a semi-physical simulation has been conducted with MATLAB. The simulation results show that at a BER of 10^-2, the performance of OFDM is about 4 dB better than that of OOK. In different weather conditions (extension rain, advection fog, dust days), at a BER of 10^-5, the performance of BCH (255,191) channel coding is about 4-5 dB better than that of the uncoded system. All in all, OFDM technology and the BCH code can reduce the system BER.

  3. Development of a new nuclide generation and depletion code using a topological solver based on graph theory

    Energy Technology Data Exchange (ETDEWEB)

    Kasselmann, S., E-mail: s.kasselmann@fz-juelich.de [Forschungszentrum Jülich, 52425 Jülich (Germany); Schitthelm, O. [Forschungszentrum Jülich, 52425 Jülich (Germany); Tantillo, F. [Forschungszentrum Jülich, 52425 Jülich (Germany); Institute for Reactor Safety and Reactor Technology, RWTH-Aachen, 52064 Aachen (Germany); Scholthaus, S.; Rössel, C. [Forschungszentrum Jülich, 52425 Jülich (Germany); Allelein, H.-J. [Forschungszentrum Jülich, 52425 Jülich (Germany); Institute for Reactor Safety and Reactor Technology, RWTH-Aachen, 52064 Aachen (Germany)

    2016-09-15

    Calculating the amounts of a coupled nuclide system varying with time, especially when exposed to a neutron flux, is a well-known problem and has been addressed by a number of computer codes. These codes cover a broad spectrum of applications, are based on comprehensive validation work and are therefore justifiably renowned among their users. However, due to their long development history, they are lacking a modern interface, which impedes a fast and robust internal coupling to other codes applied in the field of nuclear reactor physics. Therefore a project has been initiated to develop a new object-oriented nuclide transmutation code. It comprises an innovative solver based on graph theory, which exploits the topology of nuclide chains and therefore speeds up the calculation scheme. Highest priority has been given to the existence of a generic software interface as well as easy handling by making use of XML files for the user input. In this paper we report on the status of the code development and present first benchmark results, which prove the applicability of the selected approach.
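    The topological idea can be illustrated with a toy depletion solver: build the nuclide-chain graph, keep only the nuclides reachable from those initially present, and integrate the resulting (small) linear system with a matrix exponential. The chains, rates and nuclide names below are invented, and the matrix-exponential step is only a stand-in for the code's actual graph-based solver.

        import numpy as np
        from scipy.linalg import expm

        # Toy transition graph: parent -> list of (daughter, rate [1/s]); rates are invented.
        chains = {
            "U235":  [("U236", 1.0e-9)],          # stands in for neutron capture
            "U236":  [("Np237", 5.0e-10)],
            "Np237": [],
            "Xe135": [("Cs135", 2.1e-5)],         # beta decay
            "Cs135": [],
            "Pu239": [("Pu240", 3.0e-10)],        # present in the library but not reachable below
            "Pu240": [],
        }

        def reachable(graph, sources):
            """Nuclides reachable from the initially present ones (the 'smallest system')."""
            seen, stack = set(), list(sources)
            while stack:
                nuc = stack.pop()
                if nuc not in seen:
                    seen.add(nuc)
                    stack.extend(d for d, _ in graph.get(nuc, ()))
            return sorted(seen)

        def deplete(graph, n0, dt):
            """Solve dN/dt = A N over dt with a matrix exponential on the reachable sub-graph."""
            nucs = reachable(graph, n0)
            idx = {nuc: i for i, nuc in enumerate(nucs)}
            A = np.zeros((len(nucs), len(nucs)))
            for parent in nucs:
                for daughter, rate in graph.get(parent, ()):
                    A[idx[parent], idx[parent]] -= rate
                    A[idx[daughter], idx[parent]] += rate
            N0 = np.array([n0.get(nuc, 0.0) for nuc in nucs])
            return dict(zip(nucs, expm(A * dt) @ N0))

        result = deplete(chains, {"U235": 1.0e24, "Xe135": 1.0e18}, dt=86400.0)
        print(len(chains), "nuclides in the library,", len(result), "in the solved system")
        print(result)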

  4. An agent-based computational model of the spread of tuberculosis

    International Nuclear Information System (INIS)

    De Espíndola, Aquino L; Bauch, Chris T; Troca Cabella, Brenno C; Martinez, Alexandre Souto

    2011-01-01

    In this work we propose an alternative model of the spread of tuberculosis (TB) and the emergence of drug resistance due to the treatment with antibiotics. We implement the simulations by an agent-based model computational approach where the spatial structure is taken into account. The spread of tuberculosis occurs according to probabilities defined by the interactions among individuals. The model was validated by reproducing results already known from the literature in which different treatment regimes yield the emergence of drug resistance. The different patterns of TB spread can be visualized at any time of the system evolution. The implementation details as well as some results of this alternative approach are discussed
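    A bare-bones version of such an agent-based simulation on a lattice might look like the following; the infection, treatment and resistance probabilities are placeholders rather than the calibrated values used in the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        L = 100
        SUSCEPTIBLE, INFECTED, RESISTANT, RECOVERED = 0, 1, 2, 3
        grid = np.zeros((L, L), dtype=int)
        grid[rng.random((L, L)) < 0.01] = INFECTED               # initial seeding

        p_infect, p_treat, p_resist = 0.1, 0.3, 0.02             # placeholder probabilities

        def infected_neighbours(g):
            """Count infectious neighbours on a torus (von Neumann neighbourhood)."""
            inf = (g == INFECTED) | (g == RESISTANT)
            return sum(np.roll(inf, s, axis=a) for s in (-1, 1) for a in (0, 1))

        for step in range(100):
            n_inf = infected_neighbours(grid)
            # susceptible agents are infected with a per-neighbour probability
            p = 1.0 - (1.0 - p_infect) ** n_inf
            new_cases = (grid == SUSCEPTIBLE) & (rng.random((L, L)) < p)
            # antibiotic treatment cures most agents, but can select for resistance
            treated = (grid == INFECTED) & (rng.random((L, L)) < p_treat)
            resistant = treated & (rng.random((L, L)) < p_resist)
            grid[treated] = RECOVERED
            grid[resistant] = RESISTANT
            grid[new_cases] = INFECTED

        print("prevalence:", np.mean((grid == INFECTED) | (grid == RESISTANT)))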

  5. The distribution of triclosan and methyl-triclosan in marine sediments of Barker Inlet, South Australia.

    Science.gov (United States)

    Fernandes, Milena; Shareef, Ali; Kookana, Rai; Gaylard, Sam; Hoare, Sonja; Kildea, Tim

    2011-04-01

    In this work, we investigated the transport and burial of triclosan and its methylated derivative in surface sediments near the mouth of Barker Inlet in South Australia. The most likely source of this commonly used bactericide to the area is a wastewater outfall discharging at the confluence of the inlet with marine waters. Triclosan was detected in all samples, at concentrations (5-27 μg kg(-1)) comparable to values found in other surface sediments under the influence of marine wastewater outfalls. Its dispersal was closely associated with fine and organic-rich fractions of the sediments. Methyl-triclosan was detected in approximately half of the samples; its occurrence was linked to both wastewater discharges and biological methylation of the parent compound. Wastewater-borne methyl-triclosan had a smaller spatial footprint than triclosan and was mostly deposited in close proximity to the outfall. In situ methylation of triclosan likely occurs at deeper depositional sites, whereas the absence of methyl-triclosan from shallower sediments was potentially explained by photodegradation of the parent compound. Based on partition equilibrium, a concentration of triclosan in the order of 1 μg L(-1) was estimated in sediment porewaters, a value lower than the threshold reported for harmful effects to occur in the couple of species of marine phytoplankton investigated to date. Methyl-triclosan presents a greater potential for bioaccumulation than triclosan, but the implications of its occurrence to aquatic ecosystem health are difficult to predict given the lack of ecotoxicological data in the current literature.

  6. Sensory evaluation of commercial fat spreads based on oilseeds and walnut

    Directory of Open Access Journals (Sweden)

    Dimić Etelka B.

    2013-01-01

    Full Text Available The main focus of this study was on the sensory evaluation of commercial oilseed spreads, as the most significant characteristic of this type of product from the consumers' point of view. Sensory analysis was conducted by five experts using a quantitative descriptive and sensory profile test, applying a scoring method according to the standard procedure. Five different spreads were evaluated: sunflower, pumpkin, sesame, peanut, and walnut. Oil content and amounts of separated oil on the surface were determined for each spread. The results have shown that the color of the spreads was very different, depending on the oilseed: gray for sunflower, brown for walnut, yellowish-brown for peanut butter, ivory for sesame and profoundly dark green for the pumpkin seed spread. The flavor and odor of the spreads were characteristic of the raw materials used; however, the sunflower and walnut spreads had a slight rancid flavor. Generally, the spreadability of all spreads was good, but their mouth feel was not acceptable. During consumption, all of them stuck immensely to the roof of the mouth, which made swallowing harder. The highest total score of 16.20 points (max. 20) was obtained by the peanut butter, while the lowest (10.38) was achieved by the sunflower butter. Oil separation (to various degrees) was noticed in all spreads, which negatively influenced the appearance and entire sensorial quality of the products. The quantity of separated oil depended on the age and total amount of oil in the spreads, and was between 1.13% in the peanut butter and 12.15% in the walnut spread in reference to the net weight of the product. [Project of the Ministry of Science of the Republic of Serbia, No. TR 31014: Development of new functional confectionery products based on oil crops]

  7. Simulation of core melt spreading with lava: theoretical background and status of validation

    International Nuclear Information System (INIS)

    Allelein, H.-J.; Breest, A.; Spengler, C.

    2000-01-01

    The goal of this paper is to present the GRS R and D achievements and perspectives of its approach to simulate ex-vessel core melt spreading. The basic idea followed by GRS is the analogy of core melt spreading to volcanic lava flows. A fact first proposed by Robson (1967) and now widely accepted is that lava rheologically behaves as a Bingham fluid, which is characterized by yield stress and plastic viscosity. Recent experimental investigations by Epstein (1996) reveal that corium-concrete mixtures may be described as Bingham fluids. The GRS code LAVA is based on a successful lava flow model, but is adapted to prototypic corium and corium-simulant spreading. Furthermore, some detailed physical models such as a thermal crust model on the free melt surface and a model for heat conduction into the substratum are added. Heat losses of the bulk, which is represented by one mean temperature, are now determined by radiation and by temperature profiles in the upper crust and in the substratum. In order to reduce the weak mesh dependence of the original algorithm, a random space method of cellular automata is integrated, which removes the mesh bias without increasing calculation time. LAVA has been successfully validated against a large number of experiments with different spread materials. The validation process has shown that LAVA is a robust and fast running code to simulate corium-type spreading. LAVA provides all integral information of practical interest (spreading length, height of the melt after stabilization) and seems to be an appropriate tool for handling large core melt masses within a plant application. (orig.)

  8. Bandwidth efficient coding

    CERN Document Server

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  9. Autism Spectrum Disorder

    Science.gov (United States)

    Fact sheet for patients and caregivers addressing what autism spectrum disorder is, including discussion of spontaneous (de novo) coding mutations across many genes rather than mutations in individual genes.

  10. Towards Data-Driven Simulations of Wildfire Spread using Ensemble-based Data Assimilation

    Science.gov (United States)

    Rochoux, M. C.; Bart, J.; Ricci, S. M.; Cuenot, B.; Trouvé, A.; Duchaine, F.; Morel, T.

    2012-12-01

    Real-time predictions of a propagating wildfire remain a challenging task because the problem involves both multi-physics and multi-scales. The propagation speed of wildfires, also called the rate of spread (ROS), is indeed determined by complex interactions between pyrolysis, combustion and flow dynamics, atmospheric dynamics occurring at vegetation, topographical and meteorological scales. Current operational fire spread models are mainly based on a semi-empirical parameterization of the ROS in terms of vegetation, topographical and meteorological properties. For the fire spread simulation to be predictive and compatible with operational applications, the uncertainty on the ROS model should be reduced. As recent progress made in remote sensing technology provides new ways to monitor the fire front position, a promising approach to overcome the difficulties found in wildfire spread simulations is to integrate fire modeling and fire sensing technologies using data assimilation (DA). For this purpose we have developed a prototype data-driven wildfire spread simulator in order to provide optimal estimates of poorly known model parameters [*]. The data-driven simulation capability is adapted for more realistic wildfire spread: it considers a regional-scale fire spread model that is informed by observations of the fire front location. An Ensemble Kalman Filter (EnKF) algorithm based on a parallel computing platform (OpenPALM) was implemented in order to perform a multi-parameter sequential estimation in which wind magnitude and direction are estimated in addition to vegetation properties. The EnKF algorithm shows its good ability to track a small-scale grassland fire experiment and ensures a good accounting for the sensitivity of the simulation outcomes to the control parameters. As a conclusion, it was shown that data assimilation is a promising approach to more accurately forecast time-varying wildfire spread conditions as new airborne-like observations of
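    The analysis step of such a parameter-estimating EnKF can be sketched with a toy one-dimensional fire-front model, as below; the model, observation-error level and ensemble settings are assumptions for illustration and do not reflect the OpenPALM/regional-scale implementation.

        import numpy as np

        rng = np.random.default_rng(3)

        def front_position(params, t):
            """Toy fire model: front location from a rate-of-spread factor and wind speed."""
            ros_factor, wind = params
            return ros_factor * (1.0 + 0.5 * wind) * t

        # Synthetic truth and noisy "airborne" observations of the front position.
        truth = np.array([0.8, 3.0])
        obs_times = np.array([10.0, 20.0, 30.0])
        obs_err = 2.0
        y_obs = front_position(truth, obs_times) + rng.normal(0.0, obs_err, obs_times.size)

        # Ensemble of parameter vectors (the EnKF state here is the parameter vector).
        N = 50
        ens = np.column_stack([rng.normal(1.0, 0.3, N), rng.normal(2.0, 1.0, N)])

        # Forecast observations for each member, then the stochastic EnKF update.
        H_ens = np.array([front_position(p, obs_times) for p in ens])      # N x n_obs
        X = ens - ens.mean(axis=0)
        Y = H_ens - H_ens.mean(axis=0)
        P_xy = X.T @ Y / (N - 1)
        P_yy = Y.T @ Y / (N - 1) + obs_err**2 * np.eye(obs_times.size)
        K = P_xy @ np.linalg.inv(P_yy)                                     # Kalman gain
        perturbed = y_obs + rng.normal(0.0, obs_err, (N, obs_times.size))  # perturbed observations
        ens_a = ens + (perturbed - H_ens) @ K.T

        print("prior mean:", ens.mean(axis=0), " posterior mean:", ens_a.mean(axis=0))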

  11. Dependence of credit spread and macro-conditions based on an alterable structure model

    Science.gov (United States)

    2018-01-01

    Fat-tailed financial data and the cyclical financial market make it difficult for the fixed structure model based on the Gaussian distribution to characterize the dynamics of corporate bond spreads. Using a flexible structure model based on the generalized error distribution, this paper focuses on the impact of macro-level factors on the spreads of corporate bonds in China. It is found that in China's corporate bonds market, macroeconomic conditions have obvious structural transformational effects on bond spreads, and their structural features remain stable with the downgrade of bond ratings. The impact of macroeconomic conditions on spreads is significant for different structures, and the differences between the structures increase as ratings decline. For different structures, the persistent characteristics of bond spreads are obviously stronger than those of the recursive ones, which suggests obvious speculation in the bonds market. It is also found that the structure switching of bonds with different ratings is not synchronous, which indicates a shift of investment between different grades of bonds. PMID:29723295

  12. Dependence of credit spread and macro-conditions based on an alterable structure model.

    Science.gov (United States)

    Xie, Yun; Tian, Yixiang; Xiao, Zhuang; Zhou, Xiangyun

    2018-01-01

    Fat-tailed financial data and the cyclical financial market make it difficult for the fixed structure model based on the Gaussian distribution to characterize the dynamics of corporate bond spreads. Using a flexible structure model based on the generalized error distribution, this paper focuses on the impact of macro-level factors on the spreads of corporate bonds in China. It is found that in China's corporate bonds market, macroeconomic conditions have obvious structural transformational effects on bond spreads, and their structural features remain stable with the downgrade of bond ratings. The impact of macroeconomic conditions on spreads is significant for different structures, and the differences between the structures increase as ratings decline. For different structures, the persistent characteristics of bond spreads are obviously stronger than those of the recursive ones, which suggests obvious speculation in the bonds market. It is also found that the structure switching of bonds with different ratings is not synchronous, which indicates a shift of investment between different grades of bonds.

  13. A most spectrum-efficient duplexing system: CDD

    Science.gov (United States)

    Lee, William C. Y.

    2001-10-01

    The game to play in wireless communications when it comes to increasing spectrum efficiency is to eliminate interference. Currently, all cellular systems use FDD (Frequency Division Duplexing) in an attempt to eliminate the interference from the adjacent cells. Through the use of many technologies only one type of interference remains and that is the adjacent base-to-home mobile interference. TDD (Time Division Duplexing) has not been used for mobile cellular systems, not only because of the adjacent base-to-home mobile interference, but also because of the additional adjacent base-to-home base interference, and adjacent mobile-to-home mobile interference. Therefore, TDD can only be used for small, confined area systems. CDD (Code Division Duplexing) can eliminate all three kinds of interference; the adjacent base-to-home mobile, the adjacent base-to-home base, and the adjacent mobile-to-home mobile interference in cellular systems. Eliminating each of these interferences makes CDD the most spectrum efficient duplexing system. This talk will elaborate on a set of smart codes, which will make an efficient CDD system a reality.

  14. Electronic Health Record Based Algorithm to Identify Patients with Autism Spectrum Disorder.

    Directory of Open Access Journals (Sweden)

    Todd Lingren

    Full Text Available Cohort selection is challenging for large-scale electronic health record (EHR) analyses, as International Classification of Diseases 9th edition (ICD-9) diagnostic codes are notoriously unreliable disease predictors. Our objective was to develop, evaluate, and validate an automated algorithm for determining an Autism Spectrum Disorder (ASD) patient cohort from EHR. We demonstrate its utility via the largest investigation to date of the co-occurrence patterns of medical comorbidities in ASD. We extracted ICD-9 codes and concepts derived from the clinical notes. A gold standard patient set was labeled by clinicians at Boston Children's Hospital (BCH) (N = 150) and Cincinnati Children's Hospital and Medical Center (CCHMC) (N = 152). Two algorithms were created: (1) rule-based, implementing the ASD criteria from the Diagnostic and Statistical Manual of Mental Diseases, 4th edition; (2) predictive classifier. The positive predictive values (PPV) achieved by these algorithms were compared to an ICD-9 code baseline. We clustered the patients based on grouped ICD-9 codes and evaluated subgroups. The rule-based algorithm produced the best PPV: (a) BCH: 0.885 vs. 0.273 (baseline); (b) CCHMC: 0.840 vs. 0.645 (baseline); (c) combined: 0.864 vs. 0.460 (baseline). A validation at Children's Hospital of Philadelphia yielded 0.848 (PPV). Clustering analyses of comorbidities on the three-site large cohort (N = 20,658 ASD patients) identified psychiatric, developmental, and seizure disorder clusters. In a large cross-institutional cohort, co-occurrence patterns of comorbidities in ASDs provide further hypothetical evidence for distinct courses in ASD. The proposed automated algorithms for cohort selection open avenues for other large-scale EHR studies and individualized treatment of ASD.

  15. Energy spectrum control for modulated proton beams

    International Nuclear Information System (INIS)

    Hsi, Wen C.; Moyers, Michael F.; Nichiporov, Dmitri; Anferov, Vladimir; Wolanski, Mark; Allgower, Chris E.; Farr, Jonathan B.; Mascia, Anthony E.; Schreuder, Andries N.

    2009-01-01

    In proton therapy delivered with range modulated beams, the energy spectrum of protons entering the delivery nozzle can affect the dose uniformity within the target region and the dose gradient around its periphery. For a cyclotron with a fixed extraction energy, a rangeshifter is used to change the energy but this produces increasing energy spreads for decreasing energies. This study investigated the magnitude of the effects of different energy spreads on dose uniformity and distal edge dose gradient and determined the limits for controlling the incident spectrum. A multilayer Faraday cup (MLFC) was calibrated against depth dose curves measured in water for nonmodulated beams with various incident spectra. Depth dose curves were measured in a water phantom and in a multilayer ionization chamber detector for modulated beams using different incident energy spreads. Some nozzle entrance energy spectra can produce unacceptable dose nonuniformities of up to ±21% over the modulated region. For modulated beams and small beam ranges, the width of the distal penumbra can vary by a factor of 2.5. When the energy spread was controlled within the defined limits, the dose nonuniformity was less than ±3%. To facilitate understanding of the results, the data were compared to the measured and Monte Carlo calculated data from a variable extraction energy synchrotron which has a narrow spectrum for all energies. Dose uniformity is only maintained within prescription limits when the energy spread is controlled. At low energies, a large spread can be beneficial for extending the energy range at which a single range modulator device can be used. An MLFC can be used as part of a feedback to provide specified energy spreads for different energies.

  16. Development of a new nuclide generation and depletion code using a topological solver based on graph theory

    International Nuclear Information System (INIS)

    Kasselmann, S.; Scholthaus, S.; Rössel, C.; Allelein, H.-J.

    2014-01-01

    Calculating the amounts of a coupled nuclide system varying with time, especially when exposed to a neutron flux, is a well-known problem and has been addressed by a number of computer codes. These codes cover a broad spectrum of applications, are based on comprehensive validation work and are therefore justifiably renowned among their users. However, due to their long development history, they are lacking a modern interface, which impedes a fast and robust internal coupling to other codes applied in the field of nuclear reactor physics. Therefore a project has been initiated to develop a new object-oriented nuclide transmutation code. It comprises an innovative solver based on graph theory, which exploits the topology of nuclide chains. This makes it possible to always deal with the smallest nuclide system for the problem of interest. Highest priority has been given to the existence of a generic software interface as well as easy handling by making use of XML files for input and output. In this paper we report on the status of the code development and present first benchmark results, which prove the applicability of the selected approach. (author)

  17. Inhomogeneity of epidemic spreading with entropy-based infected clusters.

    Science.gov (United States)

    Wen-Jie, Zhou; Xing-Yuan, Wang

    2013-12-01

    Considering the difference in the sizes of the infected clusters in dynamic complex networks, the normalized entropy based on infected clusters (δ*) is proposed to characterize the inhomogeneity of epidemic spreading. δ* gives information on the variability of the infected clusters in the system. We investigate the variation in the inhomogeneity of the distribution of the epidemic with the absolute velocity v of the moving agents, the infection density ρ, and the interaction radius r. By comparing δ* in the dynamic networks with δH* in the homogeneous mode, the simulation experiments show that the inhomogeneity of epidemic spreading becomes smaller as v, ρ, and r increase.
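    One plausible reading of the cluster-based entropy measure is the Shannon entropy of the infected-cluster size distribution normalized by its maximum value; the exact definition of δ* is the paper's, so the following is only an illustrative sketch.

        import numpy as np

        def normalized_cluster_entropy(cluster_sizes):
            """Shannon entropy of infected-cluster sizes, normalized to [0, 1].

            cluster_sizes: sizes of the connected infected clusters at one time step.
            Returns 1 when all clusters have equal size (homogeneous spreading) and
            approaches 0 when a single cluster dominates (assumed reading of delta*).
            """
            sizes = np.asarray(cluster_sizes, dtype=float)
            p = sizes / sizes.sum()
            entropy = -np.sum(p * np.log(p))
            n = len(sizes)
            return entropy / np.log(n) if n > 1 else 0.0

        print(normalized_cluster_entropy([10, 10, 10, 10]))   # -> 1.0 (homogeneous)
        print(normalized_cluster_entropy([97, 1, 1, 1]))      # -> close to 0 (one dominant cluster)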

  18. Improvement of molten core-concrete interaction model of the debris spreading analysis model in the SAMPSON code - 15193

    International Nuclear Information System (INIS)

    Hidaka, M.; Fujii, T.; Sakai, T.

    2015-01-01

    A debris spreading analysis (DSA) module has been developed and improved. The module is used in the severe accident analysis code SAMPSON and it has models for 3-dimensional natural convection with simultaneous spreading, melting and solidification. The existing analysis method of the quasi-3D boundary transportation to simulate downward concrete erosion for evaluation of molten-core concrete interaction (MCCI) was improved to full-3D to solve, for instance, debris lateral erosion under concrete floors at the bottom of the sump pit. In the advanced MCCI model, buffer cells were defined in order to solve numerical problems in case of trammel formation. Mass, momentum, and the advection term of energy between the debris melt cells and the buffer cells are solved. On the other hand, only the heat transfer and thermal conduction are solved between the debris melt cells and the structure cells, and the crust cells and the structure cells. As a preliminary analysis, a validation calculation was performed for erosion that occurred in the core-concrete interaction (CCI-2) test in the OECD/MCCI program. Comparison between the calculation and the CCI-2 test results showed the analysis has the ability to simulate debris lateral erosion under concrete floors. (authors)

  19. Gamma ray spectrum analysis code: sigmas 1.0

    International Nuclear Information System (INIS)

    Siangsanan, P.; Dharmavanij, W.; Chongkum, S.

    1996-01-01

    We have developed Sigmas 1.0, a software package for data reduction and gamma-ray spectra evaluation. It is capable of analysing gamma-ray spectra in the range of 0-3 MeV measured with a semiconductor detector, i.e. Ge(Li) or HPGe, and provides peak searching, net area determination, plotting and spectrum display. There are two methods for calculating the net area under peaks: the Covell method and non-linear fitting by the method of Levenberg and Marquardt, which can fit any multiplet peak in the spectrum. The graphic display was rather fast and user friendly

  20. System Based Code: Principal Concept

    International Nuclear Information System (INIS)

    Yasuhide Asada; Masanori Tashimo; Masahiro Ueta

    2002-01-01

    This paper introduces a concept of the 'System Based Code' which has initially been proposed by the authors intending to give nuclear industry a leap of progress in the system reliability, performance improvement, and cost reduction. The concept of the System Based Code intends to give a theoretical procedure to optimize the reliability of the system by administrating every related engineering requirement throughout the life of the system from design to decommissioning. (authors)

  1. Going Multi-viral: Synthedemic Modelling of Internet-based Spreading Phenomena

    Directory of Open Access Journals (Sweden)

    Marily Nika

    2015-02-01

    Full Text Available Epidemics of a biological and technological nature pervade modern life. For centuries, scientific research focused on biological epidemics, with simple compartmental epidemiological models emerging as the dominant explanatory paradigm. Yet there has been limited translation of this effort to explain internet-based spreading phenomena. Indeed, single-epidemic models are inadequate to explain the multimodal nature of complex phenomena. In this paper we propose a novel paradigm for modelling internet-based spreading phenomena based on the composition of multiple compartmental epidemiological models. Our approach is inspired by Fourier analysis, but rather than trigonometric wave forms, our components are compartmental epidemiological models. We show results on simulated multiple epidemic data, swine flu data and BitTorrent downloads of a popular music artist. Our technique can characterise these multimodal data sets utilising a parsimonious number of subepidemic models.
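    The compositional idea can be sketched by fitting a sum of logistic sub-epidemic curves to a cumulative time series with a standard least-squares routine; the paper composes compartmental epidemic models and selects the number of components more carefully, so the two-wave form and all numbers below are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, k, r, t0):
            """Cumulative size of one sub-epidemic (logistic growth curve)."""
            return k / (1.0 + np.exp(-r * (t - t0)))

        def two_wave(t, k1, r1, t01, k2, r2, t02):
            """Superposition of two sub-epidemics (the 'synthedemic' composition idea)."""
            return logistic(t, k1, r1, t01) + logistic(t, k2, r2, t02)

        # Synthetic bimodal data standing in for, e.g., cumulative BitTorrent downloads.
        rng = np.random.default_rng(4)
        t = np.arange(0, 120.0)
        truth = two_wave(t, 1000, 0.15, 30, 600, 0.2, 80)
        data = truth + rng.normal(0, 20, t.size)

        p0 = [800, 0.1, 25, 500, 0.1, 75]                       # rough initial guesses
        popt, _ = curve_fit(two_wave, t, data, p0=p0, maxfev=20000)
        print("fitted sub-epidemic sizes:", popt[0], popt[3])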

  2. Bacterial spread from cell to cell: beyond actin-based motility.

    Science.gov (United States)

    Kuehl, Carole J; Dragoi, Ana-Maria; Talman, Arthur; Agaisse, Hervé

    2015-09-01

    Several intracellular pathogens display the ability to propagate within host tissues by displaying actin-based motility in the cytosol of infected cells. As motile bacteria reach cell-cell contacts they form plasma membrane protrusions that project into adjacent cells and resolve into vacuoles from which the pathogen escapes, thereby achieving spread from cell to cell. Seminal studies have defined the bacterial and cellular factors that support actin-based motility. By contrast, the mechanisms supporting the formation of protrusions and their resolution into vacuoles have remained elusive. Here, we review recent advances in the field showing that Listeria monocytogenes and Shigella flexneri have evolved pathogen-specific mechanisms of bacterial spread from cell to cell. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Non-binary unitary error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.

    1996-06-01

    Error operator bases for systems of any dimension are defined and natural generalizations of the bit-flip/sign-change error basis for qubits are given. These bases allow generalizing the construction of quantum codes based on eigenspaces of Abelian groups. As a consequence, quantum codes can be constructed from linear codes over Z_n for any n. The generalization of the punctured code construction leads to many codes which permit transversal (i.e. fault tolerant) implementations of certain operations compatible with the error basis.

  4. Discrete Ramanujan transform for distinguishing the protein coding regions from other regions.

    Science.gov (United States)

    Hua, Wei; Wang, Jiasong; Zhao, Jian

    2014-01-01

    Based on the study of the Ramanujan sum and Ramanujan coefficient, this paper suggests the concepts of the discrete Ramanujan transform and spectrum. Using the Voss numerical representation, one maps a symbolic DNA strand to a numerical DNA sequence and deduces the discrete Ramanujan spectrum of the numerical DNA sequence. It is well known that the discrete Fourier power spectrum of a protein coding sequence has an important feature of 3-base periodicity, which is widely used for DNA sequence analysis by the technique of the discrete Fourier transform. This is performed by testing the signal-to-noise ratio at frequency N/3 as a criterion for the analysis, where N is the length of the sequence. The results presented in this paper show that the property of 3-base periodicity can be identified as a prominent spike of the discrete Ramanujan spectrum at period 3 only for the protein coding regions. The signal-to-noise ratio for the discrete Ramanujan spectrum is defined for numerical measurement. Therefore, the discrete Ramanujan spectrum and the signal-to-noise ratio of a DNA sequence can be used for distinguishing the protein coding regions from the noncoding regions. All the exon and intron sequences in whole chromosomes 1, 2, 3 and 4 of Caenorhabditis elegans have been tested, and the histograms and tables from the computational results illustrate the reliability of our method. In addition, we have shown theoretically that the algorithm for calculating the discrete Ramanujan spectrum has lower computational complexity and higher computational accuracy. The computational experiments show that the technique of using the discrete Ramanujan spectrum for classifying different DNA sequences is a fast and effective method. Copyright © 2014 Elsevier Ltd. All rights reserved.
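    For orientation only, the sketch below computes the classical Ramanujan sum c_q(n), builds the Voss indicator sequences and projects them onto the period-q Ramanujan sequences to look for a period-3 peak. The paper's precise transform and signal-to-noise definition are not reproduced; the projection used here and the toy sequence are assumptions.

        import numpy as np
        from math import gcd

        def ramanujan_sum(q, n):
            """Classical Ramanujan sum c_q(n) = sum over k coprime with q of cos(2*pi*k*n/q)."""
            return sum(np.cos(2.0 * np.pi * k * n / q) for k in range(1, q + 1) if gcd(k, q) == 1)

        def voss_indicators(seq):
            """Voss representation: one 0/1 indicator sequence per nucleotide."""
            seq = seq.upper()
            return {b: np.array([1.0 if s == b else 0.0 for s in seq]) for b in "ACGT"}

        def ramanujan_energy(x, q):
            """Energy of the projection of x onto the Ramanujan sequence of period q
            (an assumed, simplified stand-in for the paper's discrete Ramanujan spectrum)."""
            n = np.arange(len(x))
            basis = np.array([ramanujan_sum(q, m) for m in n])
            return (np.dot(x - x.mean(), basis) ** 2) / len(x)

        # Toy sequence with an artificial codon-position bias (not a real gene).
        dna = "ATGGCAGTTGCTAAAGCGGTAGCCATGGCT" * 10
        indicators = voss_indicators(dna)
        spectrum = {q: sum(ramanujan_energy(x, q) for x in indicators.values())
                    for q in range(2, 11)}
        peak = max(spectrum, key=spectrum.get)
        print("dominant period:", peak, "(a period-3 peak is the coding-region signature)")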

  5. Catalogue to select the initial guess spectrum during unfolding

    CERN Document Server

    Vega-Carrillo, H R

    2002-01-01

    A new method to select the initial guess spectrum is presented. Neutron spectra unfolded from Bonner sphere data are dependent on the initial guess spectrum used in the unfolding code. The method is based on a catalogue of detector count rates calculated from a set of reported neutron spectra. The spectra of three isotopic neutron sources, 252Cf, 239PuBe and 252Cf/D2O, were measured to test the method. The unfolding was carried out using the three initial guess options included in the BUNKIUT code. Neutron spectra were also calculated using MCNP code. Unfolded spectra were compared with those calculated; in all the cases our method gives the best results.
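    The catalogue idea reduces to a nearest-neighbour search over stored count-rate vectors, as in the following sketch; the catalogue entries, normalization and distance measure are illustrative assumptions, and the coupling to BUNKIUT is not shown.

        import numpy as np

        def select_initial_guess(measured_rates, catalogue):
            """Pick the catalogue spectrum whose calculated Bonner-sphere count rates
            best match the measurement (shape comparison after normalization).

            catalogue: list of (name, calculated_rates, spectrum) tuples -- all assumed.
            """
            m = np.asarray(measured_rates, dtype=float)
            m = m / m.sum()
            best = min(catalogue,
                       key=lambda entry: np.sum((np.asarray(entry[1]) / np.sum(entry[1]) - m) ** 2))
            return best[0], best[2]

        # Toy catalogue with made-up count rates for three source types (illustrative only).
        catalogue = [
            ("252Cf",     [1.0, 2.1, 3.0, 2.4, 1.1], "cf_spectrum..."),
            ("239PuBe",   [0.6, 1.5, 2.8, 3.1, 1.8], "pube_spectrum..."),
            ("252Cf/D2O", [2.2, 3.0, 2.3, 1.2, 0.5], "cf_d2o_spectrum..."),
        ]
        name, guess = select_initial_guess([0.7, 1.6, 2.9, 3.0, 1.7], catalogue)
        print("selected initial guess:", name)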

  6. Group representations, error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E

    1996-01-01

    This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.

  7. Comparison of PSF maxima and minima of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems

    Science.gov (United States)

    Ratnam, Challa; Lakshmana Rao, Vadlamudi; Lachaa Goud, Sivagouni

    2006-10-01

    In the present paper, and a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission function for MACA and CMACA is derived using Fourier methods and, based on the Fresnel-Kirchhoff diffraction theory, the formulae for the point spread function are formulated. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper.

  8. Comparison of PSF maxima and minima of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems

    International Nuclear Information System (INIS)

    Ratnam, Challa; Rao, Vadlamudi Lakshmana; Goud, Sivagouni Lachaa

    2006-01-01

    In the present paper, and a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission function for MACA and CMACA is derived using Fourier methods and, based on the Fresnel-Kirchhoff diffraction theory, the formulae for the point spread function are formulated. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper

  9. Spectrum optimization-based chaotification using time-delay feedback control

    International Nuclear Information System (INIS)

    Zhou Jiaxi; Xu Daolin; Zhang Jing; Liu Chunrong

    2012-01-01

    Highlights: ► A time-delay feedback controller is designed for chaotification. ► A spectrum optimization method is proposed to determine chaotification parameters. ► Numerical examples verify the spectrum optimization-based chaotification method. ► Engineering application in line spectrum reconfiguration is demonstrated. - Abstract: In this paper, a spectrum optimization method is developed for chaotification in conjunction with an application in line spectrum reconfiguration. A key performance index (the objective function) based on the Fourier spectrum is specially devised with the idea of suppressing spectrum spikes and broadening the frequency band. Minimization of the index by means of a genetic algorithm makes it possible to locate favorable parameters of the time-delay feedback controller, by which a line spectrum of harmonic vibration can be transformed into a broad-band continuous spectrum of chaotic motion. Numerical simulations are carried out to verify the feasibility of the method and to demonstrate its effectiveness in chaotifying a 2-DOF linear mechanical system.
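    The spectrum-based performance index can be illustrated as below: an assumed combination of spectral flatness and dominant-line strength computed from the FFT of a steady-state response. In a full design this objective would be minimized over the delay-feedback gain and delay by a genetic algorithm (or a similar global optimizer); that outer loop and the delayed-feedback simulation are omitted here.

        import numpy as np

        def spectrum_objective(x):
            """Performance index favouring a broad, flat spectrum over line spikes.

            Combines (negative) spectral flatness with the height of the largest spike;
            the exact index used in the paper is not reproduced -- this is an assumed form.
            """
            X = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
            X = X[1:]                                             # drop the DC bin
            X = X / X.sum()
            flatness = np.exp(np.mean(np.log(X + 1e-20))) / np.mean(X)   # 1 = white-like
            peakiness = X.max()                                           # dominant line strength
            return peakiness - flatness                # smaller is "more chaotic/broadband"

        fs = 1000.0
        t = np.arange(0, 4.0, 1.0 / fs)
        line_spectrum_signal = np.sin(2 * np.pi * 50 * t)                  # harmonic vibration
        broadband_signal = np.cumsum(np.random.default_rng(5).normal(size=t.size))
        broadband_signal -= broadband_signal.mean()

        print("harmonic :", spectrum_objective(line_spectrum_signal))
        print("broadband:", spectrum_objective(broadband_signal))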

  10. State of art in FE-based fuel performance codes

    International Nuclear Information System (INIS)

    Kim, Hyo Chan; Yang, Yong Sik; Kim, Dae Ho; Bang, Je Geon; Kim, Sun Ki; Koo, Yang Hyun

    2013-01-01

    Fuel performance codes approximate this complex behavior using an axisymmetric, axially-stacked, one-dimensional radial representation to save computation cost. However, the need for improved modeling of PCMI and, particularly, the importance of multidimensional capability for accurate fuel performance simulation have been identified as safety margins decrease. The finite element (FE) method, a reliable and proven solution in the mechanical field, has been introduced into fuel performance codes for multidimensional analysis. The present state of the art in FE-based fuel performance simulation predominantly involves 2-D axisymmetric models and 3-D volumetric models. FRAPCON and FRAPTRAN have 1.5-D and 2-D FE models to simulate PCMI and cladding ballooning. In 2-D simulation, the FALCON code, developed by EPRI, is a 2-D (R-Z and R-θ) fully thermal-mechanically coupled steady-state and transient FE-based fuel behavior code. The French codes TOUTATIS and ALCYONE are 3-D and typically used to investigate localized behavior. Since 2008, the Idaho National Laboratory (INL) has been developing a multidimensional (2-D and 3-D) nuclear fuel performance code called BISON. In this paper, the current state of FE-based fuel performance codes and their models is presented. Based on an investigation into the codes, requirements and directions of development for a new FE-based fuel performance code can be discussed. Based on a comparison of the models in FE-based fuel performance codes, the state of the art in the codes can be discussed. A new FE-based fuel performance code should include the typical pellet and cladding models that all codes have. In particular, specific pellet and cladding models such as gaseous swelling and high burnup structure (HBS) models should be developed to improve the accuracy of the code as well as to consider AC conditions. To reduce computation cost, an approximated gap model and an optimized contact model should also be developed

  11. GSAP: FORTRAN code for gamma-spectrum analysis

    International Nuclear Information System (INIS)

    Hnatowicz, V.; Kozma, P.; Ilyushchenko, V.I.

    1989-01-01

    The GSAP program performs fully automatic evaluation of gamma-ray energy spectra measured with semiconductor detectors. After the input data comprising experimental spectrum, energy and FWHM calibrations and parameters controlling the peak search are supplied, the program starts peak searching from the spectrum beginning. The detected peaks are arranged into multiplets which are unfolded by standard non-linear least-squares-fit assuming Gaussian peak and linear background. The program proceeds until all multiplets are processed. The determined peak parameters are printed and the result of each particular fit is shown in the graphical form. 6 refs

  12. Design and performance analysis for several new classes of codes for optical synchronous CDMA and for arbitrary-medium time-hopping synchronous CDMA communication systems

    Science.gov (United States)

    Kostic, Zoran; Titlebaum, Edward L.

    1994-08-01

    New families of spread-spectrum codes are constructed, that are applicable to optical synchronous code-division multiple-access (CDMA) communications as well as to arbitrary-medium time-hopping synchronous CDMA communications. Proposed constructions are based on the mappings from integer sequences into binary sequences. We use the concept of number theoretic quadratic congruences and a subset of Reed-Solomon codes similar to the one utilized in the Welch-Costas frequency-hop (FH) patterns. The properties of the codes are as good as or better than the properties of existing codes for synchronous CDMA communications: Both the number of code-sequences within a single code family and the number of code families with good properties are significantly increased when compared to the known code designs. Possible applications are presented. To evaluate the performance of the proposed codes, a new class of hit arrays called cyclical hit arrays is recalled, which give insight into the previously unknown properties of the few classes of number theoretic FH patterns. Cyclical hit arrays and the proposed mappings are used to determine the exact probability distribution functions of random variables that represent interference between users of a time-hopping or optical CDMA system. Expressions for the bit error probability in multi-user CDMA systems are derived as a function of the number of simultaneous CDMA system users, the length of signature sequences and the threshold of a matched filter detector. The performance results are compared with the results for some previously known codes.
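    One concrete number-theoretic construction of this kind is the quadratic congruence placement y_a(k) = a·k(k+1)/2 mod p, mapped to a binary one-chip-per-frame pattern; whether this particular mapping matches the paper's is an assumption, but it lets the cyclical hit-array idea be demonstrated numerically.

        import numpy as np

        def quadratic_congruence_pattern(a, p):
            """Placement sequence y_a(k) = a*k*(k+1)/2 mod p (one variant of a
            number-theoretic quadratic congruence construction; p must be prime)."""
            k = np.arange(p)
            return (a * k * (k + 1) // 2) % p

        def to_binary_matrix(pattern, p):
            """Map the integer sequence to a p x p binary spreading pattern
            (one chip per frame, as in time-hopping / optical CDMA signatures)."""
            m = np.zeros((p, p), dtype=int)
            m[np.arange(p), pattern] = 1
            return m

        def max_hits(ya, yb, p):
            """Largest entry of the cyclical hit array: coincidences over all
            relative time shifts t and chip/frequency shifts s."""
            return max(np.sum(ya == (np.roll(yb, t) + s) % p)
                       for t in range(p) for s in range(p))

        p = 13
        patterns = {a: quadratic_congruence_pattern(a, p) for a in range(1, p)}
        signature = to_binary_matrix(patterns[1], p)              # example binary signature
        worst = max(max_hits(patterns[a], patterns[b], p)
                    for a in patterns for b in patterns if a < b)
        print("code family size:", len(patterns),
              " worst-case cross-correlation hits:", worst)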

  13. A Simple Model to Rank Shellfish Farming Areas Based on the Risk of Disease Introduction and Spread.

    Science.gov (United States)

    Thrush, M A; Pearce, F M; Gubbins, M J; Oidtmann, B C; Peeler, E J

    2017-08-01

    The European Union Council Directive 2006/88/EC requires that risk-based surveillance (RBS) for listed aquatic animal diseases is applied to all aquaculture production businesses. The principle behind this is the efficient use of resources directed towards high-risk farm categories, animal types and geographic areas. To achieve this requirement, fish and shellfish farms must be ranked according to their risk of disease introduction and spread. We present a method to risk rank shellfish farming areas based on the risk of disease introduction and spread and demonstrate how the approach was applied in 45 shellfish farming areas in England and Wales. Ten parameters were used to inform the risk model, which were grouped into four risk themes based on related pathways for transmission of pathogens: (i) live animal movement, (ii) transmission via water, (iii) short distance mechanical spread (birds) and (iv) long distance mechanical spread (vessels). Weights (informed by expert knowledge) were applied both to individual parameters and to risk themes for introduction and spread to reflect their relative importance. A spreadsheet model was developed to determine quantitative scores for the risk of pathogen introduction and risk of pathogen spread for each shellfish farming area. These scores were used to independently rank areas for risk of introduction and for risk of spread. Thresholds were set to establish risk categories (low, medium and high) for introduction and spread based on risk scores. Risk categories for introduction and spread for each area were combined to provide overall risk categories to inform a risk-based surveillance programme directed at the area level. Applying the combined risk category designation framework for risk of introduction and spread suggested by European Commission guidance for risk-based surveillance, 4, 10 and 31 areas were classified as high, medium and low risk, respectively. © 2016 Crown copyright.
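    The scoring scheme described above amounts to a weighted sum per theme followed by thresholding, roughly as in the sketch below; the weights, thresholds and the rule for combining introduction and spread categories are placeholders, not the expert-elicited values used for England and Wales.

        # Parameter weights grouped into the four risk themes named in the paper; the
        # numerical weights and thresholds below are illustrative placeholders.
        theme_weights = {"live_movement": 0.4, "water": 0.3, "birds": 0.15, "vessels": 0.15}

        def area_risk(theme_scores, thresholds=(0.33, 0.66)):
            """Combine per-theme scores (0-1) into a risk score and a low/medium/high category."""
            score = sum(theme_weights[t] * s for t, s in theme_scores.items())
            lo, hi = thresholds
            category = "low" if score < lo else "medium" if score < hi else "high"
            return score, category

        def combined_category(introduction, spread):
            """Combine introduction and spread categories (take the higher; a simple assumption)."""
            order = ["low", "medium", "high"]
            return order[max(order.index(introduction), order.index(spread))]

        intro = area_risk({"live_movement": 0.9, "water": 0.5, "birds": 0.2, "vessels": 0.4})
        sprd  = area_risk({"live_movement": 0.3, "water": 0.6, "birds": 0.1, "vessels": 0.2})
        print("introduction:", intro, " spread:", sprd,
              " overall:", combined_category(intro[1], sprd[1]))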

  14. Spread Spectrum Modulation by Using Asymmetric-Carrier Random PWM

    DEFF Research Database (Denmark)

    Mathe, Laszlo; Lungeanu, Florin; Sera, Dezso

    2012-01-01

    is very effective and is independent of the modulation index. The flat motor current spectrum generates acoustical noise close to white noise, which improves the acoustical performance of the drive. The new carrier wave is easy to implement digitally, without employing any external circuits...

  15. Efficient coding schemes with power allocation using space-time-frequency spreading

    Institute of Scientific and Technical Information of China (English)

    Jiang Haining; Luo Hanwen; Tian Jifeng; Song Wentao; Liu Xingzhao

    2006-01-01

    An efficient space-time-frequency (STF) coding strategy for multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) systems is presented for high bit rate data transmission over frequency selective fading channels. The proposed scheme is a new approach to space-time-frequency coded OFDM (COFDM) that combines OFDM with space-time coding, linear precoding and adaptive power allocation to provide higher quality of transmission in terms of the bit error rate performance and power efficiency. In addition to exploiting the maximum diversity gain in frequency, time and space, the proposed scheme enjoys high coding advantages and low-complexity decoding. The significant performance improvement of our design is confirmed by corroborating numerical simulations.

  16. Code for calculation of spreading of radioactivity in reactor containment systems

    International Nuclear Information System (INIS)

    Vertes, P.

    1992-09-01

    A detailed description of the new version of TIBSO code is given, with applications for accident analysis in a reactor containment system. The TIBSO code can follow the nuclear transition and the spatial migration of radioactive materials. The modelling of such processes is established in a very flexible way enabling the user to investigate a wide range of problems. The TIBSO code system is described in detail, taking into account the new developments since 1983. Most changes improve the capabilities of the code. The new version of TIBSO system is written in FORTRAN-77 and can be operated both under VAX VMS and PC DOS. (author) 5 refs.; 3 figs.; 21 tabs

  17. Generation of pseudo-random sequences for spread spectrum systems

    Science.gov (United States)

    Moser, R.; Stover, J.

    1985-05-01

    The characteristics of pseudo random radio signal sequences (PRS) are explored. The randomness of the PRS is a matter of artificially altering the sequence of binary digits broadcast. Autocorrelations of the two sequences shifted in time, if high, determine if the signals are the same and thus allow for position identification. Cross-correlation can also be calculated between sequences. Correlations closest to zero are obtained with a large volume of prime numbers in the sequences. Techniques for selecting optimal and maximal lengths for the sequences are reviewed. If the correlations are near zero in the sequences, then signal channels can accommodate multiple users. Finally, Gold codes are discussed as a technique for maximizing the code lengths.
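    A minimal generator for such sequences is sketched below: two m-sequences from a commonly quoted preferred pair of degree-5 feedback polynomials are combined into a Gold family, and the periodic cross-correlation values are computed. The tap choices and seed are assumptions taken from textbook examples, not from this report.

        import numpy as np

        def m_sequence(taps, degree, seed=None):
            """Maximal-length sequence from the recurrence a[t] = XOR of a[t-d] for d in taps.

            taps: feedback delays, e.g. (5, 2); degree: register length n, period 2**n - 1.
            """
            a = list(seed) if seed is not None else [1] + [0] * (degree - 1)
            for t in range(degree, 2 ** degree - 1):
                a.append(int(np.bitwise_xor.reduce([a[t - d] for d in taps])))
            return np.array(a[: 2 ** degree - 1])

        def gold_family(seq1, seq2):
            """Gold code family: the two m-sequences plus all cyclic-shift XOR combinations."""
            n = len(seq1)
            return [seq1, seq2] + [seq1 ^ np.roll(seq2, s) for s in range(n)]

        def cross_correlation_values(x, y):
            """Set of periodic cross-correlation values over all cyclic shifts (bipolar mapping)."""
            xb, yb = 1 - 2 * x, 1 - 2 * y               # map {0,1} -> {+1,-1}
            return sorted({int(np.dot(xb, np.roll(yb, s))) for s in range(len(x))})

        # Degree-5 feedback delays taken from a commonly quoted preferred pair (assumed here).
        u = m_sequence((5, 2), 5)
        v = m_sequence((5, 4, 3, 2), 5)
        codes = gold_family(u, v)                        # 33 codes of length 31

        # For a true preferred pair the periodic cross-correlation takes only three values.
        print("family size:", len(codes))
        print("cross-correlation values:", cross_correlation_values(u, v))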

  18. Many channel spectrum unfolding

    International Nuclear Information System (INIS)

    Najzer, M.; Glumac, B.; Pauko, M.

    1980-01-01

    The principle of the ITER unfolding code as used for the many channel spectrum unfolding is described. Its unfolding ability is tested on seven typical neutron spectra. The effect of the initial spectrum approximation upon the solution is discussed

  19. Design of deterministic interleaver for turbo codes

    International Nuclear Information System (INIS)

    Arif, M.A.; Sheikh, N.M.; Sheikh, A.U.H.

    2008-01-01

    The choice of suitable interleaver for turbo codes can improve the performance considerably. For long block lengths, random interleavers perform well, but for some applications it is desirable to keep the block length shorter to avoid latency. For such applications deterministic interleavers perform better. The performance and design of a deterministic interleaver for short frame turbo codes is considered in this paper. The main characteristic of this class of deterministic interleaver is that their algebraic design selects the best permutation generator such that the points in smaller subsets of the interleaved output are uniformly spread over the entire range of the information data frame. It is observed that the interleaver designed in this manner improves the minimum distance or reduces the multiplicity of first few spectral lines of minimum distance spectrum. Finally we introduce a circular shift in the permutation function to reduce the correlation between the parity bits corresponding to the original and interleaved data frames to improve the decoding capability of MAP (Maximum A Posteriori) probability decoder. Our solution to design a deterministic interleaver outperforms the semi-random interleavers and the deterministic interleavers reported in the literature. (author)
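    As an illustration of an algebraic deterministic interleaver with a circular shift, the sketch below uses a quadratic permutation polynomial; this is a stand-in for the paper's specific design, and the parameters are arbitrary examples.

        import numpy as np

        def deterministic_interleaver(N, f1, f2, shift=0):
            """pi(i) = (f1*i + f2*i*i + shift) mod N -- an algebraic (QPP-style) interleaver
            with an extra circular shift, chosen here only to illustrate the idea of a
            deterministic, spread-preserving permutation, not the paper's exact design."""
            i = np.arange(N)
            pi = (f1 * i + f2 * i * i + shift) % N
            if len(set(pi.tolist())) != N:
                raise ValueError("f1, f2 do not define a permutation for this N")
            return pi

        def minimum_spread(pi):
            """Spread factor: min over i != j of cyclic |i - j| + cyclic |pi(i) - pi(j)|."""
            N = len(pi)
            best = N
            for i in range(N):
                for j in range(i + 1, N):
                    d = (min(abs(i - j), N - abs(i - j))
                         + min(abs(pi[i] - pi[j]), N - abs(pi[i] - pi[j])))
                    best = min(best, d)
            return best

        N = 40
        pi = deterministic_interleaver(N, f1=7, f2=10, shift=5)   # example parameters (assumed)
        print("interleaver:", pi[:10], "...")
        print("minimum spread:", minimum_spread(pi))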

  20. The VULCANO VE-U7 Corium spreading benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Journeau, Christophe; Haquet, Jean-Francois [CEA Cadarache, Severe Accident Mastering experimental Laboratory (DEN/DTN/STRI/LMA), 13108 St Paul lez Durance (France); Spindler, Bertrand [CEA Grenoble, Physicochemistry and Multiphasic Thermalhydraulics Laboratory (DEN/DTN/SE2T/LPTM), 17 rue des Martyrs, F-38054 Grenoble CEDEX 9 (France); Spengler, Claus [Gesellschaft fuer Reaktorsicherheit mbH, Department for Thermohydraulics/Process Engineering, Schwertnergasse 1, D-50667 Koeln (Germany); Foit, Jerzy [Forschungszentrum Karlsruhe GmbH, Institut fuer Kern- und Energietechnik (IKET), P.O. Box 3640, D-76021 Karlsruhe (Germany)

    2006-07-01

    In a hypothetical nuclear reactor severe accident, corium spreading is one possible mitigation measure that has been selected for the EPR design. A post-test benchmark exercise has been organized on the VULCANO VE-U7 corium spreading experiment. In this test, a prototypic corium mixture representative of what could be expected at the opening of EPR reactor-pit gate has been spread on siliceous concrete and on a reference channel in inert refractory ceramic. The spreading progression was not much affected by the presence of concrete and sparging gases. The procedure used to estimate the corium physical properties from its composition and temperature provided a satisfactory data set. The CORFLOW, LAVA and THEMA codes provide satisfactory calculations of the spreading front evolution and of its final length. LAVA and THEMA estimations of the substrate temperatures, which are the initial conditions for longer term Molten Core Concrete Interaction or Corium Ceramic Interaction computations, are also close to the measured data, within the experimental uncertainties. (authors)

  1. The VULCANO VE-U7 Corium spreading benchmark

    International Nuclear Information System (INIS)

    Journeau, Christophe; Haquet, Jean-Francois; Spindler, Bertrand; Spengler, Claus; Foit, Jerzy

    2006-01-01

    In a hypothetical nuclear reactor severe accident, corium spreading is one possible mitigation measure that has been selected for the EPR design. A post-test benchmark exercise has been organized on the VULCANO VE-U7 corium spreading experiment. In this test, a prototypic corium mixture representative of what could be expected at the opening of EPR reactor-pit gate has been spread on siliceous concrete and on a reference channel in inert refractory ceramic. The spreading progression was not much affected by the presence of concrete and sparging gases. The procedure used to estimate the corium physical properties from its composition and temperature provided a satisfactory data set. The CORFLOW, LAVA and THEMA codes provide satisfactory calculations of the spreading front evolution and of its final length. LAVA and THEMA estimations of the substrate temperatures, which are the initial conditions for longer term Molten Core Concrete Interaction or Corium Ceramic Interaction computations, are also close to the measured data, within the experimental uncertainties. (authors)

  2. Attention-spreading based on hierarchical spatial representations for connected objects.

    Science.gov (United States)

    Kasai, Tetsuko

    2010-01-01

    Attention selects objects or groups as the most fundamental unit, and this may be achieved through a process in which attention automatically spreads throughout their entire region. Previously, we found that a lateralized potential relative to an attended hemifield at occipito-temporal electrode sites reflects attention-spreading in response to connected bilateral stimuli [Kasai, T., & Kondo, M. Electrophysiological correlates of attention-spreading in visual grouping. NeuroReport, 18, 93-98, 2007]. The present study examined the nature of object representations by manipulating the extent of grouping through connectedness, while controlling the symmetrical structure of bilateral stimuli. The electrophysiological results of two experiments consistently indicated that attention was guided twice in association with perceptual grouping in the early phase (N1, 150-200 msec poststimulus) and with the unity of an object in the later phase (N2pc, 310/330-390 msec). This suggests that there are two processes in object-based spatial selection, and these are discussed with regard to their cognitive mechanisms and object representations.

  3. Modeling potential Emerald Ash Borer spread through GIS/cell-based/gravity models with data bolstered by web-based inputs

    Science.gov (United States)

    Louis R. Iverson; Anantha M. Prasad; Davis Sydnor; Jonathan Bossenbroek; Mark W. Schwartz; Mark W. Schwartz

    2006-01-01

    We model the susceptibility and potential spread of the organism across the eastern United States and especially through Michigan and Ohio using Forest Inventory and Analysis (FIA) data. We are also developing a cell-based model for the potential spread of the organism. We have developed a web-based tool for public agencies and private individuals to enter the...

  4. An agent-based computational model for tuberculosis spreading on age-structured populations

    Science.gov (United States)

    Graciani Rodrigues, C. C.; Espíndola, Aquino L.; Penna, T. J. P.

    2015-06-01

    In this work we present an agent-based computational model to study the spreading of the tuberculosis (TB) disease on age-structured populations. The model proposed is a merge of two previous models: an agent-based computational model for the spreading of tuberculosis and a bit-string model for biological aging. The combination of TB with population aging reproduces the coexistence of health states, as seen in real populations. In addition, the universal exponential behavior of the mortality curves is still preserved. Finally, the population distribution as a function of age shows that TB is prevalent mostly in elders for high-efficacy treatments.

  5. Measurement of 235U fission spectrum-averaged cross sections and neutron spectrum adjusted with the activation data

    International Nuclear Information System (INIS)

    Kobayashi, Katsuhei; Kobayashi, Tooru

    1992-01-01

    The 235 U fission spectrum-averaged cross sections for 13 threshold reactions were measured with the fission plate (27 cm in diameter and 1.1 cm thick) at the heavy water thermal neutron facility of the Kyoto University Reactor. The Monte Carlo code MCNP was applied to check the deviation from the 235 U fission neutron spectrum due to the room-scattered neutrons, and it was found that the resultant spectrum was close to that of 235 U fission neutrons. Supplementally, the relations to derive the absorbed dose rates with the fission plate were also given using the calculated neutron spectra and the neutron Kerma factors. Finally, the present values of the fission spectrum-averaged cross sections were employed to adjust the 235 U fission neutron spectrum with the NEUPAC code. The adjusted spectrum showed a good agreement with the Watt-type fission neutron spectrum. (author)
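
    The folding of a cross section with a fission spectrum that underlies such spectrum-averaged values can be written as the ratio of the integral of sigma(E)*chi(E) to the integral of chi(E). The sketch below evaluates this for a Watt representation of the 235U fission spectrum and a purely illustrative threshold cross section (the parameters a and b are commonly quoted values; the cross-section shape is an assumption, not evaluated data):

```python
import numpy as np

# Watt form of the 235U thermal-fission neutron spectrum,
# chi(E) ~ exp(-E/a) * sinh(sqrt(b*E)), with a and b in MeV.
a, b = 0.988, 2.249

def watt(E):
    return np.exp(-E / a) * np.sinh(np.sqrt(b * E))

# Hypothetical threshold-reaction cross section (illustrative shape, in barn).
def sigma(E):
    return np.where(E > 3.0, 0.6 * (1.0 - np.exp(-(E - 3.0))), 0.0)

E = np.linspace(1e-3, 20.0, 20000)            # MeV grid
chi = watt(E)
sigma_avg = np.trapz(sigma(E) * chi, E) / np.trapz(chi, E)
print("spectrum-averaged cross section ~ %.1f mb" % (1e3 * sigma_avg))
```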

  6. Investigation on influence of crust formation on VULCANO VE-U7 corium spreading with MPS method

    International Nuclear Information System (INIS)

    Yasumura, Yusan; Yamaji, Akifumi; Furuya, Masahiro; Ohishi, Yuji; Duan, Guangtao

    2017-01-01

    Highlights:
    • The new crust formation model was developed for the MPS spreading analysis code.
    • The VULCANO VE-U7 corium spreading experiment was analyzed by the developed code.
    • The termination of the spreading was governed by the crust formation at the leading edge.

    Abstract: In a severe accident of a light water reactor, the corium spreading behavior on a containment floor is important as it may threaten the containment vessel integrity. The Moving Particle Semi-implicit (MPS) method is one of the Lagrangian particle methods for simulation of incompressible flow. In this study, the MPS method is further developed to simulate corium spreading involving not only flow, but also heat transfer, phase change and thermo-physical property change of corium. A new crust formation model was developed, in which immobilization of crust was modeled by stopping the particle movement when its solid fraction is above the threshold and it is in contact with the substrate or any other immobilized particles. The VULCANO VE-U7 corium spreading experiment was analyzed by the developed MPS spreading analysis code to investigate the influences of different particle sizes, the corium viscosity changes, and the “immobilization solid fraction” of the crust formation model on the spreading and its termination. Viscosity change of the corium was influential to the overall progression of the spreading leading edge, whereas termination of the spreading was primarily determined by the immobilization of the leading edge (i.e., crust formation). The progression of the leading edge and termination of the spreading were well predicted, but the simulation overestimated the substrate temperature. Further investigations may be necessary in a future study to see if thermal resistance at the corium-substrate boundary has a significant influence on the overall spreading behavior and its termination.
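
    The immobilization rule described above can be sketched in a few lines. The fragment below is only a schematic of the criterion (solid fraction above a threshold and contact with the substrate or an already frozen particle), not the developed MPS code; the threshold and contact radius are placeholder values.

```python
import numpy as np

def update_immobilized(pos, solid_frac, immobilized, on_substrate,
                       threshold=0.55, contact_radius=1.5e-3):
    """One sweep of a crust-immobilization rule in the spirit of the model above.

    A particle is frozen when its solid fraction exceeds `threshold` AND it touches
    the substrate or an already immobilized particle.  Threshold and contact radius
    are placeholders, not the calibrated values of the paper."""
    changed = True
    while changed:                            # let immobilization propagate inward
        changed = False
        for i in range(len(pos)):
            if immobilized[i] or solid_frac[i] < threshold:
                continue
            near_frozen = any(
                immobilized[j] and np.linalg.norm(pos[i] - pos[j]) < contact_radius
                for j in range(len(pos)) if j != i)
            if on_substrate[i] or near_frozen:
                immobilized[i] = True
                changed = True
    return immobilized

# Tiny demo: four particles in a row, only the first resting on the substrate.
pos = np.array([[0.0, 0.0], [1e-3, 0.0], [2e-3, 0.0], [3e-3, 0.0]])
solid = [0.8, 0.7, 0.6, 0.2]
frozen = update_immobilized(pos, solid, [False] * 4, [True, False, False, False])
print(frozen)    # -> [True, True, True, False]: the leading edge freezes, the melt does not
```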

  7. Evaluation of geometrical contributions to the spread of the Compton-scatter energy distribution

    International Nuclear Information System (INIS)

    Hanson, A.L.; Gigante, G.E. (Dipartimento di Fisica, Universita degli Studi di Roma I, ''La Sapienza,'' Corso Vittorio Emanuele II, 244, 00186 Roma, Italy)

    1989-01-01

    The spectrum from Compton-scattered x rays is an inherently broad distribution. This distribution is the sum of several Gaussian-like distributions, which gives the sum its unique shape. The Gaussian-like distributions are the result of convoluting the so-called Compton profile, the spread in the scattered-x-ray energies due to the momentum distributions of the target electrons, with the detector response and the geometrical effects. The distribution is then further modified by the absorption within the sample. A formulation for both qualitatively and quantitatively determining the magnitude of the geometrical contributions is presented. This formulation is based on a recently devised approach to the scattering geometry [Hanson, Gigante, Meron, Phys. Rev. Lett. 61, 135 (1988)]. A methodology for determining the geometrical spread in the energy of the scattered x rays is presented. The results can be conveniently used to optimize scattering geometries for the reduction of the geometry-caused spread

  8. Information spreading dynamics in hypernetworks

    Science.gov (United States)

    Suo, Qi; Guo, Jin-Li; Shen, Ai-Zhong

    2018-04-01

    Contact patterns and spreading strategies fundamentally influence the spread of information. Current mathematical methods largely assume that contacts between individuals are fixed by networks. In fact, each individual is affected by all of his or her neighbors in different social relationships. Here, we develop a mathematical approach to depict the information spreading process in hypernetworks. Each individual is viewed as a node, and each social relationship containing the individual is viewed as a hyperedge. Based on the SIS epidemic model, we construct two spreading models. One model is based on global transmission, corresponding to the RP strategy. The other is based on local transmission, corresponding to the CP strategy. These models can degenerate into complex network models with a special parameter. Thus hypernetwork models extend the traditional models and are more realistic. Further, we discuss the impact on the models of parameters including the structure parameters of the hypernetwork, the spreading rate, the recovering rate and the information seed. The propagation time and the density of informed nodes reveal the overall trend of information dissemination. Comparing these two models, we find that there is no spreading threshold in RP, while there exists a spreading threshold in CP.
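
    A minimal simulation in the spirit of these models is sketched below: a discrete-time SIS process in which contacts are defined by hyperedges (social relationships). It is an illustration with arbitrary parameters, not the authors' analytical formulation, and it does not distinguish the RP and CP strategies beyond treating every hyperedge of an infected node as a transmission opportunity.

```python
import random

def sis_hypergraph(hyperedges, n_nodes, beta=0.1, gamma=0.05,
                   steps=200, seed_node=0, seed=1):
    """Discrete-time SIS spreading where contacts are defined by hyperedges.

    hyperedges: list of node sets; each set is one social relationship.
    An infected node tries to infect every other member of every hyperedge it
    belongs to with probability beta per step, and recovers with probability
    gamma.  Returns the density of informed (infected) nodes over time."""
    rng = random.Random(seed)
    membership = [[] for _ in range(n_nodes)]
    for e in hyperedges:
        for v in e:
            membership[v].append(e)
    infected, density = {seed_node}, []
    for _ in range(steps):
        new_inf, recovered = set(), set()
        for v in infected:
            for e in membership[v]:
                for u in e:
                    if u not in infected and rng.random() < beta:
                        new_inf.add(u)
            if rng.random() < gamma:
                recovered.add(v)
        infected = (infected | new_inf) - recovered
        density.append(len(infected) / n_nodes)
    return density

# Toy hypernetwork: 30 nodes, 10 random social groups of 5 members each.
grp_rng = random.Random(42)
edges = [set(grp_rng.sample(range(30), 5)) for _ in range(10)]
print("final density of informed nodes:", sis_hypergraph(edges, 30)[-1])
```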

  9. Performance analysis of wavelength/spatial coding system with fixed in-phase code matrices in OCDMA network

    Science.gov (United States)

    Tsai, Cheng-Mu; Liang, Tsair-Chun

    2011-12-01

    This paper proposes a wavelength/spatial (W/S) coding system with fixed in-phase code (FIPC) matrices in the optical code-division multiple-access (OCDMA) network. A scheme is presented to form the FIPC matrix which is applied to construct the W/S OCDMA network. The encoder/decoder in the W/S OCDMA network is fully able to eliminate the multiple-access interference (MAI) at the balanced photo-detectors (PD), according to the fixed in-phase cross correlation. The phase-induced intensity noise (PIIN), which scales with the square of the power, is markedly suppressed in the receiver by spreading the received power over each PD while the net signal power is kept the same. Simulation results show that the W/S OCDMA network based on the FIPC matrices can not only completely remove the MAI but also effectively suppress the PIIN to improve the network performance.

  10. Development of authentication code for multi-access optical code division multiplexing based quantum key distribution

    Science.gov (United States)

    Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.

    2018-05-01

    A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an Optical Code Division Multiplexing (OCDMA) based QKD network. A unique address assigned to each individual user, coupled with the diminishing probability of predicting the source of the qubit transmitted in the channel, offers an excellent security mechanism against any form of channel attack on an OCDMA-based QKD network. Flexibility in design, as well as the ease of modifying the number of users, are equally exceptional qualities presented by the code in contrast to the Optical Orthogonal Code (OOC) implemented earlier for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.

  11. Intercluster Connection in Cognitive Wireless Mesh Networks Based on Intelligent Network Coding

    Science.gov (United States)

    Chen, Xianfu; Zhao, Zhifeng; Jiang, Tao; Grace, David; Zhang, Honggang

    2009-12-01

    Cognitive wireless mesh networks have great flexibility to improve spectrum resource utilization; within them, secondary users (SUs) can opportunistically access the authorized frequency bands while complying with the interference constraint as well as the QoS (Quality-of-Service) requirement of primary users (PUs). In this paper, we consider the intercluster connection between neighboring clusters under the framework of cognitive wireless mesh networks. For collocated clusters, a data flow that includes the exchange of control channel messages usually needs four time slots in traditional relaying schemes, since all involved nodes operate in half-duplex mode, resulting in a significant loss of bandwidth efficiency. The situation is even worse at the gateway node connecting the two collocated clusters. A novel scheme based on network coding is proposed in this paper, which needs only two time slots to exchange the same amount of information mentioned above. Our simulation shows that the network coding-based intercluster connection has the advantage of higher bandwidth efficiency compared with the traditional strategy. Furthermore, how to choose an optimal relaying transmission power level at the gateway node in an environment of coexisting primary and secondary users is discussed. We present intelligent approaches based on reinforcement learning to solve the problem. Theoretical analysis and simulation results both show that the intelligent approaches can achieve optimal throughput for the intercluster relaying in the long run.
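
    The slot saving enabled by network coding can be illustrated with the classic XOR construction (a textbook sketch, not the exact protocol of the paper; how the two packets reach the gateway in the first phase depends on the access scheme):

```python
def two_way_relay_exchange(pkt_a: bytes, pkt_b: bytes):
    """Exchange of two equal-length packets through a gateway using XOR network
    coding.  Phase 1: both packets reach the gateway (scheme-dependent).
    Phase 2: the gateway broadcasts the XOR of the packets, and each side
    recovers the other's packet by XOR-ing with its own copy."""
    assert len(pkt_a) == len(pkt_b), "packets are padded to equal length"
    coded = bytes(a ^ b for a, b in zip(pkt_a, pkt_b))        # single broadcast
    at_a = bytes(c ^ a for c, a in zip(coded, pkt_a))         # A recovers B's packet
    at_b = bytes(c ^ b for c, b in zip(coded, pkt_b))         # B recovers A's packet
    return at_a, at_b

print(two_way_relay_exchange(b"CTRL-MSG-FROM-A!", b"CTRL-MSG-FROM-B!"))
```

    A store-and-forward gateway would instead forward each packet separately, which is where the extra time slots of the traditional scheme come from.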

  12. Premorbid neurocognitive functioning in schizophrenia spectrum disorder

    DEFF Research Database (Denmark)

    Sørensen, Holger Jelling; Mortensen, E.L.; Parnas, Josef

    2006-01-01

    ... in adolescence, the aim of the present prospective study was to examine whether low scores on Coding are associated with the risk of developing schizophrenia spectrum disorders. The 12 subtests of the WISC were administered to 311 children and adolescents with a mean age of 15.1 years (range: 8 to 20 years), and the diagnostic assessment (DSM-IIIR) was conducted by senior clinicians 25 years later. The group with schizophrenia spectrum disorder consisted of 84 individuals, and this group obtained significantly lower scores on Coding than nonschizophrenic controls. This difference could not be explained by differences in WISC IQ. Logistic regression analysis controlling for age at examination, gender, and social status yielded a significant, but relatively weak, association between low Coding test score and risk of schizophrenia spectrum disorder. For each unit increase in the Coding raw score, the adjusted odds ratio...

  13. Airship Sparse Array Antenna Radar Real Aperture Imaging Based on Compressed Sensing and Sparsity in Transform Domain

    Directory of Open Access Journals (Sweden)

    Li Liechen

    2016-02-01

    Full Text Available A conformal sparse array based on a combined Barker code is designed for an airship platform. The performance of the designed array, such as its signal-to-noise ratio, is analyzed. Using the hovering characteristics of the airship, an interferometry operation can be applied to the real aperture imaging results of two pulses, which eliminates the random backscatter phase and makes the image sparse in the transform domain. By building the relationship between the echo and the transform coefficients, Compressed Sensing (CS) theory can be introduced to solve the formulation and achieve imaging. The image quality of the proposed method can reach that of full-array imaging. The simulation results show the effectiveness of the proposed method.

  14. Modelling of fire spread in car parks

    NARCIS (Netherlands)

    Noordijk, L.M.; Lemaire, A.D.

    2005-01-01

    Currently, design codes assume that in a car park fire at most 3-4 vehicles are on fire at the same time. Recent incidents in car parks have drawn international attention to such assumptions and have raised questions as to the fire spreading mechanism and the resulting fire load on the structure.

  15. Spread effects - methodology

    International Nuclear Information System (INIS)

    2004-01-01

    Diffusion of technology, environmental effects and rebound effects are the principal effects from the funding of renewable energy and energy economising. It is difficult to estimate the impact of the spread effects both before the measures are implemented and after they are carried out. Statistical methods can be used to estimate the spread effects, but they are uncertain and always need to be complemented with qualitative and subjective evaluations. It is more adequate to evaluate potential spread effects from market surveillance and market data for a selection of technologies and parties. Based on this information, qualitative indicators for spread effects can be constructed and used both ex ante and ex post (ml)

  16. State of art in FE-based fuel performance codes

    International Nuclear Information System (INIS)

    Kim, Hyo Chan; Yang, Yong Sik; Kim, Dae Ho; Bang, Je Geon; Kim, Sun Ki; Koo, Yang Hyun

    2013-01-01

    The finite element (FE) method, a reliable and proven solution in the mechanical field, has been introduced into fuel performance codes for multidimensional analysis. The present state of the art in numerical simulation of FE-based fuel performance predominantly involves 2-D axisymmetric models and 3-D volumetric models. The FRAPCON and FRAPTRAN codes include 1.5-D and 2-D FE models to simulate PCMI and cladding ballooning. Among the 2-D codes, FALCON, developed by EPRI, is a 2-D (R-Z and R-θ) fully thermo-mechanically coupled steady-state and transient FE-based fuel behavior code. The French codes TOUTATIS and ALCYONE are 3-D and are typically used to investigate localized behavior. Since 2008, the Idaho National Laboratory (INL) has been developing a multidimensional (2-D and 3-D) nuclear fuel performance code called BISON. In this paper, the current state of FE-based fuel performance codes and their models is presented. Based on an investigation into these codes, the requirements for and direction of development of a new FE-based fuel performance code can be discussed. Based on a comparison of the models in FE-based fuel performance codes, the state of the art in the codes can be discussed. A new FE-based fuel performance code should include the typical pellet and cladding models that all codes share. In particular, specific pellet and cladding models such as gaseous swelling and high burnup structure (HBS) models should be developed to improve the accuracy of the code as well as to consider AC conditions. To reduce the computational cost, approximated gap and optimized contact models should also be developed. Nuclear fuel operates in an extreme environment that induces complex multiphysics phenomena, occurring over distances ranging from inter-atomic spacing to meters, and time scales ranging from microseconds to years. This multiphysics behavior is often tightly coupled, a well known example being the thermomechanical behavior. Adding to this complexity, important aspects of fuel behavior are inherently

  17. Incorporating Code-Based Software in an Introductory Statistics Course

    Science.gov (United States)

    Doehler, Kirsten; Taylor, Laura

    2015-01-01

    This article is based on the experiences of two statistics professors who have taught students to write and effectively utilize code-based software in a college-level introductory statistics course. Advantages of using software and code-based software in this context are discussed. Suggestions are made on how to ease students into using code with…

  18. Implementation of LT codes based on chaos

    International Nuclear Information System (INIS)

    Zhou Qian; Li Liang; Chen Zengqiang; Zhao Jiaxiang

    2008-01-01

    Fountain codes provide an efficient way to transfer information over erasure channels like the Internet. LT codes are the first codes fully realizing the digital fountain concept. They are asymptotically optimal rateless erasure codes with highly efficient encoding and decoding algorithms. In theory, for each encoding symbol of an LT code, its degree is randomly chosen according to a predetermined degree distribution, and the neighbours used to generate that encoding symbol are chosen uniformly at random. Practical implementations of LT codes usually realize the randomness through a pseudo-random number generator such as the linear congruential method. This paper applies the pseudo-randomness of chaotic sequences to the implementation of LT codes. Two Kent chaotic maps are used to determine the degree and neighbour(s) of each encoding symbol. It is shown that the implemented LT codes based on chaos perform better than LT codes implemented with a traditional pseudo-random number generator. (general)
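
    The following sketch shows how a chaotic map can stand in for the pseudo-random number generator in LT encoding: a Kent map drives both the degree draw (here from an ideal soliton distribution) and the neighbour selection. The map parameter, seed and degree distribution are illustrative assumptions, not the configuration used in the paper.

```python
def kent_map(x, m=0.7):
    """One iteration of the Kent (skew tent) chaotic map on (0, 1)."""
    return x / m if x < m else (1.0 - x) / (1.0 - m)

class ChaoticRNG:
    """Pseudo-random floats in (0, 1) driven by a Kent map (illustrative only)."""
    def __init__(self, x0=0.345, m=0.7):
        self.x, self.m = x0, m
    def random(self):
        self.x = kent_map(self.x, self.m)
        return self.x

def ideal_soliton_cdf(k):
    """CDF of the ideal soliton degree distribution over degrees 1..k."""
    probs = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    cdf, s = [], 0.0
    for p in probs:
        s += p
        cdf.append(s)
    return cdf

def lt_encode_symbol(source, rng, cdf):
    """One LT encoding symbol: draw a degree from the soliton CDF, then draw
    that many distinct neighbours, all randomness coming from the chaotic map."""
    k = len(source)
    u = rng.random()
    degree = next((d + 1 for d, c in enumerate(cdf) if u <= c), k)
    neighbours = set()
    while len(neighbours) < degree:
        neighbours.add(int(rng.random() * k) % k)
    value = 0
    for i in neighbours:
        value ^= source[i]          # XOR of the selected source symbols
    return sorted(neighbours), value

source = [0x13, 0x37, 0xAB, 0xCD, 0x42, 0x77, 0x01, 0xFE]   # 8 source symbols
rng, cdf = ChaoticRNG(), ideal_soliton_cdf(len(source))
for _ in range(3):
    print(lt_encode_symbol(source, rng, cdf))
```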

  19. Computationally Efficient Chaotic Spreading Sequence Selection for Asynchronous DS-CDMA

    Directory of Open Access Journals (Sweden)

    Litviņenko Anna

    2017-12-01

    Full Text Available The choice of the spreading sequence for asynchronous direct-sequence code-division multiple-access (DS-CDMA systems plays a crucial role for the mitigation of multiple-access interference. Considering the rich dynamics of chaotic sequences, their use for spreading allows overcoming the limitations of the classical spreading sequences. However, to ensure low cross-correlation between the sequences, careful selection must be performed. This paper presents a novel exhaustive search algorithm, which allows finding sets of chaotic spreading sequences of required length with a particularly low mutual cross-correlation. The efficiency of the search is verified by simulations, which show a significant advantage compared to non-selected chaotic sequences. Moreover, the impact of sequence length on the efficiency of the selection is studied.
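
    A simplified version of such a selection procedure is sketched below: candidate ±1 sequences are generated from a Chebyshev chaotic map and kept only if their maximum cyclic cross-correlation with all previously accepted sequences stays below a threshold. The map, sequence length and threshold are assumptions for illustration; the paper's exhaustive search differs in detail.

```python
import numpy as np

def chebyshev_sequence(x0, length, order=4):
    """Binary (+/-1) spreading sequence generated by a Chebyshev chaotic map."""
    seq, x = [], x0
    for _ in range(length):
        x = np.cos(order * np.arccos(x))     # Chebyshev map on [-1, 1]
        seq.append(1.0 if x >= 0 else -1.0)
    return np.array(seq)

def max_cross_corr(a, b):
    """Largest absolute normalized cross-correlation over all cyclic shifts."""
    n = len(a)
    return max(abs(np.dot(a, np.roll(b, k))) / n for k in range(n))

def select_sequences(candidates, n_wanted, threshold):
    """Greedy selection: keep a candidate only if its cross-correlation with
    every already selected sequence stays below the threshold."""
    selected = []
    for seq in candidates:
        if all(max_cross_corr(seq, s) < threshold for s in selected):
            selected.append(seq)
            if len(selected) == n_wanted:
                break
    return selected

# 200 candidate sequences of length 63 from different initial conditions.
candidates = [chebyshev_sequence(x0, 63) for x0 in np.linspace(0.01, 0.99, 200)]
chosen = select_sequences(candidates, n_wanted=4, threshold=0.3)
print(len(chosen), "sequences selected with mutual cross-correlation below 0.3")
```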

  20. Development of OCDMA system based on Flexible Cross Correlation (FCC) code with OFDM modulation

    Science.gov (United States)

    Aldhaibani, A. O.; Aljunid, S. A.; Anuar, M. S.; Arief, A. R.; Rashidi, C. B. M.

    2015-03-01

    The performance of OCDMA systems is governed by numerous quantitative parameters such as the data rate, the number of simultaneous users, the transmitter and receiver powers, and the type of code. This paper analyzes the performance of an OCDMA system using the OFDM technique to enhance the channel data rate, save power and increase the number of users compared with the previous hybrid subcarrier multiplexing/optical spectrum code division multiplexing (SCM/OSCDM) system. The average received signal-to-noise ratio (SNR) including the nonlinearity of the subcarriers is derived. The theoretical results have been evaluated in terms of BER and number of users as well as the amount of power saved. The proposed system gave better performance, saved around -6 dBm of power, and doubled the number of users compared to the SCM/OCDMA system. In addition, it is robust against interference and much more spectrally efficient than the SCM/OCDMA system. The system was designed based on the Flexible Cross Correlation (FCC) code, which offers easier construction, lower encoder/decoder complexity and a flexible in-phase cross-correlation, and is uncomplicated to implement using Fiber Bragg Gratings (FBGs) for OCDMA systems with any number of users and weights. The OCDMA-FCC_OFDM improves the number of users (cardinality) by 108% compared to the SCM/OCDMA-FCC system.

  1. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    Science.gov (United States)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder is used for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  2. Fast Computation of Pulse Height Spectra Using SGRD Code

    Directory of Open Access Journals (Sweden)

    Humbert Philippe

    2017-01-01

    Full Text Available SGRD (Spectroscopy, Gamma rays, Rapid, Deterministic) is a code used for fast calculation of the gamma ray spectrum produced by a spherical shielded source and measured by a detector. The photon source lines originate from the radioactive decay of the unstable isotopes. The emission rate and spectrum of these primary sources are calculated using the DARWIN code. The leakage spectrum is separated into two parts: the uncollided component is transported by ray-tracing and the scattered component is calculated using a multigroup discrete ordinates method. The pulse height spectrum is then simulated by folding the leakage spectrum with the detector response functions, which are pre-calculated using the MCNP5 code for each considered detector type. An application to the simulation of the gamma spectrum produced by a natural uranium ball coated with plexiglass and measured using a NaI detector is presented.
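
    The folding step can be illustrated as a matrix-vector product between a detector response matrix and the leakage spectrum. The sketch below uses a toy Gaussian-photopeak response (a real NaI response, as pre-calculated with MCNP5, would also include the Compton continuum and escape peaks) and two arbitrary gamma lines:

```python
import numpy as np

def gaussian_response_matrix(energies, fwhm_rel=0.07):
    """Toy detector response: each incident energy is broadened into a Gaussian
    photopeak only (no continuum or escape peaks, unlike a real NaI response)."""
    R = np.zeros((len(energies), len(energies)))
    for j, e in enumerate(energies):
        sigma = fwhm_rel * e / 2.355
        col = np.exp(-0.5 * ((energies - e) / sigma) ** 2)
        R[:, j] = col / col.sum()            # conserve counts per incident photon
    return R

E = np.linspace(50.0, 1200.0, 512)           # keV bins
leakage = 0.02 * np.exp(-E / 400.0)          # smooth scattered component
leakage[np.argmin(abs(E - 186.0))] += 5.0    # a discrete line, e.g. 186 keV (235U)
leakage[np.argmin(abs(E - 1001.0))] += 1.0   # and one at 1001 keV (234mPa)

pulse_height = gaussian_response_matrix(E) @ leakage
print("counts conserved by folding:", bool(np.isclose(pulse_height.sum(), leakage.sum())))
```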

  3. Interface requirements to couple thermal-hydraulic codes to severe accident codes: ATHLET-CD

    Energy Technology Data Exchange (ETDEWEB)

    Trambauer, K. [GRS, Garching (Germany)

    1997-07-01

    The system code ATHLET-CD is being developed by GRS in cooperation with IKE and IPSN. Its field of application comprises the whole spectrum of leaks and large breaks, as well as operational and abnormal transients for LWRs and VVERs. At present the analyses cover the in-vessel thermal-hydraulics, the early phases of core degradation, as well as fission products and aerosol release from the core and their transport in the Reactor Coolant System. The aim of the code development is to extend the simulation of core degradation up to failure of the reactor pressure vessel and to cover all physically reasonable accident sequences for western and eastern LWRs including RBMKs. The ATHLET-CD structure is highly modular in order to include a manifold spectrum of models and to offer an optimum basis for further development. The code consists of four general modules to describe the reactor coolant system thermal-hydraulics, the core degradation, the fission product core release, and fission product and aerosol transport. Each general module consists of some basic modules which correspond to the process to be simulated or to its specific purpose. Besides the code structure based on the physical modelling, the code follows four strictly separated steps during the course of a calculation: (1) input of structure, geometrical data, initial and boundary conditions, (2) initialization of derived quantities, (3) steady state calculation or input of restart data, and (4) transient calculation. In this paper, the transient solution method is briefly presented and the coupling methods are discussed. Three aspects have to be considered for the coupling of different modules in one code system. The first is the conservation of mass and energy in the different subsystems, namely the fluid, the structures, and the fission products and aerosols. The second is the convergence of the numerical solution and the stability of the calculation. The third aspect is related to code performance and running time.

  4. Non-Binary Protograph-Based LDPC Codes: Analysis,Enumerators and Designs

    OpenAIRE

    Sun, Yizeng

    2013-01-01

    Non-binary LDPC codes can outperform binary LDPC codes using sum-product algorithm with higher computation complexity. Non-binary LDPC codes based on protographs have the advantage of simple hardware architecture. In the first part of this thesis, we will use EXIT chart analysis to compute the thresholds of different protographs over GF(q). Based on threshold computation, some non-binary protograph-based LDPC codes are designed and their frame error rates are compared with binary LDPC codes. ...

  5. Trellis coding with Continuous Phase Modulation (CPM) for satellite-based land-mobile communications

    Science.gov (United States)

    1989-01-01

    This volume of the final report summarizes the results of our studies on the satellite-based mobile communications project. It includes: a detailed analysis, design, and simulations of trellis coded, full/partial response CPM signals with/without interleaving over various Rician fading channels; analysis and simulation of computational cutoff rates for coherent, noncoherent, and differential detection of CPM signals; optimization of the complete transmission system; analysis and simulation of power spectrum of the CPM signals; design and development of a class of Doppler frequency shift estimators; design and development of a symbol timing recovery circuit; and breadboard implementation of the transmission system. Studies prove the suitability of the CPM system for mobile communications.

  6. A Comparative Study between Codes of Spectrum for a Single Degree of Freedom (SDOF) System in Two Different Hazardous Regions

    Energy Technology Data Exchange (ETDEWEB)

    Pour, P Moradi; Noorzaei, J [Institute of Advanced Technology (ITMA), Universiti Putra Malaysia, 43400 UPM Serdang, Selangor Darul Ehsan (Malaysia); Karisiddappa [Vice Principal of Malnad College of Engineering, 573201 Hassan, Karnataka (India); Jaafar, M S, E-mail: jamal@eng.upm.edu.my [Department of Civil Engineering, Universiti Putra Malaysia, 43400 UPM Serdang, Selangor Darul Ehsan (Malaysia)

    2011-02-15

    Since the design of structures using the response spectrum method (RSM) is very important in structural and earthquake engineering, this study has been performed for a single degree of freedom (SDOF) system. Firstly, the concept and construction of the response spectrum are briefly explained. Then the records of some strong earthquakes in the USA and Iran, as two hazardous regions, are plotted; after that, selected response spectra (RS) of each country are compared with each other and finally with the standard response code of their own country. It was concluded that: 1) for a given ground motion, the response of a SDOF system depends only on its natural vibration period (T) and damping ratio ({zeta}); 2) when the effective damping ratio of a structure increases, its dynamic responses decrease, which favours the use of a higher damping ratio in the structure. The FORTRAN computer programme for solving Duhamel's integral has also been improved in this paper.
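
    A small numerical version of the response-spectrum construction is sketched below: Duhamel's integral gives the peak relative displacement of a SDOF oscillator for each natural period, and the pseudo-acceleration spectrum follows as Sa = wn^2 * Sd. The ground motion here is a synthetic placeholder, not one of the USA or Iran records used in the study.

```python
import numpy as np

def duhamel_peak_displacement(ag, dt, T, zeta):
    """Peak relative displacement of a SDOF oscillator (period T, damping ratio
    zeta) under ground acceleration ag, from Duhamel's integral evaluated by
    direct numerical integration (adequate for a demonstration)."""
    wn = 2.0 * np.pi / T
    wd = wn * np.sqrt(1.0 - zeta ** 2)
    t = np.arange(len(ag)) * dt
    u = np.zeros_like(t)
    for i, ti in enumerate(t):
        tau = t[: i + 1]
        h = np.exp(-zeta * wn * (ti - tau)) * np.sin(wd * (ti - tau)) / wd
        u[i] = -np.trapz(ag[: i + 1] * h, tau)
    return np.max(np.abs(u))

# Synthetic ground motion standing in for a recorded accelerogram.
dt, npts = 0.01, 2000
ag = np.random.default_rng(0).normal(0.0, 1.0, npts) * np.exp(-np.arange(npts) * dt / 5.0)

zeta = 0.05
periods = np.linspace(0.05, 3.0, 30)
Sa = [duhamel_peak_displacement(ag, dt, T, zeta) * (2 * np.pi / T) ** 2
      for T in periods]                    # pseudo-acceleration Sa = wn**2 * Sd
print([round(float(v), 3) for v in Sa[:5]])
```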

  7. Four year-olds use norm-based coding for face identity.

    Science.gov (United States)

    Jeffery, Linda; Read, Ainsley; Rhodes, Gillian

    2013-05-01

    Norm-based coding, in which faces are coded as deviations from an average face, is an efficient way of coding visual patterns that share a common structure and must be distinguished by subtle variations that define individuals. Adults and school-aged children use norm-based coding for face identity but it is not yet known if pre-school aged children also use norm-based coding. We reasoned that the transition to school could be critical in developing a norm-based system because school places new demands on children's face identification skills and substantially increases experience with faces. Consistent with this view, face identification performance improves steeply between ages 4 and 7. We used face identity aftereffects to test whether norm-based coding emerges between these ages. We found that 4 year-old children, like adults, showed larger face identity aftereffects for adaptors far from the average than for adaptors closer to the average, consistent with use of norm-based coding. We conclude that experience prior to age 4 is sufficient to develop a norm-based face-space and that failure to use norm-based coding cannot explain 4 year-old children's poor face identification skills. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Multifractal signal reconstruction based on singularity power spectrum

    International Nuclear Information System (INIS)

    Xiong, Gang; Yu, Wenxian; Xia, Wenxiang; Zhang, Shuning

    2016-01-01

    Highlights:
    • We propose a novel multifractal reconstruction method based on singularity power spectrum analysis (MFR-SPS).
    • The proposed MFR-SPS method has better power characteristics than the algorithm in Fraclab.
    • Further, the SPS-ISE algorithm performs better than the SPS-MFS algorithm.
    • Based on the proposed MFR-SPS method, we can reconstruct singularity white fractal noise (SWFN) and linear singularity modulation (LSM) multifractal signals, in an equivalent sense, similar to the linear frequency modulation (LFM) signal and WGN in the Fourier domain.

    Abstract: Fractal reconstruction (FR) and multifractal reconstruction (MFR) can be considered as the inverse problem of singularity spectrum analysis, and it is challenging to reconstruct a fractal signal in accord with a multifractal spectrum (MFS). Due to the multiple solutions of fractal reconstruction, the traditional methods of FR/MFR, such as the FBM-based method, the wavelet-based method and random wavelet series, fail to reconstruct the fractal signal deterministically; besides, those methods neglect the power spectral distribution in the singular domain. In this paper, we propose a novel MFR method based on the singularity power spectrum (SPS). Supposing a consistent uniform covering of the multifractal measurement, we control the traditional power law of each scale of the wavelet coefficients based on the instantaneous singularity exponents (ISE) or MFS, simultaneously control the singularity power law based on the SPS, and deduce the principle and algorithm of MFR based on SPS. Reconstruction simulation and error analysis of the estimated ISE, MFS and SPS show the effectiveness and the improvement of the proposed methods compared to those obtained by the Fraclab package.

  9. A synthetic method of solar spectrum based on LED

    Science.gov (United States)

    Wang, Ji-qiang; Su, Shi; Zhang, Guo-yu; Zhang, Jian

    2017-10-01

    A method for synthesizing the solar spectrum, based on the spectral characteristics of the solar spectrum and of LEDs and on the principle of arbitrary spectral synthesis, was studied using 14 kinds of LEDs with different central wavelengths. The LED and solar spectrum data were first selected with Origin software; then the total number of LEDs for each central band was calculated from the transformation relation between brightness and illumination and a least-squares curve fit in Matlab. Finally, the spectral curve of the AM1.5 standard solar spectrum was obtained. The results met the technical indexes of solar spectrum matching within ±20% and a solar constant >0.5.
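
    The synthesis step can be illustrated as a non-negative least-squares fit of weighted LED spectra to a target curve. In the sketch below the LED lines are idealized Gaussians and the target is a smooth placeholder standing in for the AM1.5 data; the 14 centre wavelengths are illustrative, not those of the actual LEDs.

```python
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(350.0, 1100.0, 751)            # nm grid

def led_spectrum(centre, fwhm=30.0):
    """Idealized LED emission modelled as a Gaussian line (real LED spectra are
    asymmetric; this is an assumption for the sketch)."""
    sigma = fwhm / 2.355
    s = np.exp(-0.5 * ((wavelengths - centre) / sigma) ** 2)
    return s / np.trapz(s, wavelengths)

centres = np.linspace(380.0, 1050.0, 14)                 # 14 illustrative centre wavelengths
A = np.column_stack([led_spectrum(c) for c in centres])

# Placeholder target curve standing in for the AM1.5 standard spectrum.
target = np.exp(-0.5 * ((wavelengths - 550.0) / 180.0) ** 2)

weights, _ = nnls(A, target)                              # non-negative LED drive weights
mismatch = np.max(np.abs(A @ weights - target)) / target.max()
print("peak mismatch relative to target maximum: %.1f%%" % (100 * mismatch))
```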

  10. Research on Single Base-Station Distance Estimation Algorithm in Quasi-GPS Ultrasonic Location System

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, X C; Su, S J; Wang, Y K; Du, J B [Instrument Department, College of Mechatronics Engineering and Automation, National University of Defense Technology, ChangSha, Hunan, 410073 (China)

    2006-10-15

    In order to identify each base-station in a quasi-GPS ultrasonic location system, a unique pseudo-random code is assigned to each base-station. This article primarily studies the distance estimation problem between an Autonomous Guide Vehicle (AGV) and a single base-station; the ultrasonic spread-spectrum distance measurement Time Delay Estimation (TDE) model is then established. Based on the above model, the envelope correlation fast TDE algorithm based on the FFT is presented and analyzed. Experiments show that when the m-sequence used in the received signal is the same as that of the reference signal, there will be a sharp correlation value in their envelope correlation function after they are processed by the above algorithm; otherwise, there will be no prominent correlation value. Thus, the AGV can identify each base-station easily.

  11. Research on Single Base-Station Distance Estimation Algorithm in Quasi-GPS Ultrasonic Location System

    International Nuclear Information System (INIS)

    Cheng, X C; Su, S J; Wang, Y K; Du, J B

    2006-01-01

    In order to identify each base-station in a quasi-GPS ultrasonic location system, a unique pseudo-random code is assigned to each base-station. This article primarily studies the distance estimation problem between an Autonomous Guide Vehicle (AGV) and a single base-station; the ultrasonic spread-spectrum distance measurement Time Delay Estimation (TDE) model is then established. Based on the above model, the envelope correlation fast TDE algorithm based on the FFT is presented and analyzed. Experiments show that when the m-sequence used in the received signal is the same as that of the reference signal, there will be a sharp correlation value in their envelope correlation function after they are processed by the above algorithm; otherwise, there will be no prominent correlation value. Thus, the AGV can identify each base-station easily.
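
    The envelope-correlation TDE described in these two records can be sketched as follows: a carrier burst phase-modulated by an m-sequence is cross-correlated with the received signal via the FFT, and the envelope of the correlation function peaks at the propagation delay. The sample rate, carrier frequency and chip rate below are illustrative values, not the hardware parameters of the system.

```python
import numpy as np
from scipy.signal import hilbert

fs, fc, chip_rate = 400e3, 40e3, 4e3         # sample, carrier and chip rates (Hz)

def mseq31():
    """31-chip m-sequence (+/-1) from a 5-stage LFSR (taps at stages 5 and 3)."""
    reg, out = [1, 0, 0, 1, 0], []
    for _ in range(31):
        out.append(1.0 if reg[-1] else -1.0)
        reg = [reg[-1] ^ reg[2]] + reg[:-1]
    return np.array(out)

def bpsk_burst(code):
    """Ultrasonic spread-spectrum burst: carrier phase-modulated by the code."""
    chips = np.repeat(code, int(fs / chip_rate))
    t = np.arange(len(chips)) / fs
    return chips * np.sin(2 * np.pi * fc * t)

ref = bpsk_burst(mseq31())
true_delay = 487                              # samples, i.e. about 1.2 ms of flight
rx = np.zeros(4096)
rx[true_delay:true_delay + len(ref)] = ref
rx += 0.5 * np.random.default_rng(3).normal(size=rx.size)   # additive noise

# FFT-based cross-correlation, then the envelope of the correlation function.
n = len(rx) + len(ref) - 1
corr = np.fft.irfft(np.fft.rfft(rx, n) * np.conj(np.fft.rfft(ref, n)), n)
envelope = np.abs(hilbert(corr))
print("estimated delay:", int(np.argmax(envelope)), "samples (true:", true_delay, ")")
```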

  12. Thermal-spectrum recriticality energetics

    International Nuclear Information System (INIS)

    Schwinkendorf, K.N.

    1993-12-01

    Large computer codes have been created in the past to predict the energy release in hypothetical core disruptive accidents (CDA), postulated to occur in liquid metal reactors (LMR). These codes, such as SIMMER, are highly specific to LMR designs. More recent attention has focused on thermal-spectrum criticality accidents, such as for fuel storage basins and waste tanks containing fissile material. This paper presents results from recent one-dimensional kinetics simulations, performed for a recriticality accident in a thermal spectrum. Reactivity insertion rates generally are smaller than in LMR CDAs, and the energetics generally are more benign. Parametric variation of input was performed, including reactivity insertion and initial temperature

  13. Experimental demonstration of 2.5 Gbit/S incoherent two-dimensional optical code division multiple access system

    International Nuclear Information System (INIS)

    Glesk, I.; Baby, V.; Bres, C.-S.; Xu, L.; Rand, D.; Prucnal, P.R.

    2004-01-01

    We demonstrated error-free operation of 4 simultaneous users in a fast frequency-hopping time-spreading optical code division multiple access system operating at 2.5 Gbit/s in a star architecture. The effective power penalty was ≤ 0.5 dB. A novel optical code division multiple access receiver based on a Terahertz Optical Asymmetric Demultiplexer was demonstrated to eliminate multiple access interference (Authors)

  14. Asynchronous, Decentralized DS-CDMA Using Feedback-Controlled Spreading Sequences for Time-Dispersive Channels

    Science.gov (United States)

    Miyatake, Teruhiko; Chiba, Kazuki; Hamamura, Masanori; Tachikawa, Shin'ichi

    We propose a novel asynchronous direct-sequence code-division multiple access (DS-CDMA) scheme using feedback-controlled spreading sequences (FCSSs) (FCSS/DS-CDMA). At the receiver of FCSS/DS-CDMA, the code-orthogonalizing filter (COF) produces a spreading sequence, and the receiver returns the spreading sequence to the transmitter. The transmitter then uses the spreading sequence as its updated version. The performance of FCSS/DS-CDMA is evaluated over time-dispersive channels. The results indicate that FCSS/DS-CDMA greatly suppresses both the intersymbol interference (ISI) and the multiple access interference (MAI) over time-invariant channels. FCSS/DS-CDMA is applicable to decentralized multiple access.

  15. The application of γ-scintigraphy for the evaluation of the relative spreading of suppository bases in rectal hard gelatin capsules

    International Nuclear Information System (INIS)

    Hardy, J.G.; Wood, E.; Feely, L.C.; Davis, S.S.

    1987-01-01

    The relative spreading of suppository base and incorporated suspension in the rectum of human subjects has been followed using the technique of γ-scintigraphy. Suppositories formulated from a surfactant system, Labrafil WL2514, and a standard triglyceride base, Witepsol H15, did not spread to a particularly great extent. When spreading did occur the movement of the base did not necessarily lead to a similar spreading of the suspended material. Such separation of suspended material from the base was greater for the surfactant system than for the simple triglyceride system. 10 refs.; 8 figs

  16. Energy efficiency in wireless communication systems

    Science.gov (United States)

    Caffrey, Michael Paul; Palmer, Joseph McRae

    2012-12-11

    Wireless communication systems and methods utilize one or more remote terminals, one or more base terminals, and a communication channel between the remote terminal(s) and base terminal(s). The remote terminal applies a direct sequence spreading code to a data signal at a spreading factor to provide a direct sequence spread spectrum (DSSS) signal. The DSSS signal is transmitted over the communication channel to the base terminal which can be configured to despread the received DSSS signal by a spreading factor matching the spreading factor utilized to spread the data signal. The remote terminal and base terminal can dynamically vary the matching spreading factors to adjust the data rate based on an estimation of operating quality over time between the remote terminal and base terminal such that the amount of data being transmitted is substantially maximized while providing a specified quality of service.
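
    A minimal sketch of the spreading and matched despreading operation follows; the adaptive part (raising or lowering the spreading factor from a link-quality estimate) is indicated only by a comment, and the code, spreading factor and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def spread(bits, sf):
    """Direct-sequence spreading: each data bit is multiplied by a +/-1 chip
    sequence of length sf (the spreading factor)."""
    code = rng.choice([-1.0, 1.0], size=sf)
    chips = np.repeat(2.0 * np.asarray(bits) - 1.0, sf) * np.tile(code, len(bits))
    return chips, code

def despread(chips, code):
    """Matched despreading: correlate each group of sf chips with the code."""
    sf = len(code)
    return (chips.reshape(-1, sf) @ code > 0).astype(int)

# Remote and base terminal agree on the spreading factor; a link-quality
# estimator would lower sf for a higher data rate on a good channel and raise
# it for more processing gain on a poor one.  Here one value is fixed.
sf = 16
bits = rng.integers(0, 2, 64)
tx, code = spread(bits, sf)
rx = tx + rng.normal(0.0, 1.5, tx.size)          # noisy channel
print("bit errors:", int(np.sum(despread(rx, code) != bits)))
```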

  17. WCGM. A gamma-spectrum analysis program rewritten in Windows

    International Nuclear Information System (INIS)

    Szekely, G.

    2009-01-01

    Complete text of publication follows.

    Introduction. The original code was written in Fortran in 1985 and it worked mostly in batch mode. Later the code was redesigned in Pascal and several graphics tools were added. This version (called PGM) is still used, but the limits of MSDOS (memory, graphics, filename length, etc.) make it more and more obsolete. For these reasons a redesign of the code was started in order to be able to use it on the most frequently used operating systems, which are nowadays Windows XP and Windows 7. This paper describes the present state of this work and shows some new ways of using the code. At the same time it invites the reader to visit the home page of the code in order to contribute to its further development.

    Data input. Probably one of the main reasons why the original DOS code is still used is that a lot of input formats are accepted. The following short names identify the currently available input spectrum formats: 'ASCII', 'Binary', 'I-format', 'Jyvaeskylae', 'K-format', 'MCA', 'MCAtxt', 'MSI', 'NBI', 'Ortec', 'Oxford/Nucleus', 'PCA9', 'Tukan'. The simplest ones (ASCII, Binary) contain only the counts of the gamma spectrum, but the more sophisticated ones (I-format, Tukan) also include data for the energy, resolution and efficiency calibrations. The user can choose different levels of interaction during the input process: one can put everything into a control data file and tell the code to use it, or one can manually find the input spectrum and the calibration data while running the code.

    Spectrum processing. After reading the input data the whole spectrum is shown in the main window. To find the peak positions one can use the 'Process | Automatic peak search' menu item to scan the displayed interval of the spectrum and mark the peaks selected by the built-in algorithm with black vertical lines. The user can then fine-tune this peak set by manually inserting or removing peaks, with the help of the right hand
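
    The idea of an automatic peak search can be illustrated with a very small routine that smooths the spectrum and flags local maxima that rise significantly above the neighbouring baseline. This is a generic sketch on a synthetic spectrum, not WCGM's built-in algorithm.

```python
import numpy as np

def find_peaks_simple(counts, window=5, significance=4.0):
    """Tiny automatic peak search: smooth the spectrum, then flag channels that
    are local maxima and exceed the nearby baseline by `significance` times the
    Poisson uncertainty.  A schematic of the idea, not any code's algorithm."""
    kernel = np.ones(window) / window
    smooth = np.convolve(counts, kernel, mode="same")
    peaks = []
    for i in range(window, len(counts) - window):
        baseline = 0.5 * (smooth[i - window] + smooth[i + window])
        if (smooth[i] > smooth[i - 1] and smooth[i] > smooth[i + 1]
                and smooth[i] - baseline > significance * np.sqrt(baseline + 1.0)):
            peaks.append(i)
    return peaks

# Synthetic spectrum: smooth background plus two Gaussian peaks.
ch = np.arange(1024)
model = 200.0 * np.exp(-ch / 400.0)
for centre, area in [(300, 3000.0), (700, 1500.0)]:
    model += area / (2.5 * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((ch - centre) / 2.5) ** 2)
spectrum = np.random.default_rng(5).poisson(model)
print("peak channels found:", find_peaks_simple(spectrum))
```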

  18. [Voxel-Based Morphometry in Autism Spectrum Disorder].

    Science.gov (United States)

    Yamasue, Hidenori

    2017-05-01

    Autism spectrum disorder shows deficits in social communication and interaction, including nonverbal communicative behaviors (e.g., eye contact, gestures, voice prosody, and facial expressions), and restricted and repetitive behaviors as its core symptoms. These core symptoms emerge as an atypical behavioral development in toddlers with the disorder. Atypical neural development is considered to be the neural underpinning of such behaviorally atypical development. A number of studies using voxel-based morphometry have already been conducted to compare regional brain volumes between individuals with autism spectrum disorder and those with typical development. Furthermore, more than ten papers employing meta-analyses of such comparisons using voxel-based morphometry between individuals with autism spectrum disorder and those with typical development have already been published. The current review paper adds some brief discussion of potential factors contributing to the inconsistency observed in the previous findings, such as the difficulty of controlling the confounding effects of different developmental phases among study participants.

  19. LiTrack: A fast longitudinal phase space tracking code with graphical user interface

    CERN Document Server

    Emma, Paul

    2005-01-01

    Many linear accelerators, such as linac-based light sources and linear colliders, apply longitudinal phase space manipulations in their design, including electron bunch compression and wakefield-induced energy spread control. Several computer codes handle such issues, but most require detailed information on the transverse focusing lattice. In fact, in most linear accelerators, the transverse distributions do not significantly affect the longitudinal ones, and can be ignored initially. This allows the use of a fast 2D code to study longitudinal aspects without time-consuming consideration of the transverse focusing. LiTrack is based on a 15-year-old code (same name) originally written by one of us (KB), and is now a MATLAB-based code with additional features, such as a graphical user interface and output plotting. The single-bunch tracking includes RF acceleration, bunch compression to 3rd order, geometric and resistive wakefields, aperture limits, synchrotron radiation, and flexible output plotting. The code w...
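
    The essence of such 2D longitudinal tracking can be sketched with macroparticles in (z, delta): an RF section imprints an energy chirp and a chicane with momentum compaction R56 converts it into compression. The beam energies, RF parameters and R56 below are placeholders, and wakefields, higher-order compression terms and synchrotron radiation, which LiTrack includes, are omitted.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 20000
z = rng.normal(0.0, 1.0e-3, n)           # longitudinal position (m)
delta = rng.normal(0.0, 1.0e-4, n)       # relative energy deviation
sigma_z0 = np.std(z)

# RF section: adds an energy chirp correlated with z (off-crest acceleration).
E0, eV, k_rf, phi = 150e6, 60e6, 2 * np.pi / 0.105, -25 * np.pi / 180
E = E0 * (1.0 + delta) + eV * np.cos(k_rf * z + phi)
delta = E / E.mean() - 1.0

# Chicane: path length depends on energy, z -> z + R56 * delta.
# R56 is a placeholder chosen to roughly cancel the chirp; sign conventions vary.
R56 = -0.12
z = z + R56 * delta

print("rms bunch length : %.2f mm -> %.2f mm" % (1e3 * sigma_z0, 1e3 * np.std(z)))
print("rms energy spread after chirp: %.2e" % np.std(delta))
```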

  20. Integrated burnup calculation code system SWAT

    International Nuclear Information System (INIS)

    Suyama, Kenya; Hirakawa, Naohiro; Iwasaki, Tomohiko.

    1997-11-01

    SWAT is an integrated burnup code system developed for the analysis of post-irradiation examinations, transmutation of radioactive waste, and burnup credit problems. It enables the burnup problem to be analyzed with a neutron spectrum that depends on the irradiation environment, by combining SRAC, the Japanese standard thermal reactor analysis code system, with ORIGEN2, a burnup code widely used all over the world. SWAT makes an effective cross section library based on the results of SRAC, and performs the burnup analysis with ORIGEN2 using that library. SRAC and ORIGEN2 can be called as external modules. SWAT has an original cross section library based on JENDL-3.2 and libraries of fission yield and decay data prepared from the JNDC FP Library, second version. Using these libraries, the user can use the latest data in SWAT calculations besides the effective cross sections prepared by SRAC. The user can also make an original ORIGEN2 library using the output file of SWAT. This report presents the concept and user's manual of SWAT. (author)

  1. Heart Sound Biometric System Based on Marginal Spectrum Analysis

    Science.gov (United States)

    Zhao, Zhidong; Shen, Qinqin; Ren, Fangqin

    2013-01-01

    This work presents a heart sound biometric system based on marginal spectrum analysis, which is a new feature extraction technique for identification purposes. This heart sound identification system is comprised of signal acquisition, pre-processing, feature extraction, training, and identification. Experiments on the selection of the optimal values for the system parameters are conducted. The results indicate that the new spectrum coefficients result in a significant increase in the recognition rate of 94.40% compared with that of the traditional Fourier spectrum (84.32%) based on a database of 280 heart sounds from 40 participants. PMID:23429515

  2. Obtaining a radiation beam poly energy using the code Penelope 2006

    International Nuclear Information System (INIS)

    Andrade, Lucio das Chagas; Peixoto, Jose Guilherme Pereira

    2013-01-01

    Obtaining an X-ray spectrum is not a very easy task; one of the techniques used is simulation by the Monte Carlo method. The Penelope code is based on this method and simulates the transport of particles such as electrons, positrons and photons in different media and materials. The 2003 and 2006 versions of this program show significant differences that facilitate the use of the code. The program allows the construction of the desired geometry and the definition of the simulation parameters. (author)

  3. Four Year-Olds Use Norm-Based Coding for Face Identity

    Science.gov (United States)

    Jeffery, Linda; Read, Ainsley; Rhodes, Gillian

    2013-01-01

    Norm-based coding, in which faces are coded as deviations from an average face, is an efficient way of coding visual patterns that share a common structure and must be distinguished by subtle variations that define individuals. Adults and school-aged children use norm-based coding for face identity but it is not yet known if pre-school aged…

  4. Development of a code for the isotopic analysis of Uranium

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J. H.; Kang, M. Y.; Kim, Jinhyeong; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of)

    2013-05-15

    To strengthen the national nuclear nonproliferation regime through the establishment of a nuclear forensic system, techniques for nuclear material analysis and the categorization of important domestic nuclear materials are being developed. MGAU and FRAM are commercial software packages for the isotopic analysis of Uranium using γ-spectroscopy, but the diversity of detection geometries and some effects - self-attenuation, coincidence summing, etc. - call for an analysis tool under continual improvement and modification. Hence, the development of another code for HPGe γ- and x-ray spectrum analysis was started in this study. The analysis of the 87-101 keV region of the Uranium spectrum is attempted based on isotopic responses similar to those developed in MGAU. The code for isotopic analysis of Uranium is started from a fitting.

  5. Realization of OSW/AWG-based bipolar wavelength time optical CDMA for wired wireless transmissions

    Science.gov (United States)

    Yen, Chih-Ta; Huang, Jen-Fa

    2009-01-01

    This study proposes a novel radio-over-fiber (RoF) system using a two-dimensional (2-D) optical code-division multiple-access (OCDMA) scheme with pseudorandom (PN) codes as the time-spreading and wavelength-hopping (t-spreading/λ-hopping) codes. The 2-D system is implemented using optical switches (OSWs) and arrayed-waveguide grating (AWG) routers. Constructing 2-D codes from bipolar PN codes rather than unipolar codes provides a significant increase in the maximum permissible number of active radio base stations (RBSs). In general, the phase-induced intensity noise (PIIN) generated at high optical intensities significantly degrades the performance of a conventional multi-wavelength scheme. However, the OSW-based time-spreading method employed in the current 2-D OCDMA scheme effectively suppresses the PIIN effect. Additionally, multiple-access interference (MAI) is suppressed by the use of a wavelength/time balanced detector structure in the network receivers. The numerical evaluation results demonstrate that under PIIN- and MAI-limited conditions, the proposed system outperforms a conventional multi-wavelength OCDMA scheme by using the spectral spreading scheme to suppress beating noise. In particular, the t-spreading encoder/decoder (codec) groups share the same wavelength codec, so the overall complexity is reduced and the network becomes more compact.

  6. Delay Estimation in Long-Code Asynchronous DS/CDMA Systems Using Multiple Antennas

    Directory of Open Access Journals (Sweden)

    Sirbu Marius

    2004-01-01

    Full Text Available The problem of propagation delay estimation in asynchronous long-code DS-CDMA multiuser systems is addressed. Almost all the methods proposed so far in the literature for propagation delay estimation are derived for short codes, and the knowledge of the codes is exploited by the estimators. In long-code CDMA, the spreading code is aperiodic and the methods developed for short codes may not be used or may increase the complexity significantly. For example, in the subspace-based estimators, the aperiodic nature of the code may require subspace tracking. In this paper we propose a novel method for simultaneous estimation of the propagation delays of several active users. A specific multiple-input multiple-output (MIMO) system model is constructed in a multiuser scenario. In such a model the channel matrix contains information about both the users' propagation delays and the channel impulse responses. Consequently, estimates of the delays are obtained as a by-product of the channel estimation task. The channel matrix has a special structure that is exploited in estimating the delays. The proposed delay estimation method lends itself to an adaptive implementation. Thus, it may be applied to joint channel and delay estimation in uplink DS-CDMA analogously to the method presented by the authors in 2003. The performance of the proposed method is studied in simulation using a realistic time-varying channel model and different SNR levels in the face of near-far effects, and using a low spreading factor (high data rates).

  7. A computer code for calculation of radioactive nuclide generation and depletion, decay heat and γ ray spectrum. FPGS90

    International Nuclear Information System (INIS)

    Ihara, Hitoshi; Katakura, Jun-ichi; Nakagawa, Tsuneo

    1995-11-01

    In a nuclear reactor, radioactive nuclides are generated and depleted with the burn-up of the nuclear fuel. The radioactive nuclides, emitting γ rays and β rays, act as the radioactive source of decay heat in a reactor and of radiation exposure. In the safety evaluation of nuclear reactors and the nuclear fuel cycle, it is necessary to estimate the number of nuclides generated in nuclear fuel under the various burn-up conditions of the many kinds of nuclear fuel used in a nuclear reactor. FPGS90 is a code that calculates the number of nuclides, the decay heat and the spectrum of γ rays emitted from fission products produced in nuclear fuel under various burn-up conditions. The nuclear data library used in the FPGS90 code is the 'JNDC Nuclear Data Library of Fission Products - second version -', compiled by a working group of the Japanese Nuclear Data Committee for evaluating decay heat in a reactor. The code has a function for processing so-called evaluated nuclear data files such as ENDF/B, JENDL, ENSDF and so on. It also has a function for making figures of the calculated results. Using the FPGS90 code it is possible to do all the work from building the library and calculating nuclide generation and decay heat through to producing figures of the calculated results. (author)
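
    The core of a generation/depletion calculation of the FPGS90 type is a system of coupled first-order build-up and decay (Bateman) equations. The following Python sketch solves such a system for a hypothetical three-member decay chain with a matrix exponential; the decay constants and initial inventory are made up, and fission yields, neutron capture and the JNDC library data are deliberately omitted.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-member chain A -> B -> C (stable); decay constants in 1/s.
lam_a, lam_b = 1.0e-3, 5.0e-4

# dN/dt = M N, where the off-diagonal terms feed each daughter from its parent's decay.
M = np.array([[-lam_a,  0.0,   0.0],
              [ lam_a, -lam_b, 0.0],
              [ 0.0,    lam_b, 0.0]])

N0 = np.array([1.0e20, 0.0, 0.0])      # initial number densities
for t in (0.0, 1.0e3, 1.0e4):          # seconds after shutdown
    N = expm(M * t) @ N0               # nuclide inventory at time t
    print(f"t = {t:8.0f} s  N = {N}")
```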

  8. A computer code for calculation of radioactive nuclide generation and depletion, decay heat and {gamma} ray spectrum. FPGS90

    Energy Technology Data Exchange (ETDEWEB)

    Ihara, Hitoshi; Katakura, Jun-ichi; Nakagawa, Tsuneo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1995-11-01

    In a nuclear reactor, radioactive nuclides are generated and depleted with the burn-up of the nuclear fuel. The radioactive nuclides, emitting γ rays and β rays, act as the radioactive source of decay heat in a reactor and of radiation exposure. In the safety evaluation of nuclear reactors and the nuclear fuel cycle, it is necessary to estimate the number of nuclides generated in nuclear fuel under the various burn-up conditions of the many kinds of nuclear fuel used in a nuclear reactor. FPGS90 is a code that calculates the number of nuclides, the decay heat and the spectrum of γ rays emitted from fission products produced in nuclear fuel under various burn-up conditions. The nuclear data library used in the FPGS90 code is the 'JNDC Nuclear Data Library of Fission Products - second version -', compiled by a working group of the Japanese Nuclear Data Committee for evaluating decay heat in a reactor. The code has a function for processing so-called evaluated nuclear data files such as ENDF/B, JENDL, ENSDF and so on. It also has a function for making figures of the calculated results. Using the FPGS90 code it is possible to do all the work from building the library and calculating nuclide generation and decay heat through to producing figures of the calculated results. (author)

  9. Coupling the severe accident code SCDAP with the system thermal hydraulic code MARS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jin; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2004-07-01

    MARS is a best-estimate system thermal hydraulics code with multi-dimensional modeling capability. One of the aims in MARS code development is to make it a multi-functional code system with the analysis capability to cover the entire accident spectrum. For this purpose, the MARS code has been coupled with a number of other specialized codes, such as CONTEMPT for containment analysis and MASTER for 3-dimensional kinetics. In this study, the SCDAP code has been coupled with MARS to endow the MARS code system with severe accident analysis capability. With SCDAP, the MARS code system has now acquired the capability to simulate such severe accident related phenomena as cladding oxidation, melting and slumping of fuel and reactor structures.

  10. Coupling the severe accident code SCDAP with the system thermal hydraulic code MARS

    International Nuclear Information System (INIS)

    Lee, Young Jin; Chung, Bub Dong

    2004-01-01

    MARS is a best-estimate system thermal hydraulics code with multi-dimensional modeling capability. One of the aims in MARS code development is to make it a multi-functional code system with the analysis capability to cover the entire accident spectrum. For this purpose, the MARS code has been coupled with a number of other specialized codes, such as CONTEMPT for containment analysis and MASTER for 3-dimensional kinetics. In this study, the SCDAP code has been coupled with MARS to endow the MARS code system with severe accident analysis capability. With SCDAP, the MARS code system has now acquired the capability to simulate such severe accident related phenomena as cladding oxidation, melting and slumping of fuel and reactor structures.

  11. Specialized Monte Carlo codes versus general-purpose Monte Carlo codes

    International Nuclear Information System (INIS)

    Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Lu, Xiaoyi

    2002-01-01

    The possibilities of Monte Carlo modeling for dose calculations and treatment optimization are quite limited in radiation oncology applications. The main reason is that the Monte Carlo technique for dose calculations is time consuming, while treatment planning may require hundreds of possible cases of dose simulations to be evaluated for dose optimization. The second reason is that general-purpose codes widely used in practice require an experienced user to customize them for calculations. This paper discusses the concept of Monte Carlo code design that can avoid the main problems that are preventing widespread use of this simulation technique in medical physics. (authors)

  12. Error-correction coding and decoding bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...

  13. An Ontology-Based Tourism Recommender System Based on Spreading Activation Model

    Science.gov (United States)

    Bahramian, Z.; Abbaspour, R. Ali

    2015-12-01

    A tourist has time and budget limitations; hence, he or she needs to select points of interest (POIs) optimally. Since the available information about POIs is overwhelming, it is difficult for a tourist to select the most appropriate ones given his or her preferences. In this paper, a new travel recommender system is proposed to overcome the information overload problem. A recommender system (RS) evaluates the overwhelming number of POIs and provides personalized recommendations to users based on their preferences. A content-based recommendation system is proposed, which uses the information about the user's preferences and the POIs and calculates a degree of similarity between them. It selects the POIs that have the highest similarity with the user's preferences. The proposed content-based recommender system is enhanced using ontological information about the tourism domain to represent both the user profile and the recommendable POIs. The proposed ontology-based recommendation process is performed in three steps: an ontology-based content analyzer, an ontology-based profile learner, and an ontology-based filtering component. The user's feedback adapts the user's preferences using a Spreading Activation (SA) strategy. The results show that the proposed recommender system is effective and improves the overall performance of traditional content-based recommender systems.
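
    As a rough illustration of the spreading-activation step (not the authors' implementation), the following Python sketch propagates a user's preference weights outward over a toy concept graph; the concept names, edge weights, decay factor and iteration count are all invented for the example.

```python
# Minimal spreading-activation sketch over a toy tourism concept graph.
# Node names, edge weights, decay factor and iteration count are illustrative.
graph = {
    "museum":        {"art_gallery": 0.8, "historic_site": 0.5},
    "art_gallery":   {"museum": 0.8},
    "historic_site": {"museum": 0.5, "old_town": 0.7},
    "old_town":      {"historic_site": 0.7, "cafe": 0.4},
    "cafe":          {"old_town": 0.4},
}

def spread_activation(seeds, decay=0.6, iterations=3):
    """Propagate user-preference activation from seed concepts outward."""
    activation = dict(seeds)
    for _ in range(iterations):
        new = dict(activation)
        for node, act in activation.items():
            for neighbour, weight in graph.get(node, {}).items():
                new[neighbour] = new.get(neighbour, 0.0) + act * weight * decay
        activation = new
    return sorted(activation.items(), key=lambda kv: -kv[1])

# A user who liked museums: concepts ranked by propagated activation.
print(spread_activation({"museum": 1.0}))
```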

  14. AN ONTOLOGY-BASED TOURISM RECOMMENDER SYSTEM BASED ON SPREADING ACTIVATION MODEL

    Directory of Open Access Journals (Sweden)

    Z. Bahramian

    2015-12-01

    Full Text Available A tourist has time and budget limitations; hence, he or she needs to select points of interest (POIs) optimally. Since the available information about POIs is overwhelming, it is difficult for a tourist to select the most appropriate ones given his or her preferences. In this paper, a new travel recommender system is proposed to overcome the information overload problem. A recommender system (RS) evaluates the overwhelming number of POIs and provides personalized recommendations to users based on their preferences. A content-based recommendation system is proposed, which uses the information about the user's preferences and the POIs and calculates a degree of similarity between them. It selects the POIs that have the highest similarity with the user's preferences. The proposed content-based recommender system is enhanced using ontological information about the tourism domain to represent both the user profile and the recommendable POIs. The proposed ontology-based recommendation process is performed in three steps: an ontology-based content analyzer, an ontology-based profile learner, and an ontology-based filtering component. The user's feedback adapts the user's preferences using a Spreading Activation (SA) strategy. The results show that the proposed recommender system is effective and improves the overall performance of traditional content-based recommender systems.

  15. Stability of the spreading in small-world network with predictive controller

    International Nuclear Information System (INIS)

    Bao, Z.J.; Jiang, Q.Y.; Yan, W.J.; Cao, Y.J.

    2010-01-01

    In this Letter, we apply the predictive control strategy to suppress the propagation of diseases or viruses in small-world networks. The stability of the small-world spreading model with a predictive controller is investigated. The sufficient and necessary stability condition is given, which is closely related to the controller parameters and the small-world rewiring probability p. Our simulations reveal that, with fixed predictive controller parameters, the spreading dynamics become more and more stable as p decreases from a larger value to a smaller one; suitable controller parameters can effectively suppress the spreading behavior even when p varies over the whole spectrum, whereas unsuitable controller parameters can lead to oscillation when p lies within a certain range.

  16. Upgrade of hybrid fibre coax networks towards bi-directional access

    NARCIS (Netherlands)

    Khoe, G.D.; Wolters, R.P.C.; Boom, van den H.P.A.; Prati, G.

    1997-01-01

    In this paper we describe an upgrade scenario for Hybrid Fibre Coax (HFC) CATV networks towards bi-directional access. The communication system described has been newly designed and is based on the use of Direct-Sequence Code Division Multiple-Access (DS-CDMA). Due to its spread-spectrum

  17. Reverse preferential spread in complex networks

    Science.gov (United States)

    Toyoizumi, Hiroshi; Tani, Seiichi; Miyoshi, Naoto; Okamoto, Yoshio

    2012-08-01

    Large-degree nodes may have a larger influence on the network, but they can be bottlenecks for spreading information since spreading attempts tend to concentrate on these nodes and become redundant. We discuss that the reverse preferential spread (distributing information inversely proportional to the degree of the receiving node) has an advantage over other spread mechanisms. In large uncorrelated networks, we show that the mean number of nodes that receive information under the reverse preferential spread is an upper bound among any other weight-based spread mechanisms, and this upper bound is indeed a logistic growth independent of the degree distribution.
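
    A minimal simulation sketch of the idea, assuming an informed node forwards the message to a neighbour chosen with weight proportional to the inverse of its degree; the graph model, contact process and parameters below are illustrative only, not those of the paper.

```python
import random
import networkx as nx

def reverse_preferential_target(g, sender):
    """Pick a neighbour with probability proportional to 1/degree."""
    neighbours = list(g.neighbors(sender))
    weights = [1.0 / g.degree(v) for v in neighbours]
    return random.choices(neighbours, weights=weights, k=1)[0]

random.seed(0)
g = nx.barabasi_albert_graph(1000, 3, seed=0)   # test graph (illustrative choice)
informed = {0}                                   # the seed node holds the information
for _ in range(3000):                            # spreading attempts
    sender = random.choice(tuple(informed))
    informed.add(reverse_preferential_target(g, sender))
print("nodes reached:", len(informed))
```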

  18. Comparison of Radiation Transport Codes, HZETRN, HETC and FLUKA, Using the 1956 Webber SPE Spectrum

    Science.gov (United States)

    Heinbockel, John H.; Slaba, Tony C.; Blattnig, Steve R.; Tripathi, Ram K.; Townsend, Lawrence W.; Handler, Thomas; Gabriel, Tony A.; Pinsky, Lawrence S.; Reddell, Brandon; Clowdsley, Martha S.

    2009-01-01

    Protection of astronauts and instrumentation from galactic cosmic rays (GCR) and solar particle events (SPE) in the harsh environment of space is of prime importance in the design of personal shielding, spacecraft, and mission planning. Early entry of radiation constraints into the design process enables optimal shielding strategies, but demands efficient and accurate tools that can be used by design engineers in every phase of an evolving space project. The radiation transport code HZETRN is an efficient tool for analyzing the shielding effectiveness of materials exposed to space radiation. In this paper, HZETRN is compared to the Monte Carlo codes HETC-HEDS and FLUKA for a shield/target configuration comprised of a 20 g/cm^2 aluminum slab in front of a 30 g/cm^2 slab of water exposed to the February 1956 SPE, as modeled by the Webber spectrum. Neutron and proton fluence spectra, as well as dose and dose equivalent values, are compared at various depths in the water target. This study shows that there are many regions where HZETRN agrees with both HETC-HEDS and FLUKA for this shield/target configuration and the SPE environment. However, there are also regions where there are appreciable differences between the three computer codes.

  19. Efficient pseudorandom generators based on the DDH assumption

    NARCIS (Netherlands)

    Rezaeian Farashahi, R.; Schoenmakers, B.; Sidorenko, A.; Okamoto, T.; Wang, X.

    2007-01-01

    A family of pseudorandom generators based on the decisional Diffie-Hellman assumption is proposed. The new construction is a modified and generalized version of the Dual Elliptic Curve generator proposed by Barker and Kelsey. Although the original Dual Elliptic Curve generator is shown to be

  20. The DIT nuclear fuel assembly physics design code

    International Nuclear Information System (INIS)

    Jonsson, A.

    1988-01-01

    The DIT code is the Combustion Engineering, Inc. (C-E) nuclear fuel assembly design code. It belongs to a class of codes, all similar in structure and strategy, that may be characterized by the spectrum and spatial calculations being performed in two dimensions and in a single job step for the entire assembly. The forerunner of this class of codes is the United Kingdom Atomic Energy Authority WIMS code, the first version of which was completed 25 yr ago. The structure and strategy of assembly spectrum codes have remained remarkably similar to the original concept thus proving its usefulness. As other organizations, including C-E, have developed their own versions of the concept, many important variations have been added that significantly influence the accuracy and performance of the resulting computational tool. Those features, which are unique to the DIT code and which might be of interest to the community of fuel assembly physics design code users and developers, are described and discussed

  1. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    Science.gov (United States)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on trellis representations of linear block codes, by contrast, remained inactive for many years. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have a simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and
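
    As a concrete reminder of what trellis-based MLD looks like in the convolutional case, the following Python sketch implements hard-decision Viterbi decoding of the standard rate-1/2, constraint-length-3 code with generators (7, 5) in octal; it is a textbook-style example, not code from the book.

```python
# Rate-1/2, constraint-length-3 convolutional code, generators (7, 5) octal.
G = (0b111, 0b101)

def encode(bits):
    state, out = 0, []                       # state = two most recent input bits
    for u in bits:
        reg = (u << 2) | state
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = (reg >> 1) & 0b11
    return out

def viterbi_decode(received, n_bits):
    INF = float("inf")
    metric = [0.0, INF, INF, INF]            # path metric per trellis state
    paths = [[], [], [], []]
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metric, new_paths = [INF] * 4, [[]] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for u in (0, 1):
                reg = (u << 2) | s
                expect = [bin(reg & g).count("1") % 2 for g in G]
                branch = sum(a != b for a, b in zip(expect, r))   # Hamming metric
                ns = (reg >> 1) & 0b11                            # next state
                if metric[s] + branch < new_metric[ns]:
                    new_metric[ns] = metric[s] + branch
                    new_paths[ns] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    best = min(range(4), key=lambda s: metric[s])                 # best surviving path
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg)
coded[3] ^= 1                                 # inject a single channel error
print(viterbi_decode(coded, len(msg)) == msg) # True: the error is corrected
```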

  2. Dengue fever spreading based on probabilistic cellular automata with two lattices

    Science.gov (United States)

    Pereira, F. M. M.; Schimit, P. H. T.

    2018-06-01

    Modeling and simulation of mosquito-borne diseases have gained attention due to a growing incidence in tropical countries in the past few years. Here, we study dengue spreading in a population modeled by cellular automata, where two lattices model the human-mosquito interaction: one lattice for human individuals and one lattice for mosquitoes, in order to enable different dynamics in the two populations. The disease considered is dengue fever with one, two or three different serotypes coexisting in the population. Although many regions exhibit the incidence of only one serotype, here we set up a complete framework to also study the occurrence of two and three serotypes at the same time in a population. Furthermore, the flexibility of the model allows its use for other mosquito-borne diseases, like chikungunya, yellow fever and malaria. An approximation of the cellular automata is proposed in terms of ordinary differential equations; the spreading of mosquitoes is studied and the influence of some model parameters is analyzed with numerical simulations. Finally, a method to combat dengue spreading is simulated based on a reduction of mosquito births and mosquito bites in the population.
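
    A minimal sketch of the two-lattice idea, reduced to a single serotype: one lattice holds human S/I/R states, the other holds mosquito S/I states, and per-step infection and recovery probabilities couple them through the local neighbourhood. The lattice size, probabilities and boundary rules below are illustrative assumptions, not the parameters of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 50                                    # lattice side; all parameters illustrative
S, I, R = 0, 1, 2                         # human states (single serotype for brevity)

humans = np.zeros((L, L), dtype=int)      # all humans susceptible...
mosqs = np.zeros((L, L), dtype=int)       # ...and all mosquitoes susceptible
humans[L // 2, L // 2] = I                # except one index case

p_h2m, p_m2h, p_recover = 0.3, 0.3, 0.1   # per-step transition probabilities

def infected_neighbourhood(grid, i, j):
    """Any infected cell in the 3x3 Moore neighbourhood (toroidal boundaries)?"""
    block = grid[np.ix_([(i - 1) % L, i, (i + 1) % L], [(j - 1) % L, j, (j + 1) % L])]
    return np.any(block == 1)

for step in range(100):
    new_h, new_m = humans.copy(), mosqs.copy()
    for i in range(L):
        for j in range(L):
            if mosqs[i, j] == 0 and humans[i, j] == I and rng.random() < p_h2m:
                new_m[i, j] = 1           # mosquito bites an infectious human
            if humans[i, j] == S and infected_neighbourhood(mosqs, i, j) \
                    and rng.random() < p_m2h:
                new_h[i, j] = I           # human bitten by a nearby infected mosquito
            if humans[i, j] == I and rng.random() < p_recover:
                new_h[i, j] = R
    humans, mosqs = new_h, new_m
print("infected humans after 100 steps:", int(np.sum(humans == I)))
```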

  3. Mesh-based parallel code coupling interface

    Energy Technology Data Exchange (ETDEWEB)

    Wolf, K.; Steckel, B. (eds.) [GMD - Forschungszentrum Informationstechnik GmbH, St. Augustin (DE). Inst. fuer Algorithmen und Wissenschaftliches Rechnen (SCAI)

    2001-04-01

    MpCCI (mesh-based parallel code coupling interface) is an interface for multidisciplinary simulations. It provides industrial end-users as well as commercial code-owners with the facility to combine different simulation tools in one environment, thereby creating new solutions for multidisciplinary problems and opening new application dimensions for existing simulation tools. This Book of Abstracts gives a short overview of ongoing activities in industry and research - all presented at the 2nd MpCCI User Forum in February 2001 at GMD Sankt Augustin. (orig.) [German] MpCCI (mesh-based parallel code coupling interface) defines an interface for multidisciplinary simulation applications. With MpCCI, industrial users as well as commercial software vendors are given the possibility to couple simulation tools from different disciplines. This creates new solutions for multidisciplinary problems and opens up new fields of application for established simulation tools. This Book of Abstracts provides an overview of work currently in progress in industry and research, presented at the 2nd MpCCI User Forum in February 2001 at GMD Sankt Augustin. (orig.)

  4. Neisseria gonorrhoeae strain with reduced susceptibilities to extended-spectrum cephalosporins.

    Science.gov (United States)

    Nguyen, Duylinh; Gose, Severin; Castro, Lina; Chung, Kathleen; Bernstein, Kyle; Samuel, Micheal; Bauer, Heidi; Pandori, Mark

    2014-07-01

    The spread of Neisseria gonorrhoeae strains with reduced susceptibility to extended-spectrum cephalosporins is an increasing public health threat. Using Etest and multiantigen sequence typing, we detected sequence type 1407, which is associated with reduced susceptibilities to extended-spectrum cephalosporins, in 4 major populated regions in California, USA, in 2012.

  5. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction-based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
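
    The parity-measurement idea behind these codes can be illustrated numerically: a single photon-loss event changes the photon number by one and therefore flips the photon-number parity of an even-parity code state. The numpy sketch below shows this for a toy even-parity state in a truncated Fock space; it is not the χ(2) code construction itself, and the state and truncation dimension are chosen only for illustration.

```python
import numpy as np

dim = 12                                          # Fock-space truncation (illustrative)
n = np.arange(dim)
parity = np.diag((-1.0) ** n)                     # photon-number parity operator
a = np.diag(np.sqrt(n[1:]), k=1)                  # annihilation operator: a|n> = sqrt(n)|n-1>

# Toy even-parity state: an equal superposition of |0> and |4>.
psi = np.zeros(dim)
psi[0] = psi[4] = 1.0
psi /= np.linalg.norm(psi)

def parity_expectation(state):
    return float(state @ parity @ state)

print("parity before loss:", parity_expectation(psi))    # +1 (even)

lost = a @ psi                                    # one photon-loss event
lost /= np.linalg.norm(lost)
print("parity after loss: ", parity_expectation(lost))   # -1 flags the error
```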

  6. Analyses of corium spreading in Mark I containment geometry

    International Nuclear Information System (INIS)

    Sienicki, J.J.; Chu, C.C.; Farmer, M.T.

    1991-01-01

    An assessment of melt spreading in the Mark I system has been carried out using the MELTSPREAD-1 computer code together with supporting analyses. Application of MELTSPREAD-1 confirms the calculation of shell survival in a wet containment for the most probable melt release conditions from NUREG/CR-5423. According to MELTSPREAD-1, a dry containment also may not be threatened by melt spreading. This reflects the heat losses undergone by the melt while spreading toward the shell, which were conservatively neglected in NUREG/CR-5423. However, there exist parameter ranges outside the most probable set where shell failure may be calculated. Accounting for the breakup and quenching of melt relocating through a deep layer of subcooled water, also conservatively neglected in NUREG/CR-5423, can reduce the set of parameter variations for which containment failure is calculated in the wet case.

  7. Development of System Based Code: Case Study of Life-Cycle Margin Evaluation

    International Nuclear Information System (INIS)

    Tai Asayama; Masaki Morishita; Masanori Tashimo

    2006-01-01

    For a leap of progress in the structural design of nuclear plant components, the late Professor Emeritus Yasuhide Asada proposed the System Based Code. The key concepts of the System Based Code are: (1) life-cycle margin optimization, (2) expansion of technical options as well as combinations of technical options beyond the current codes and standards, and (3) designing to clearly defined target reliabilities. These concepts are very new to most nuclear power plant designers, who are naturally obliged to design to current codes and standards; the application of the concepts of the System Based Code to design will lead to an entire change of the practices that designers have long been accustomed to. On the other hand, experienced designers are supposed to have expertise that can support and accelerate the development of the System Based Code. Therefore, interfacing with experienced designers is of crucial importance for the development of the System Based Code. The authors conducted a survey on the acceptability of the System Based Code concept. The results were analyzed with regard to the possibility of improving structural design in terms of both reliability and cost effectiveness through the introduction of the System Based Code concept. It was concluded that the System Based Code is beneficial for those purposes. Also described is the expertise elicited from the results of the survey that can be reflected in the development of the System Based Code. (authors)

  8. COSINE software development based on code generation technology

    International Nuclear Information System (INIS)

    Ren Hao; Mo Wentao; Liu Shuo; Zhao Guang

    2013-01-01

    Code generation technology can significantly improve the quality and productivity of software development and reduce software development risk. At present, code generators are usually based on UML model-driven technology, which cannot satisfy the development demands of nuclear power calculation software. The features of scientific computing programs were analyzed, and a FORTRAN code generator (FCG) based on C# was developed in this paper. FCG can generate module variable definition FORTRAN code automatically according to input metadata. FCG can also generate memory allocation interfaces for dynamic variables as well as data access interfaces. FCG was applied to the development of the core and system integrated engine for design and analysis (COSINE) software. The result shows that FCG can greatly improve the development efficiency of nuclear power calculation software and reduce the defect rate of software development. (authors)
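
    FCG itself is written in C#; purely to illustrate the idea of generating FORTRAN declaration and allocation code from variable metadata, here is a small Python sketch with an invented metadata format and invented variable names.

```python
# Invented metadata: (name, fortran_type, rank) triples describing module variables.
variables = [
    ("power_density", "real(8)", 2),
    ("node_temp",     "real(8)", 1),
    ("n_nodes",       "integer", 0),
]

def declaration(name, ftype, rank):
    """Emit a FORTRAN declaration; arrays are declared allocatable."""
    if rank == 0:
        return f"{ftype} :: {name}"
    dims = ",".join([":"] * rank)
    return f"{ftype}, allocatable, dimension({dims}) :: {name}"

def allocation(name, rank):
    """Emit an allocate statement with placeholder extent names n1, n2, ..."""
    dims = ",".join(f"n{i + 1}" for i in range(rank))
    return f"allocate({name}({dims}))"

lines = ["module generated_vars", "  implicit none"]
lines += ["  " + declaration(n, t, r) for n, t, r in variables]
lines += ["end module generated_vars"]
lines += [allocation(n, r) for n, _, r in variables if r > 0]
print("\n".join(lines))
```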

  9. Premorbid neurocognitive functioning in schizophrenia spectrum disorder

    DEFF Research Database (Denmark)

    Sørensen, Holger J; Mortensen, Erik L; Parnas, Josef

    2006-01-01

    in adolescence, the aim of the present prospective study was to examine whether low scores on Coding are associated with the risk of developing schizophrenia spectrum disorders. The 12 subtests of the WISC were administered to 311 children and adolescents with a mean age of 15.1 years (range: 8 to 20 years...... was 0.97 (95% CI 0.94-1.00) (p = .022), and the risk of schizophrenia spectrum disorder decreased by 3% (95% CI 6 to 0%). The Coding deficit on the WISC may indicate deficits in perceptual motor speed or in working memory processing speed in young individuals who later develop schizophrenia, schizotypal...... personality disorder, or other disorders within the schizophrenia spectrum....

  10. Spread-out Bragg peak and monitor units calculation with the Monte Carlo Code MCNPX

    International Nuclear Information System (INIS)

    Herault, J.; Iborra, N.; Serrano, B.; Chauvel, P.

    2007-01-01

    The aim of this work was to study the dosimetric potential of the Monte Carlo code MCNPX applied to the protontherapy field. For a series of clinical configurations, a comparison between simulated and experimental data was carried out using the proton beam line of the MEDICYC isochronous cyclotron installed in the Centre Antoine Lacassagne in Nice. The dosimetric quantities tested were depth-dose distributions, output factors, and monitor units. For each parameter, the simulation reproduced the experiment accurately, which attests to the quality of the choices made both in the geometrical description and in the physics parameters for beam definition. These encouraging results enable us today to consider a simplification of quality control measurements in the future. Monitor unit calculation is planned to be carried out with pre-established Monte Carlo simulation data. The measurement, which was until now our main patient dose calibration system, will be progressively replaced by computation based on the MCNPX code. This determination of monitor units will be controlled by an independent semi-empirical calculation

  11. Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes.

    Science.gov (United States)

    Kuekes, Philip J; Robinett, Warren; Roth, Ron M; Seroussi, Gadiel; Snider, Gregory S; Stanley Williams, R

    2006-02-28

    The voltage margin of a resistor-logic demultiplexer can be improved significantly by basing its connection pattern on a constant-weight code. Each distinct code determines a unique demultiplexer, and therefore a large family of circuits is defined. We consider using these demultiplexers for building nanoscale crossbar memories, and determine the voltage margin of the memory system based on a particular code. We determine a purely code-theoretic criterion for selecting codes that will yield memories with large voltage margins, which is to minimize the ratio of the maximum to the minimum Hamming distance between distinct codewords. For the specific example of a 64 × 64 crossbar, we discuss what codes provide optimal performance for a memory.
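
    The selection criterion stated above, minimizing the ratio of the maximum to the minimum Hamming distance over distinct codewords, is straightforward to evaluate; the Python sketch below does so for a toy constant-weight code (all weight-3 words of length 6), which is chosen only for illustration and is not the code family discussed in the paper.

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance between two equal-length binary tuples."""
    return sum(x != y for x, y in zip(a, b))

def max_min_distance_ratio(code):
    """Ratio of the maximum to the minimum pairwise Hamming distance."""
    dists = [hamming(a, b) for a, b in combinations(code, 2)]
    return max(dists) / min(dists)

# Toy constant-weight code: every length-6, weight-3 word.
n, w = 6, 3
code = [tuple(1 if i in ones else 0 for i in range(n))
        for ones in combinations(range(n), w)]
print(len(code), "codewords; max/min Hamming distance ratio =",
      max_min_distance_ratio(code))
```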

  12. The Dit nuclear fuel assembly physics design code

    International Nuclear Information System (INIS)

    Jonsson, A.

    1987-01-01

    DIT is the Combustion Engineering, Inc. (C-E) nuclear fuel assembly design code. It belongs to a class of codes, all similar in structure and strategy, which may be characterized by the spectrum and spatial calculations being performed in 2D and in a single job step for the entire assembly. The forerunner of this class of codes is the U.K.A.E.A. WIMS code, the first version of which was completed 25 years ago. The structure and strategy of assembly spectrum codes have remained remarkably similar to the original concept thus proving its usefulness. As other organizations, including C-E, have developed their own versions of the concept, many important variations have been added which significantly influence the accuracy and performance of the resulting computational tool. This paper describes and discusses those features which are unique to the DIT code and which might be of interest to the community of fuel assembly physics design code users and developers

  13. Corium spreading issue; Le corium et son etalement

    Energy Technology Data Exchange (ETDEWEB)

    Cognet, G.; Brayer, C.; Cranga, M.; Journeau, C.; Laffont, G.; Splinder, B.; Veteau, J.M. [CEA Grenoble, Dept. de Thermohydraulique et de Physique (DPT), 38 (France)

    1999-07-01

    Safety is one of the major issues for nuclear power plants; its improvement is a constant R and D focus for the CEA. In the event of a highly unlikely core melt-down accident in light water reactors, the Safety Authorities of several EU countries have requested the industries and utilities to consider severe accidents with reactor pressure vessel failure in the design of the next generation of nuclear power plants. The objective is to preserve the integrity of the containment as the main barrier against fission product release to the environment. This can only be achieved if the core melt mixture (called corium, essentially composed of UO2, ZrO2, Zr, Fe and fission products) is stabilized before it can penetrate the basement. Consequently, various core-catcher concepts are under investigation for future reactors in order to prevent basement erosion and to stabilize and control the corium within the containment. In particular, in the EPR (European Pressurized Reactor) core-catcher concept, the corium is mixed with a special concrete and the molten mixture is spread over a large multi-layer surface cooled from the bottom, with subsequent cooling by flooding with water. Therefore, melt spreading requires intensive investigation in order to determine and quantify the key phenomena which govern the spreading. For some years now, the Nuclear Reactor Division of the Atomic Energy Commission (CEA/DRN) has been conducting a large program to improve knowledge of corium behaviour and coolability. This program is based on experimental (with simulant and prototypic materials) and theoretical investigations, which are finally gathered into scenario and mechanistic computer codes. Within this framework, a large part is currently devoted to the study of corium spreading. After a reminder of the general objectives and a description of the DRN approach and facilities, this paper presents the most important results. (authors)

  14. Assessment of US NRC fuel rod behavior codes to extended burnup

    International Nuclear Information System (INIS)

    Laats, E.T.; Croucher, D.W.; Haggag, F.M.

    1982-01-01

    The purpose of this paper is to report the status of assessing the capabilities of the NRC fuel rod performance codes for calculating extended burnup rod behavior. As part of this effort, a large spectrum of fuel rod behavior phenomena was examined, and the phenomena deemed influential during extended burnup operation were identified. Then, the experimental data base addressing these identified phenomena was examined for availability and completeness at extended burnups. The calculational capabilities of the NRC's steady state FRAPCON-2 and transient FRAP-T6 fuel rod behavior codes were examined for each of the identified phenomena. Parameters calculated by the codes were compared with the available data base, and judgments were made regarding model performance. Overall, the FRAPCON-2 code was found to be moderately well assessed to extended burnups, but the FRAP-T6 code cannot be adequately assessed until more transient high burnup data are available

  15. Coding Transparency in Object-Based Video

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2006-01-01

    A novel algorithm for coding gray level alpha planes in object-based video is presented. The scheme is based on segmentation in multiple layers. Different coders are specifically designed for each layer. In order to reduce the bit rate, cross-layer redundancies as well as temporal correlation are...

  16. A Framework for Reverse Engineering Large C++ Code Bases

    NARCIS (Netherlands)

    Telea, Alexandru; Byelas, Heorhiy; Voinea, Lucian

    2009-01-01

    When assessing the quality and maintainability of large C++ code bases, tools are needed for extracting several facts from the source code, such as: architecture, structure, code smells, and quality metrics. Moreover, these facts should be presented in such ways so that one can correlate them and

  17. A Framework for Reverse Engineering Large C++ Code Bases

    NARCIS (Netherlands)

    Telea, Alexandru; Byelas, Heorhiy; Voinea, Lucian

    2008-01-01

    When assessing the quality and maintainability of large C++ code bases, tools are needed for extracting several facts from the source code, such as: architecture, structure, code smells, and quality metrics. Moreover, these facts should be presented in such ways so that one can correlate them and

  18. Evaluating crown fire rate of spread predictions from physics-based models

    Science.gov (United States)

    C. M. Hoffman; J. Ziegler; J. Canfield; R. R. Linn; W. Mell; C. H. Sieg; F. Pimont

    2015-01-01

    Modeling the behavior of crown fires is challenging due to the complex set of coupled processes that drive the characteristics of a spreading wildfire and the large range of spatial and temporal scales over which these processes occur. Detailed physics-based modeling approaches such as FIRETEC and the Wildland Urban Interface Fire Dynamics Simulator (WFDS) simulate...

  19. Information spreading in Delay Tolerant Networks based on nodes' behaviors

    Science.gov (United States)

    Wu, Yahui; Deng, Su; Huang, Hongbin

    2014-07-01

    Information spreading in DTNs (Delay Tolerant Networks) adopts a store-carry-forward method, and nodes receive the message directly from others. However, it is hard to judge whether the information is safe in this communication mode. In this case, a node may observe other nodes' behaviors. At present, there is no theoretical model to describe how a node's trusting level varies. In addition, due to the uncertainty of connectivity in a DTN, it is hard for a node to obtain the global state of the network. Therefore, a reasonable model of a node's trusting level should be a function of the node's own observations. For example, if a node finds k nodes carrying a message, it may trust the information with probability p(k). This paper does not explore the real distribution of p(k), but instead presents a unifying theoretical framework to evaluate the performance of information spreading in the above case. This framework is an extension of the traditional SI (susceptible-infected) model, and is useful when p(k) conforms to any distribution. Simulations based on both synthetic and real motion traces show the accuracy of the framework. Finally, we explore the impact of the nodes' behaviors under certain special distributions through numerical results.
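
    The framework itself is analytic; the following Python sketch is only a toy Monte Carlo illustration of the underlying idea, in which a node accepts the message with probability p(k) after having observed k carriers. The population size, contact probability and the particular form of p(k) are assumptions made for the example, not quantities from the paper.

```python
import random

random.seed(1)
N, steps, meet_prob = 200, 200, 0.02             # illustrative population and contact rate

def p_trust(k):
    """Assumed probability of trusting the message after observing k carriers."""
    return 1.0 - 0.5 ** k

carriers = {0}                                   # the source node carries the message
observed = [0] * N                               # carriers each node has met so far
for _ in range(steps):
    for node in range(N):
        if node in carriers:
            continue
        # In each step the node meets each current carrier with a small probability.
        met = sum(1 for c in carriers if random.random() < meet_prob)
        if met:
            observed[node] += met
            if random.random() < p_trust(observed[node]):
                carriers.add(node)
print("carriers after", steps, "steps:", len(carriers))
```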

  20. Image content authentication based on channel coding

    Science.gov (United States)

    Zhang, Fan; Xu, Lei

    2008-03-01

    Content authentication determines whether an image has been tampered with or not and, if necessary, locates malicious alterations made to the image. Authentication of a still image or a video is motivated by the recipient's interest, and its principle is that a receiver must be able to identify the source of the document reliably. Several techniques and concepts based on data hiding or steganography have been designed as a means for image authentication. This paper presents a color image authentication algorithm based on convolutional coding. The high bits of the color digital image are coded with convolutional codes for tamper detection and localization. The authentication messages are hidden in the low bits of the image in order to keep the authentication invisible. All communication channels are subject to errors introduced by additive Gaussian noise in their environment. Data perturbations cannot be eliminated, but their effect can be minimized by the use of Forward Error Correction (FEC) techniques in the transmitted data stream and decoders in the receiving system that detect and correct bits in error. The message of each pixel is convolutionally encoded with the encoder. After parity checking and block interleaving, the redundant bits are embedded in the image offset. Tampering can be detected and the image restored without access to the original image.

  1. Performance enhancement of optical code-division multiple-access systems using transposed modified Walsh code

    Science.gov (United States)

    Sikder, Somali; Ghosh, Shila

    2018-02-01

    This paper presents the construction of unipolar transposed modified Walsh code (TMWC) and analysis of its performance in optical code-division multiple-access (OCDMA) systems. Specifically, the signal-to-noise ratio, bit error rate (BER), cardinality, and spectral efficiency were investigated. The theoretical analysis demonstrated that the wavelength-hopping time-spreading system using TMWC was robust against multiple-access interference and more spectrally efficient than systems using other existing OCDMA codes. In particular, the spectral efficiency was calculated to be 1.0370 when TMWC of weight 3 was employed. The BER and eye pattern for the designed TMWC were also successfully obtained using OptiSystem simulation software. The results indicate that the proposed code design is promising for enhancing network capacity.

  2. [A quick algorithm of dynamic spectrum photoelectric pulse wave detection based on LabVIEW].

    Science.gov (United States)

    Lin, Ling; Li, Na; Li, Gang

    2010-02-01

    Dynamic spectrum (DS) detection is attractive among the numerous noninvasive blood component detection methods because it eliminates the main interference from individual discrepancies and measurement conditions. The DS is a kind of spectrum extracted from the photoelectric pulse wave and is closely related to arterial blood. It can be used in noninvasive blood component concentration examination. The key issues in DS detection are high detection precision and high operation speed. The measurement precision can be improved by applying over-sampling and lock-in amplification to the pick-up of the photoelectric pulse wave in DS detection. In the present paper, the theoretical expression for the over-sampling and lock-in amplification method was first deduced. Then, in order to overcome the problems of large data volumes and excessive computation brought on by this technique, a quick algorithm based on LabVIEW and a method using external C code in the pick-up of the photoelectric pulse wave are presented. Experimental verification was conducted in the LabVIEW environment. The results show that with the presented method the operation speed is greatly increased and the required data memory is largely reduced.
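
    The over-sampling and lock-in principle is easy to illustrate outside LabVIEW: an oversampled signal is mixed with quadrature references at the modulation frequency and averaged, which recovers a weak pulsatile amplitude buried in noise. The Python sketch below uses invented sampling and signal parameters and is not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f_ref, duration = 50_000.0, 1_000.0, 1.0     # oversampled acquisition (illustrative)
t = np.arange(0.0, duration, 1.0 / fs)

# Simulated photoelectric signal: a weak modulated component buried in noise.
amplitude = 0.01
signal = amplitude * np.sin(2 * np.pi * f_ref * t) + 0.2 * rng.standard_normal(t.size)

# Digital lock-in amplification: mix with quadrature references at f_ref and
# average over the record (the long average acts as the low-pass filter).
i_comp = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_ref * t))
q_comp = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_ref * t))
recovered = np.hypot(i_comp, q_comp)
print(f"true amplitude {amplitude}, lock-in estimate {recovered:.4f}")
```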

  3. Measuring Modularity in Open Source Code Bases

    Directory of Open Access Journals (Sweden)

    Roberto Milev

    2009-03-01

    Full Text Available Modularity of an open source software code base has been associated with growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different size, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.

  4. FWM behavior in 2-D time-spreading wavelength-hopping OCDMA systems

    Science.gov (United States)

    Bazan, Taher M.

    2017-03-01

    A new formula for the signal-to-four-wave-mixing (FWM) crosstalk in 2-D time-spreading wavelength-hopping (TW) optical code division multiple access (OCDMA) systems is derived. The influence of several system parameters on the signal-to-FWM crosstalk ratio (SXR) is analyzed, including transmitted power per chip, code length, the number of active users, code weight, wavelength spacing, and transmission distance. Furthermore, for the first time, a closed-form expression for the total number of possible FWM products employing symmetric TW codes with equal wavelength spacing is investigated. The results show that the SXR is sensitive to minor variations in system parameters, especially the launched power level and the code length, while the wavelength spacing has less impact on the level of the generated FWM power.

  5. Research on formation of microsatellite communication with genetic algorithm.

    Science.gov (United States)

    Wu, Guoqiang; Bai, Yuguang; Sun, Zhaowei

    2013-01-01

    For a formation of three microsatellites which fly in the same orbit and perform three-dimensional stereo mapping of the Earth, this paper proposes an optimized design method for the space circular formation order based on an improved genetic algorithm and provides an intersatellite direct-sequence spread-spectrum communication system. The link equation for LEO formation-flying intersatellite links is derived from the special requirements of formation-flying microsatellite intersatellite links, and the transmitter power is confirmed through simulation. The space circular formation order design method based on the improved genetic algorithm can keep the formation order steady for a long time under various disturbances. The intersatellite direct-sequence spread-spectrum communication system is also provided. It is found that, when the distance is 1 km and the data rate is 1 Mbps, the input waveform matches the output waveform well, and LDPC coding can improve the communication performance: the error-correcting capability of the (512, 256) LDPC code is distinctly better than that of the (2, 1, 7) convolutional code. The designed system can satisfy the communication requirements of the microsatellites, so the presented method provides a significant theoretical foundation for formation flying and intersatellite communication.

  6. Neutron spectrum unfolding: Pt. 2

    International Nuclear Information System (INIS)

    Matiullah; Wiyaja, D.S.; Berzonis, M.A.; Bondars, H.; Lapenas, A.A.; Kudo, K.; Majeed, A.; Durrani, S.A.

    1991-01-01

    In Part I of this paper, we described the use of the computer code SAIPS in neutron spectrum unfolding. Here in Part II, we present our experimental work carried out to study the shape of the neutron spectrum in different experimental channels of a 5 MW light-water cooled and moderated research reactor. The spectral neutron flux was determined using various fission foils (placed in close contact with mica track detectors) and activation detectors. From the measured activities, the neutron spectrum was unfolded by SAIPS. (author)
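
    SAIPS itself is not described here; as a generic illustration of what an unfolding step does, the Python sketch below recovers a non-negative few-group flux from simulated foil activities by non-negative least squares, using an invented response matrix and invented "true" fluxes.

```python
import numpy as np
from scipy.optimize import nnls

# Invented response matrix R[i, j]: response of foil/detector i to unit flux in group j.
R = np.array([[0.90, 0.30, 0.05],
              [0.20, 0.80, 0.30],
              [0.05, 0.40, 0.90],
              [0.50, 0.50, 0.50]])
true_flux = np.array([2.0, 1.0, 0.5])            # "unknown" group fluxes (illustrative)

rng = np.random.default_rng(1)
measured = R @ true_flux + 0.02 * rng.standard_normal(R.shape[0])

# Unfold: find the non-negative group fluxes whose predicted activities best fit the data.
flux, residual = nnls(R, measured)
print("unfolded group fluxes:", flux, " residual norm:", residual)
```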

  7. Vertical Footbridge Vibrations: The Response Spectrum Methodology

    DEFF Research Database (Denmark)

    Georgakis, Christos; Ingólfsson, Einar Thór

    2008-01-01

    In this paper, a novel, accurate and readily codifiable methodology for the prediction of vertical footbridge response is presented. The methodology is based on the well-established response spectrum approach used in the majority of the world’s current seismic design codes of practice. The concept...... of a universally applicable reference response spectrum is introduced, from which the pedestrian-induced vertical response of any footbridge may be determined, based on a defined “event” and the probability of occurrence of that event. A series of Monte Carlo simulations are undertaken for the development...... period is introduced and its implication on the calculation of footbridge response is discussed. Finally, a brief comparison is made between the theoretically predicted pedestrian-induced vertical response of an 80m long RC footbridge (as an example) and actual field measurements. The comparison shows...

  8. Gyrokinetic Studies of Turbulence in Steep Gradient Region: Role of Turbulence Spreading and E x B Shear

    Energy Technology Data Exchange (ETDEWEB)

    T.S. Hahm; Z. Lin; P.H. Diamond; G. Rewoldt; W.X. Wang; S. Ethier; O. Gurcan; W.W. Lee; W.M. Tang

    2004-12-21

    An integrated program of gyrokinetic particle simulation and theory has been developed to investigate several outstanding issues in both turbulence and neoclassical physics. Gyrokinetic particle simulations of toroidal ion temperature gradient (ITG) turbulence spreading using the GTC code and its related dynamical model have been extended to the case with radially increasing ion temperature gradient, to study the inward spreading of edge turbulence toward the core. Due to turbulence spreading from the edge, the turbulence intensity in the core region is significantly enhanced over the value obtained from simulations of the core region only. Even when the core gradient is within the Dimits shift regime (i.e., self-generated zonal flows reduce the transport to a negligible value), a significant level of turbulence and transport is observed in the core due to spreading from the edge. The scaling of the turbulent front propagation speed is closer to the prediction from our nonlinear diffusion model than one based on linear toroidal coupling. A calculation of ion poloidal rotation in the presence of sharp density and toroidal angular rotation frequency gradients from the GTC-Neo particle simulation code shows that the results are significantly different from the conventional neoclassical theory predictions. An energy conserving set of a fully electromagnetic nonlinear gyrokinetic Vlasov equation and Maxwell's equations, which is applicable to edge turbulence, is being derived via the phase-space action variational Lie perturbation method. Our generalized ordering takes the ion poloidal gyroradius to be on the order of the radial electric field gradient length.

  9. Gyrokinetic studies of turbulence in steep gradient region: Role of turbulence spreading and E x B shear

    International Nuclear Information System (INIS)

    Hahm, T.S.; Lin, Z.; Diamond, P.H.; Gurcan, O.; Rewoldt, G.; Wang, W.X.; Ethier, S.; Lee, W.W.; Lewandowski, J.L.V.; Tang, W.M.

    2005-01-01

    An integrated program of gyrokinetic particle simulation and theory has been developed to investigate several outstanding issues in both turbulence and neoclassical physics. Gyrokinetic particle simulations of toroidal ion temperature gradient (ITG) turbulence spreading using the GTC code and its related dynamical model have been extended to the case with radially increasing ion temperature gradient, to study the inward spreading of edge turbulence toward the core. Due to turbulence spreading from the edge, the turbulence intensity in the core region is significantly enhanced over the value obtained from simulations of the core region only. Even when the core gradient is within the Dimits shift regime (i.e., self-generated zonal flows reduce the transport to a negligible value), a significant level of turbulence and transport is observed in the core due to spreading from the edge. The scaling of the turbulent front propagation speed is closer to the prediction from our nonlinear diffusion model than one based on linear toroidal coupling. A calculation of ion poloidal rotation in the presence of sharp density and toroidal angular rotation frequency gradients from the GTC-Neo particle simulation code shows that the results are significantly different from the conventional neoclassical theory predictions. An energy conserving set of a fully electromagnetic nonlinear gyrokinetic Vlasov equation and Maxwell's equations, which is applicable to edge turbulence, is being derived via the phase-space action variational Lie perturbation method. Our generalized ordering takes the ion poloidal gyroradius to be on the order of the radial electric field gradient length. (author)

  10. Design of ACM system based on non-greedy punctured LDPC codes

    Science.gov (United States)

    Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng

    2017-08-01

    In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes was designed. The RC-LDPC codes were constructed by a non-greedy puncturing method which showed good performance in the high code rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel was proposed. With this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is lowered. Simulations show that an increasingly pronounced coding gain can be obtained by the proposed ACM system with higher throughput.

  11. Spread F bubbles - Nonlinear Rayleigh-Taylor mode in two dimensions

    Science.gov (United States)

    Hudson, M. K.

    1978-01-01

    The paper discusses long-wavelength developed bottomside spread F, which has been attributed to the Rayleigh-Taylor instability. The nonlinear saturation amplitude and the k spectrum of the inertia-dominated Rayleigh-Taylor instability are found in two directions: east-west and vertical. As in the collisional case (Chaturvedi and Ossakow, 1977), the dominant nonlinearity is found to be two-dimensional. It is found that the linearly most unstable modes, which are primarily horizontal, saturate by the nonlinear generation of vertical spatial harmonics. The harmonics are damped by diffusion or recombination. The resulting amplitude spectrum indicates that bubbles are vertically elongated in both inertial and collisional regimes.

  12. Improvement of calculation method for temperature coefficient of HTTR by neutronics calculation code based on diffusion theory. Analysis for temperature coefficient by SRAC code system

    International Nuclear Information System (INIS)

    Goto, Minoru; Takamatsu, Kuniyoshi

    2007-03-01

    The HTTR temperature coefficients required for core dynamics calculations had previously been obtained from HTTR core calculations with a diffusion code, with corrections derived from core calculations with the Monte Carlo code MVP. This calculation method for the temperature coefficients was considered to have some issues to be improved, so the method was improved so that the temperature coefficients could be obtained without corrections by the Monte Carlo code. Specifically, the lattice model used for the temperature coefficient calculations was revised from the point of view of the neutron spectrum obtained in the lattice calculations. The HTTR core calculations were performed with the diffusion code using group constants generated by lattice calculations with the improved lattice model. The core calculations and the lattice calculations were performed with the SRAC code system. The HTTR core dynamics calculation was performed with the temperature coefficient obtained from the core calculation results. As a result, the core dynamics calculation showed good agreement with the experimental data, and a valid temperature coefficient could be calculated by the diffusion code alone, without corrections by the Monte Carlo code. (author)
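
    Independently of the code system used, an isothermal temperature coefficient is commonly obtained from two eigenvalue calculations at different temperatures via the reactivity difference. The short Python example below uses made-up k_eff values and temperatures purely to show the arithmetic.

```python
# Temperature coefficient of reactivity from two eigenvalue calculations
# (k_eff values and temperatures below are made up for illustration).
k_cold, T_cold = 1.0250, 300.0    # K
k_hot,  T_hot  = 1.0180, 600.0    # K

rho_cold = (k_cold - 1.0) / k_cold          # reactivity at each temperature
rho_hot  = (k_hot  - 1.0) / k_hot
alpha = (rho_hot - rho_cold) / (T_hot - T_cold)
print(f"isothermal temperature coefficient: {alpha * 1e5:.2f} pcm/K")
```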

  13. Development of Monte Carlo-based pebble bed reactor fuel management code

    International Nuclear Information System (INIS)

    Setiadipura, Topan; Obara, Toru

    2014-01-01

    Highlights: • A new Monte Carlo-based fuel management code for OTTO cycle pebble bed reactors was developed. • The double heterogeneity was modeled using a statistical method in the MVP-BURN code. • The code can perform analysis of the equilibrium and non-equilibrium phases. • Code-to-code comparisons for a Once-Through-Then-Out case were investigated. • The ability of the code to accommodate the void cavity was confirmed. - Abstract: A fuel management code for pebble bed reactors (PBRs) based on the Monte Carlo method has been developed in this study. The code, named Monte Carlo burnup analysis code for PBR (MCPBR), enables a simulation of the Once-Through-Then-Out (OTTO) cycle of a PBR from the running-in phase to the equilibrium condition. In MCPBR, a burnup calculation based on a continuous-energy Monte Carlo code, MVP-BURN, is coupled with an additional utility code to be able to simulate the OTTO cycle of a PBR. MCPBR has several advantages in modeling PBRs, namely its Monte Carlo neutron transport modeling, its capability of explicitly modeling the double heterogeneity of the PBR core, and its ability to model different axial fuel speeds in the PBR core. Analysis of the equilibrium condition of a simplified PBR was used as the validation test of MCPBR. The calculation results of the code were compared with the results of diffusion-based PBR fuel management codes, namely the VSOP and PEBBED codes. Using the JENDL-4.0 nuclide library, MCPBR gave a 4.15% and 3.32% lower k_eff value compared to VSOP and PEBBED, respectively, while using JENDL-3.3, MCPBR gave a 2.22% and 3.11% higher k_eff value compared to VSOP and PEBBED, respectively. The ability of MCPBR to analyze neutron transport in the top void of the PBR core and its effects was also confirmed.

  14. BRT-1 code for IBM 370/135

    International Nuclear Information System (INIS)

    Preda, I.

    1976-01-01

    BRT-1 is a transport code for obtaining the point-dependent thermal neutron spectrum in a reactor cell. The code described in this paper is BRT-1, originally written in FORTRAN IV for the UNIVAC 1108 computer under the CSCX operating system, converted to run on the IBM 370/135 under its disk operating system. (author)

  15. Optimization of the Penelope code in F language for the simulation of the X-ray spectrum in radiodiagnosis

    International Nuclear Information System (INIS)

    Ballon P, C. I.; Quispe V, N. Y.; Vega R, J. L. J.

    2017-10-01

    Computational simulation of the X-ray spectrum in the radiodiagnostic range allows the transport of X-rays and their interaction with matter to be studied using the Monte Carlo method. From the X-ray spectra, the dose received by the patient during a radiographic or CT study can be determined and the quality of the obtained image improved. The objective of the present work was to implement and optimize the open-source code Penelope (a Monte Carlo code for the simulation of the transport of electrons and photons in matter), 2008 version, by programming additional code in the functional language F, which doubled the processing speed, reducing the simulation time and the errors of the software originally programmed in Fortran 77. The results were compared with those of Penelope, obtaining good agreement. We also simulated a PDD curve (depth dose profile) for a Theratron Equinox cobalt-60 teletherapy device, thereby also validating the implemented software for high energies. (Author)

  16. Spread effects - methodology; Spredningseffekter - metodegrunnlag

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    Diffusion of technology, environmental effects and rebound effects are the principal effects from the funding of renewable energy and energy economising. It is difficult to estimate the impact of the spread effects both before the measures are implemented and after they are carried out. Statistical methods can be used to estimate the spread effects, but they are uncertain and always need to be complemented with qualitative and subjective evaluations. It is more appropriate to evaluate potential spread effects from surveillance of the market and of market data for a selection of technologies and parties. Based on this information, qualitative indicators for spread effects can be constructed and used both ex ante and ex post (ml)

  17. [Molecular characterization of resistance mechanisms: methicillin-resistant Staphylococcus aureus, extended-spectrum β-lactamases and carbapenemases].

    Science.gov (United States)

    Oteo, Jesús; Belén Aracil, María

    2015-07-01

    Multi-drug resistance in bacterial pathogens increases morbidity and mortality in infected patients and is a threat to public health because of its high capacity to spread. For both reasons, the rapid detection of multi-drug resistant bacteria is critical. Standard microbiological procedures require 48-72 h to provide antimicrobial susceptibility results, so there is growing interest in the development of rapid detection techniques. In recent years, the use of selective and differential culture-based methods has become widespread. However, the capacity to detect antibiotic resistance genes and their short turnaround times have made molecular methods a reference for the diagnosis of multidrug resistance. This review focuses on the molecular methods for detecting some mechanisms of antibiotic resistance with a high clinical and epidemiological impact: a) enzymatic resistance to broad-spectrum β-lactam antibiotics in Enterobacteriaceae, mainly extended-spectrum β-lactamases (ESBL) and carbapenemases; and b) methicillin resistance in Staphylococcus aureus. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.

  18. Flame Spread and Group-Combustion Excitation in Randomly Distributed Droplet Clouds with Low-Volatility Fuel near the Excitation Limit: a Percolation Approach Based on Flame-Spread Characteristics in Microgravity

    Science.gov (United States)

    Mikami, Masato; Saputro, Herman; Seo, Takehiko; Oyagi, Hiroshi

    2018-03-01

    Stable operation of liquid-fueled combustors requires the group combustion of fuel spray. Our study employs a percolation approach to describe unsteady group-combustion excitation based on findings obtained from microgravity experiments on the flame spread of fuel droplets. We focus on droplet clouds distributed randomly in three-dimensional square lattices with a low-volatility fuel, such as n-decane in room-temperature air, where the pre-vaporization effect is negligible. We also focus on the flame spread in dilute droplet clouds near the group-combustion-excitation limit, where the droplet interactive effect is assumed negligible. The results show that the occurrence probability of group combustion sharply decreases as the mean droplet spacing increases around a specific value, which is termed the critical mean droplet spacing. If the lattice size is at least about ten times the flame-spread limit distance, the flame-spread characteristics are similar to those over an infinitely large cluster. The number density of unburned droplets remaining after completion of burning attains a maximum around the critical mean droplet spacing. Therefore, the critical mean droplet spacing is a good index for stable combustion and unburned hydrocarbon. In the critical condition, the flame spreads through complicated paths, and thus the characteristic time scale of flame spread over droplet clouds has a very large value. The overall flame-spread rate of randomly distributed droplet clouds is almost the same as the flame-spread rate of a linear droplet array except over the flame-spread limit.

  19. LiTrack: A Fast Longitudinal Phase Space Tracking Code with Graphical User Interface

    International Nuclear Information System (INIS)

    Bane, K.L.F.

    2005-01-01

    Linac-based light sources and linear colliders typically apply longitudinal phase space manipulations in their design, including electron bunch compression and wakefield-induced energy spread control. Several computer codes handle such issues, but most also require detailed information on the transverse focusing lattice. In fact, in most linear accelerators the transverse distributions do not significantly affect the longitudinal ones and can be ignored initially. This allows the use of a fast 2D code to study longitudinal aspects without time-consuming considerations of the transverse focusing. LiTrack is based on a 15-year-old code (same name) originally written by one of us (KB), which is now a Matlab [1] code with additional features, such as a graphical user interface, prompt output plotting, and functional call within a script. This single-bunch tracking code includes RF acceleration, bunch compression to 3rd order, geometric and resistive short-range wakefields, aperture limits, synchrotron radiation, and flexible output plotting. The code was used to design both the LCLS [2] and the SPPS [3] projects at SLAC and typically runs 10^5 particles in < 1 minute. We describe the features, show some examples, and provide free access to the code.
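
    As a rough illustration of the kind of 2D longitudinal map such a code applies, the sketch below pushes a toy bunch through one off-crest RF section followed by a chicane. All parameter values and the simplified physics (no wakefields, no synchrotron radiation) are illustrative assumptions and do not reproduce LiTrack's actual implementation.

      import numpy as np

      def rf_section(z, delta, E0, V, phi, rf_wavelength):
          # Energy gain depends on each particle's position z within the RF wave.
          k = 2 * np.pi / rf_wavelength
          E = E0 * (1 + delta) + V * np.cos(phi + k * z)   # particle energy after the section
          E_ref = E0 + V * np.cos(phi)                     # reference-particle energy
          return z, (E - E_ref) / E_ref, E_ref

      def chicane(z, delta, R56):
          # Path-length change proportional to the relative energy error.
          return z + R56 * delta, delta

      rng = np.random.default_rng(0)
      z = rng.normal(0.0, 1.0e-3, 100_000)        # longitudinal position [m]
      delta = rng.normal(0.0, 1.0e-4, 100_000)    # relative energy error

      # Off-crest acceleration imprints an energy chirp, the chicane then compresses.
      z, delta, E_ref = rf_section(z, delta, E0=135e6, V=220e6,
                                   phi=np.deg2rad(-25.0), rf_wavelength=0.105)
      z, delta = chicane(z, delta, R56=-0.036)
      print(f"rms bunch length {z.std()*1e3:.3f} mm, rms energy spread {delta.std():.2e}")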

  20. Fire and Heat Spreading Model Based on Cellular Automata Theory

    Science.gov (United States)

    Samartsev, A. A.; Rezchikov, A. F.; Kushnikov, V. A.; Ivashchenko, V. A.; Bogomolov, A. S.; Filimonyuk, L. Yu; Dolinina, O. N.; Kushnikov, O. V.; Shulga, T. E.; Tverdokhlebov, V. A.; Fominykh, D. S.

    2018-05-01

    The distinctive feature of the proposed model of fire and heat spreading in premises is its reduced computational complexity, achieved by using cellular automata with probabilistic behaviour rules. The possibilities and prospects of using this model in practice are noted. The proposed model has a simple mechanism for integration with agent-based evacuation models. The joint use of these models could improve floor plans and reduce the evacuation time from premises during fires.
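
    A minimal sketch of a probabilistic cellular-automaton fire-spread update of the kind described above; the cell states, ignition probability and periodic boundaries are illustrative assumptions, not the authors' rule set.

      import numpy as np

      EMPTY, FUEL, BURNING, BURNT = 0, 1, 2, 3

      def step(grid, p_ignite, rng):
          # A fuel cell ignites with probability p_ignite per burning neighbour;
          # a burning cell burns out after one time step (periodic boundaries).
          burning = (grid == BURNING)
          n = (np.roll(burning, 1, 0).astype(int) + np.roll(burning, -1, 0) +
               np.roll(burning, 1, 1) + np.roll(burning, -1, 1))
          ignite_prob = 1.0 - (1.0 - p_ignite) ** n
          new = grid.copy()
          new[(grid == FUEL) & (rng.random(grid.shape) < ignite_prob)] = BURNING
          new[burning] = BURNT
          return new

      rng = np.random.default_rng(0)
      grid = np.full((100, 100), FUEL)
      grid[50, 50] = BURNING                      # ignition point in the room
      for _ in range(200):
          grid = step(grid, p_ignite=0.35, rng=rng)
      print("burnt fraction:", (grid == BURNT).mean())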

  1. Actor-critic-based ink drop spread as an intelligent controller

    OpenAIRE

    SAGHA, Hesam; AFRAKOTI, Iman Esmaili Paeen; BAGHERISHOURAKI, Saeed

    2014-01-01

    This paper introduces an innovative adaptive controller based on the actor-critic method. The proposed approach employs the ink drop spread (IDS) method as its main engine. The IDS method is a recent soft-computing approach: a universal fuzzy modeling technique that has also been used as a supervised controller. Its process is very similar to the processing system of the human brain. The proposed actor-critic method uses an IDS structure as an actor and a 2-dimensional...

  2. Velocity Spread Reduction for Axis-encircling Electron Beam Generated by Single Magnetic Cusp

    Science.gov (United States)

    Jeon, S. G.; Baik, C. W.; Kim, D. H.; Park, G. S.; Sato, N.; Yokoo, K.

    2001-10-01

    Physical characteristics of an annular Pierce-type electron gun are investigated analytically. An annular electron gun is used in conjunction with a non-adiabatic magnetic reversal and an adiabatic compression to produce an axis-encircling electron beam. Velocity spread close to zero is realized with an initial canonical angular momentum spread at the cathode when the beam trajectory does not coincide with the magnetic flux line. Both the analytical calculation and the EGUN code simulation confirm this phenomenon.

  3. Average intensity and spreading of partially coherent model beams propagating in a turbulent biological tissue

    International Nuclear Information System (INIS)

    Wu, Yuqian; Zhang, Yixin; Wang, Qiu; Hu, Zhengda

    2016-01-01

    For Gaussian beams with three different partially coherent models, including the Gaussian-Schell model (GSM), the Laguerre-Gaussian Schell model (LGSM) and the Bessel-Gaussian Schell model (BGSM), propagating through a turbulent biological tissue, the expression for the spatial coherence radius of a spherical wave propagating in a turbulent biological tissue, and the average intensity and beam spreading for GSM, LGSM and BGSM beams, are derived based on the fractal model of the power spectrum of refractive-index variations in biological tissue. Effects of the partially coherent model and of the parameters of biological turbulence on such beams are studied in numerical simulations. Our results reveal that the spreading of GSM beams is smaller than that of LGSM and BGSM beams under the same conditions, and that a beam with a larger source coherence width has smaller beam spreading than one with a smaller coherence width. The results are useful for any application involving light beam propagation through tissues, especially cases where the average intensity and spreading properties of the light should be taken into account to evaluate system performance, and for investigations of the structure of biological tissue. - Highlights: • The spatial coherence radius of a spherical wave propagating in a turbulent biological tissue is developed. • Expressions for the average intensity and beam spreading of GSM, LGSM and BGSM beams in a turbulent biological tissue are derived. • The contrast between the three partially coherent model beams is shown in numerical simulations. • The results are useful for any application involving light beam propagation through tissues.

  4. Construction and Analysis of a Novel 2-D Optical Orthogonal Codes Based on Modified One-coincidence Sequence

    Science.gov (United States)

    Ji, Jianhua; Wang, Yanfen; Wang, Ke; Xu, Ming; Zhang, Zhipeng; Yang, Shuwen

    2013-09-01

    A new two-dimensional OOC (optical orthogonal code) named PC/MOCS is constructed, using a PC (prime code) for time spreading and an MOCS (modified one-coincidence sequence) for wavelength hopping. Compared with PC/PC, the number of wavelengths for PC/MOCS is not limited to a prime number. Compared with PC/OCS, the length of the MOCS need not be expanded to the same length as the PC. PC/MOCS can be constructed flexibly and can use the available wavelengths effectively. Theoretical analysis shows that PC/MOCS can reduce the bit error rate (BER) of an OCDMA system and can support more users than PC/PC and PC/OCS.
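
    For illustration of the time-spreading component only, the sketch below builds the textbook prime-code (PC) family over GF(p) and checks its correlation behaviour; the MOCS wavelength-hopping part and the exact PC/MOCS pairing proposed in the paper are not reproduced here.

      import numpy as np

      def prime_codes(p):
          # p codewords of length p*p and weight p: codeword a has a single '1'
          # in chip (a*j mod p) of block j, for j = 0 .. p-1 (textbook construction).
          codes = np.zeros((p, p * p), dtype=int)
          for a in range(p):
              for j in range(p):
                  codes[a, j * p + (a * j) % p] = 1
          return codes

      codes = prime_codes(7)                       # 7 codewords, length 49, weight 7
      # Periodic cross-correlation between two distinct codewords stays far below
      # the autocorrelation peak (the code weight p).
      xc = [int(np.sum(codes[1] * np.roll(codes[3], s))) for s in range(49)]
      print("max cross-correlation:", max(xc), "| autocorrelation peak:", int(codes[1] @ codes[1]))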

  5. Development of an advanced code system for fast-reactor transient analysis

    International Nuclear Information System (INIS)

    Konstantin Mikityuk; Sandro Pelloni; Paul Coddington

    2005-01-01

    FAST (Fast-spectrum Advanced Systems for power production and resource management) is a recently approved PSI activity in the area of fast-spectrum core and safety analysis, with emphasis on generic developments and Generation IV systems. Within the framework of the FAST project we study static and transient core physics, reactor system behaviour and safety, and related international experiments. The main current goal of the project is to develop a unique analytical and code capability for core and safety analysis of critical (and sub-critical) fast-spectrum systems, with an initial emphasis on gas-cooled fast reactors. The structure of the code system is shown in Fig. 1. The main components of the FAST code system are 1) the ERANOS code for preparation of basic cross-sections and their partial derivatives; 2) the PARCS transient nodal-method multi-group neutron diffusion code for simulation of spatial (3D) neutron kinetics in hexagonal and square geometries; 3) the TRAC/AAA code for system thermal hydraulics; 4) the FRED transient model for fuel thermal-mechanical behaviour; 5) the PVM system as an interface between the separate parts of the code system. The paper presents the structure of the code system (Fig. 1), the organization of interfaces and data exchanges between the main parts of the code system, and examples of verification and application of the separate codes and of the system as a whole. (authors)

  6. Design of Packet-Based Block Codes with Shift Operators

    Directory of Open Access Journals (Sweden)

    Ilow Jacek

    2010-01-01

    Full Text Available This paper introduces packet-oriented block codes for the recovery of lost packets and the correction of an erroneous single packet. Specifically, a family of systematic codes is proposed, based on a Vandermonde matrix applied to a group of information packets to construct redundant packets, where the elements of the Vandermonde matrix are bit-level right arithmetic shift operators. The code design is applicable to packets of any size, provided that the packets within a block of information packets are of uniform length. In order to decrease the overhead associated with packet padding using shift operators, non-Vandermonde matrices are also proposed for designing packet-oriented block codes. An efficient matrix inversion procedure for the off-line design of the decoding algorithm is presented to recover lost packets. The error correction capability of the design is investigated as well. The decoding algorithm, based on syndrome decoding, to correct a single erroneous packet in a group of received packets is presented. The paper is equipped with examples of codes using different parameters. The code designs and their performance are tested using Monte Carlo simulations; the results obtained exhibit good agreement with the corresponding theoretical results.

  7. grmonty: A MONTE CARLO CODE FOR RELATIVISTIC RADIATIVE TRANSPORT

    International Nuclear Information System (INIS)

    Dolence, Joshua C.; Gammie, Charles F.; Leung, Po Kin; Moscibrodzka, Monika

    2009-01-01

    We describe a Monte Carlo radiative transport code intended for calculating spectra of hot, optically thin plasmas in full general relativity. The version we describe here is designed to model hot accretion flows in the Kerr metric and therefore incorporates synchrotron emission and absorption, and Compton scattering. The code can be readily generalized, however, to account for other radiative processes and an arbitrary spacetime. We describe a suite of test problems, and demonstrate the expected N^(-1/2) convergence rate, where N is the number of Monte Carlo samples. Finally, we illustrate the capabilities of the code with a model calculation, a spectrum of the slowly accreting black hole Sgr A* based on data provided by a numerical general relativistic MHD model of the accreting plasma.

  8. NSPEC - A neutron spectrum code for beam-heated fusion plasmas

    International Nuclear Information System (INIS)

    Scheffel, J.

    1983-06-01

    A 3-dimensional computer code is described which computes neutron spectra due to beam heating of fusion plasmas. Three types of interaction are considered: thermonuclear plasma-plasma, beam-plasma and beam-beam interactions. Beam deposition is modelled by the NFREYA code. The applied steady-state beam distribution, as a function of pitch angle and velocity, contains the effects of energy diffusion, friction, angular scattering, charge exchange, electric field and source pitch-angle distribution. The neutron spectra, generated by Monte-Carlo methods, are computed with respect to given lines of sight. This enables the code to be used for neutron diagnostics. (author)

  9. Spectrum sensing algorithm based on autocorrelation energy in cognitive radio networks

    Science.gov (United States)

    Ren, Shengwei; Zhang, Li; Zhang, Shibing

    2016-10-01

    Cognitive radio networks have wide applications in the smart home, personal communications and other wireless communications. Spectrum sensing is the main challenge in cognitive radio. This paper proposes a new spectrum sensing algorithm based on the autocorrelation energy of the received signal. By taking the autocorrelation energy of the received signal as the spectrum sensing statistic, the effect of channel noise on the detection performance is reduced. Simulation results show that the algorithm is effective and performs well at low signal-to-noise ratio. Compared with the maximum generalized eigenvalue detection (MGED) algorithm, the function of covariance matrix based detection (FMD) algorithm and the autocorrelation-based detection (AD) algorithm, the proposed algorithm has a 2-11 dB advantage.
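
    A minimal sketch of an autocorrelation-energy test statistic in the spirit of the algorithm described above; the lag range, normalisation and toy signals are assumptions for illustration, not the authors' exact definition or threshold.

      import numpy as np

      def autocorr_energy(x, max_lag=8):
          # Energy of the sample autocorrelation over a few non-zero lags.
          x = x - x.mean()
          N = len(x)
          r = np.array([np.sum(x[:N - m] * np.conj(x[m:])) / N
                        for m in range(1, max_lag + 1)])
          return float(np.sum(np.abs(r) ** 2))

      rng = np.random.default_rng(1)
      N = 4096
      noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
      t = np.arange(N)
      primary = 0.3 * np.exp(2j * np.pi * 0.01 * t)     # weak correlated signal, roughly -10 dB SNR
      print("H0 (noise only):    ", autocorr_energy(noise))
      print("H1 (signal + noise):", autocorr_energy(noise + primary))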

  10. A perturbative approach to the redshift space power spectrum: beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Bose, Benjamin; Koyama, Kazuya, E-mail: benjamin.bose@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Burnaby Road, Portsmouth, Hampshire, PO1 3FX (United Kingdom)

    2016-08-01

    We develop a code to produce the power spectrum in redshift space based on standard perturbation theory (SPT) at 1-loop order. The code can be applied to a wide range of modified gravity and dark energy models using a recently proposed numerical method by A. Taruya to find the SPT kernels. This includes Horndeski's theory with a general potential, which accommodates both chameleon and Vainshtein screening mechanisms and provides a non-linear extension of the effective theory of dark energy up to the third order. Focus is on a recent non-linear model of the redshift space power spectrum which has been shown to model the anisotropy very well at relevant scales for the SPT framework, as well as capturing relevant non-linear effects typical of modified gravity theories. We provide consistency checks of the code against established results and elucidate its application in the light of upcoming high precision RSD data.

  11. Sensitivity of Multicarrier Two-Dimensional Spreading Schemes to Synchronization Errors

    Directory of Open Access Journals (Sweden)

    Geneviève Jourdain

    2008-06-01

    Full Text Available This paper presents the impact of synchronization errors on the performance of a downlink multicarrier two-dimensional spreading OFDM-CDMA system. This impact is measured by the degradation of the signal to interference and noise ratio (SINR) obtained after despreading and equalization. The contribution of this paper is twofold. First, we use some properties of random matrix and free probability theories to derive a new expression of the SINR. This expression is then independent of the actual value of the spreading codes while still accounting for the orthogonality between codes. This model is validated by means of Monte Carlo simulations. Secondly, the model is exploited to derive the SINR degradation of OFDM-CDMA systems due to synchronization errors which include a timing error, a carrier frequency offset, and a sampling frequency offset. It is also exploited to compare the sensitivities of MC-CDMA and MC-DS-CDMA systems to these errors in a frequency selective channel. This work is carried out for zero-forcing and minimum mean square error equalizers.

  12. Multicarrier Block-Spread CDMA for Broadband Cellular Downlink

    Directory of Open Access Journals (Sweden)

    Leus Geert

    2004-01-01

    Full Text Available Effective suppression of multiuser interference (MUI) and mitigation of frequency-selective fading effects within the complexity constraints of the mobile constitute major challenges for broadband cellular downlink transceiver design. Existing wideband direct-sequence (DS) code division multiple access (CDMA) transceivers suppress MUI statistically by restoring the orthogonality among users at the receiver. However, they call for receive diversity and multichannel equalization to mitigate the fading effects caused by deep channel fades. Relying on redundant block spreading and linear precoding, we design a so-called multicarrier block-spread CDMA (MCBS-CDMA) transceiver that preserves the orthogonality among users and guarantees symbol detection, regardless of the underlying frequency-selective fading channels. These properties allow for deterministic MUI elimination through low-complexity block despreading and enable full diversity gains, irrespective of the system load. Different options to perform equalization and decoding, either jointly or separately, strike the trade-off between performance and complexity. To improve the performance over multi-input multi-output (MIMO) multipath fading channels, our MCBS-CDMA transceiver combines well with space-time block-coding (STBC) techniques, to exploit both multiantenna and multipath diversity gains, irrespective of the system load. Simulation results demonstrate the superior performance of MCBS-CDMA compared to competing alternatives.

  13. On the automated assessment of nuclear reactor systems code accuracy

    International Nuclear Information System (INIS)

    Kunz, Robert F.; Kasmala, Gerald F.; Mahaffy, John H.; Murray, Christopher J.

    2002-01-01

    An automated code assessment program (ACAP) has been developed to provide quantitative comparisons between nuclear reactor systems (NRS) code results and experimental measurements. The tool provides a suite of metrics for quality of fit to specific data sets, and the means to produce one or more figures of merit (FOM) for a code, based on weighted averages of results from the batch execution of a large number of code-experiment and code-code data comparisons. Accordingly, this tool has the potential to significantly streamline the verification and validation (V and V) processes in NRS code development environments which are characterized by rapidly evolving software, many contributing developers and a large and growing body of validation data. In this paper, a survey of data conditioning and analysis techniques is summarized which focuses on their relevance to NRS code accuracy assessment. A number of methods are considered for their applicability to the automated assessment of the accuracy of NRS code simulations. A variety of data types and computational modeling methods are considered from a spectrum of mathematical and engineering disciplines. The goal of the survey was to identify needs, issues and techniques to be considered in the development of an automated code assessment procedure, to be used in United States Nuclear Regulatory Commission (NRC) advanced thermal-hydraulic (T/H) code consolidation efforts. The ACAP software was designed based in large measure on the findings of this survey. An overview of this tool is summarized and several NRS data applications are provided. The paper is organized as follows: The motivation for this work is first provided by background discussion that summarizes the relevance of this subject matter to the nuclear reactor industry. Next, the spectrum of NRS data types is classified into categories, in order to provide a basis for assessing individual comparison methods. Then, a summary of the survey is provided, where each
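
    A toy sketch of the weighted-average figure-of-merit idea described above: several per-comparison goodness-of-fit metrics are combined into one number over a batch of code-experiment cases. The metric names, weights and data below are illustrative placeholders, not ACAP's actual metric suite.

      import numpy as np

      def metrics(calc, exp):
          # A few simple goodness-of-fit measures for one code-experiment comparison.
          err = calc - exp
          scale = np.abs(exp).mean() + 1e-12
          return {"rms_rel_error": np.sqrt(np.mean(err ** 2)) / scale,
                  "mean_rel_bias": abs(err.mean()) / scale,
                  "peak_rel_error": np.abs(err).max() / scale}

      def figure_of_merit(comparisons, weights):
          # Weighted average of the per-comparison metrics over the whole batch.
          scores = [sum(weights[k] * v
                        for k, v in metrics(np.asarray(c), np.asarray(e)).items())
                    for c, e in comparisons]
          return float(np.mean(scores))

      cases = [([1.0, 2.1, 2.9, 4.2], [1.0, 2.0, 3.0, 4.0]),     # e.g. a pressure trace
               ([10.5, 11.2, 12.4], [10.0, 11.0, 12.0])]         # e.g. a temperature trace
      weights = {"rms_rel_error": 0.5, "mean_rel_bias": 0.2, "peak_rel_error": 0.3}
      print("figure of merit (lower is better):", round(figure_of_merit(cases, weights), 4))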

  14. OXA-48 and CTX-M-15 extended-spectrum beta-lactamases in raw milk in Lebanon: epidemic spread of dominant Klebsiella pneumoniae clones.

    Science.gov (United States)

    Diab, Mohamad; Hamze, Monzer; Bonnet, Richard; Saras, Estelle; Madec, Jean-Yves; Haenni, Marisa

    2017-11-01

    Raw milk has recently been reported as a source of extended-spectrum beta-lactamase (ESBL) and carbapenemase genes. We thus investigated the prevalence of ESBL- and carbapenemase-producing Enterobacteriaceae in raw milk in Lebanon in order to assess the risk of transfer of these bacteria to humans. A high prevalence (30.2 %) of CTX-M-15-producing K. pneumoniae was detected in raw bovine milk. Three main K. pneumoniae clones were identified by PFGE and MLST typing. Southern blot experiments revealed that one of these clones carried the blaCTX-M-15 gene chromosomally. Moreover, one OXA-48-producing K. pneumoniae ST530 and seven CTX-M-15-producing Escherichia coli sharing the same ST were also detected. These findings highlight the spread of dominant CTX-M-15-producing K. pneumoniae clones and OXA-48-producing isolates in the food chain. Milk, which is mostly consumed raw in Lebanon, may be a source of human exposure to ESBLs and carbapenemases.

  15. Particle-in-Cell Codes for plasma-based particle acceleration

    CERN Document Server

    Pukhov, Alexander

    2016-01-01

    Basic principles of particle-in-cell (PIC) codes, with the main application to plasma-based acceleration, are discussed. The ab initio full electromagnetic relativistic PIC codes provide the most reliable description of plasmas. Their properties are considered in detail. Representing the most fundamental model, the full PIC codes are computationally expensive. Plasma-based acceleration is a multi-scale problem with very disparate scales: the smallest scale is the laser or plasma wavelength (from one to a hundred microns) and the largest scale is the acceleration distance (from a few centimeters to meters or even kilometers). The Lorentz-boost technique allows the scale disparity to be reduced, at the cost of complicating the simulations and causing unphysical numerical instabilities in the code. Another possibility is to use the quasi-static approximation, in which the disparate scales are separated analytically.
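
    A minimal 1D electrostatic PIC cycle (charge deposition, field solve, particle push) illustrating the basic principles mentioned above, in normalised units with periodic boundaries and nearest-grid-point weighting. This is a pedagogical sketch with a classic two-stream test setup, not a relativistic electromagnetic code of the kind used for plasma-based acceleration.

      import numpy as np

      ng, n_part, L, dt, steps = 128, 20_000, 4 * np.pi, 0.1, 300
      dx = L / ng
      rng = np.random.default_rng(0)

      # Two cold counter-streaming electron beams on a neutralising ion background;
      # normalised units with charge q = -1, mass m = 1, epsilon_0 = 1.
      x = rng.uniform(0, L, n_part)
      v = np.where(rng.random(n_part) < 0.5, 1.0, -1.0) + 0.01 * rng.normal(size=n_part)
      weight = L / n_part                          # gives unit mean electron density
      k = 2 * np.pi * np.fft.rfftfreq(ng, d=dx)    # wavenumbers for the field solve

      for _ in range(steps):
          cell = (x / dx).astype(int) % ng
          n_e = np.bincount(cell, minlength=ng) * weight / dx    # nearest-grid-point deposit
          rho = 1.0 - n_e                                        # ions minus electrons
          rho_k = np.fft.rfft(rho)
          E_k = np.zeros_like(rho_k)
          E_k[1:] = -1j * rho_k[1:] / k[1:]                      # Gauss law: i k E_k = rho_k
          E = np.fft.irfft(E_k, n=ng)
          v += -E[cell] * dt                                     # q/m = -1 for electrons
          x = (x + v * dt) % L

      print("field energy after the run:", 0.5 * np.sum(E ** 2) * dx)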

  16. Web- and system-code based, interactive, nuclear power plant simulators

    International Nuclear Information System (INIS)

    Kim, K. D.; Jain, P.; Rizwan, U.

    2006-01-01

    Using two different approaches, on-line, web- and system-code based graphical user interfaces have been developed for reactor system analysis. Both are LabVIEW (graphical programming language developed by National Instruments) based systems that allow local users as well as those at remote sites to run, interact and view the results of the system code in a web browser. In the first approach, only the data written by the system code in a tab separated ASCII output file is accessed and displayed graphically. In the second approach, LabVIEW virtual instruments are coupled with the system code as dynamic link libraries (DLL). RELAP5 is used as the system code to demonstrate the capabilities of these approaches. From collaborative projects between teams in geographically remote locations to providing system code experience to distance education students, these tools can be very beneficial in many areas of teaching and R and D. (authors)

  17. Auto Code Generation for Simulink-Based Attitude Determination Control System

    Science.gov (United States)

    MolinaFraticelli, Jose Carlos

    2012-01-01

    This paper details the work done to auto-generate C code from a Simulink-based Attitude Determination Control System (ADCS) to be used on target platforms. NASA Marshall engineers have developed an ADCS Simulink simulation to be used as a component of the flight software of a satellite. The generated code can be used for hardware-in-the-loop testing of satellite components in a convenient manner, with easily tunable parameters. Due to the nature of embedded hardware components such as microcontrollers, this simulation code cannot be used directly, as it is, on the target platform and must first be converted into C code; this process is known as auto code generation. In order to generate C code from this simulation, it must be modified to follow specific standards set in place by the auto code generation process. Some of these modifications include changing certain simulation models into their atomic representations, which can bring new complications into the simulation. The execution order of these models can change based on these modifications. Great care must be taken in order to maintain a working simulation that can also be used for auto code generation. After modifying the ADCS simulation for the auto code generation process, it is shown that the difference between the output data of the former and that of the latter is within acceptable bounds. Thus, it can be said that the process is a success, since all the output requirements are met. Based on these results, it can be argued that the generated C code can be effectively used by any desired platform as long as it follows the specific memory requirements established in the Simulink model.

  18. An information spreading model based on online social networks

    Science.gov (United States)

    Wang, Tao; He, Juanjuan; Wang, Xiaoxia

    2018-01-01

    Online social platforms have become very popular in recent years. In addition to spreading information, users can review or collect information on online social platforms. According to the information spreading rules of online social networks, a new information spreading model, the IRCSS model, is proposed in this paper. It includes a sharing mechanism, a reviewing mechanism, a collecting mechanism and a stifling mechanism. Mean-field equations are derived to describe the dynamics of the IRCSS model. Moreover, the steady states of reviewers, collectors and stiflers, and the effects of the parameters on the peak values of reviewers, collectors and sharers, are analyzed. Finally, numerical simulations are performed on different networks. Results show that the collecting and reviewing mechanisms, as well as the connectivity of the network, make information travel wider and faster; compared with the WS and ER networks, the speed of reviewing, sharing and collecting information is fastest on the BA network.
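
    Since the paper's mean-field equations are not reproduced here, the sketch below integrates a heavily simplified, hypothetical compartment system with sharing, reviewing, collecting and stifling rates, only to show how such mean-field dynamics can be explored numerically; all rate names and values are assumptions, not the IRCSS equations themselves.

      import numpy as np

      def ircss_step(state, dt=0.01, beta=0.4, rho=0.15, gamma=0.1, delta=0.05):
          # Hypothetical rates: sharing (beta), reviewing (rho), collecting (gamma),
          # stifling (delta). The population fractions always sum to one.
          I, S, R, C, X = state
          dI = -beta * I * S
          dS = beta * I * S - (rho + gamma + delta) * S
          dR = rho * S - delta * R
          dC = gamma * S - delta * C
          dX = delta * (S + R + C)
          return state + dt * np.array([dI, dS, dR, dC, dX])

      # ignorants, sharers, reviewers, collectors, stiflers (as population fractions)
      state = np.array([0.99, 0.01, 0.0, 0.0, 0.0])
      history = [state]
      for _ in range(5000):
          state = ircss_step(state)
          history.append(state)
      history = np.array(history)
      print("peak sharer fraction:", round(float(history[:, 1].max()), 4))
      print("final stifler fraction:", round(float(history[-1, 4]), 4))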

  19. ZAKI: a Windows-based k0 standardization code for in-core INAA

    International Nuclear Information System (INIS)

    Ojo, J.O.; Filby, R.H.

    2002-01-01

    A new computer code, ZAKI, for k0-based INAA standardization, written in Visual Basic for the WINDOWS environment, is described. The parameter α, measuring the deviation of the epithermal neutron spectrum shape from the ideal 1/E shape, and the thermal-to-epithermal flux ratio f are monitored at each irradiation position for each irradiation using the 'triple bare monitor with k0' technique. Stability of the irradiation position with respect to α and f is therefore assumed only for the duration of the irradiation. This makes it possible to use k0 standardization even for in-core reactor irradiation channels without a priori knowledge of the α and f values, as required by existing commercial software. ZAKI is considerably versatile and contains features which allow for the use of several detectors at different counting geometries, direct input of the peak-search output from GeniePc, and automatic nuclide identification of all gamma lines using a built-in library. Sample results for two certified reference materials are presented

  20. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Science.gov (United States)

    Rallapalli, Varsha H.

    2016-01-01

    Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRENV) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRENV has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRENV. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating feasibility of neural SNRENV computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRENV in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.

  1. Default Spread and Term Spread as Business Cycle Proxy Variables in the Fama-French Model

    Directory of Open Access Journals (Sweden)

    Edwin Hendra

    2015-08-01

    Full Text Available This research aims to apply the Fama-French models and to test the effect of alternative bond-yield-spread variables, the default spread (RBBB - RAAA and RAAA - RF) and the term spread (RSUN10 - RSUN1), as proxy variables for the business cycle, using IDX stock data during 2005-2010. Four types of asset pricing models are tested: the Sharpe-Lintner CAPM, the Fama-French model, the Hwang et al. model, and a hybrid model. The results show that the size effect and the value effect have an impact on excess stock returns. The slopes on market beta, SMB, and HML are more sensitive for big-size and high-B/M stocks. Default spreads and term spreads in the Hwang et al. model can explain the value effect and weakly explain the size effect, while this explanatory power disappears in the hybrid model. Based on the adjusted R2 and the frequency of rejection of non-zero alphas, the hybrid model is found to be the most suitable model.
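
    A minimal sketch of the kind of time-series regression used to test such models: excess returns regressed on the market factor, SMB, HML and the two spread proxies, with an adjusted R2 reported. The data below are random placeholders and the variable names follow the abstract, not any specific dataset.

      import numpy as np

      rng = np.random.default_rng(0)
      T = 120                                       # monthly observations
      mkt = rng.normal(0.010, 0.05, T)              # market excess return
      smb = rng.normal(0.002, 0.03, T)
      hml = rng.normal(0.003, 0.03, T)
      dsp = rng.normal(0.015, 0.005, T)             # default spread proxy (RBBB - RAAA)
      tsp = rng.normal(0.020, 0.010, T)             # term spread proxy (RSUN10 - RSUN1)
      r_ex = 0.9 * mkt + 0.3 * smb + 0.5 * hml + rng.normal(0, 0.02, T)   # portfolio excess return

      X = np.column_stack([np.ones(T), mkt, smb, hml, dsp, tsp])
      beta, *_ = np.linalg.lstsq(X, r_ex, rcond=None)
      resid = r_ex - X @ beta
      adj_r2 = 1.0 - resid.var(ddof=X.shape[1]) / r_ex.var(ddof=1)
      print("alpha and factor slopes:", np.round(beta, 3))
      print("adjusted R^2:", round(float(adj_r2), 3))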

  2. Comparison of DT neutron production codes MCUNED, ENEA-JSI source subroutine and DDT

    Energy Technology Data Exchange (ETDEWEB)

    Čufar, Aljaž, E-mail: aljaz.cufar@ijs.si [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Lengar, Igor; Kodeli, Ivan [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Milocco, Alberto [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Sauvan, Patrick [Departamento de Ingeniería Energética, E.T.S. Ingenieros Industriales, UNED, C/Juan del Rosal 12, 28040 Madrid (Spain); Conroy, Sean [VR Association, Uppsala University, Department of Physics and Astronomy, PO Box 516, SE-75120 Uppsala (Sweden); Snoj, Luka [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)

    2016-11-01

    Highlights: • The results of three codes capable of simulating accelerator-based DT neutron generators were compared on a simple model in which only a thin target made of a mixture of titanium and tritium is present. Two typical deuteron beam energies, 100 keV and 250 keV, were used in the comparison. • Comparisons of the angular dependence of the total neutron flux and spectrum, as well as of the spectrum of all neutrons emitted from the target, show general agreement of the results but also some noticeable differences. • A comparison of the figures of merit of the calculations using the different codes showed that the computational time necessary to achieve the same statistical uncertainty can vary by more than a factor of 30 when different codes are used to simulate the DT neutron generator. - Abstract: As the DT fusion reaction produces neutrons with energies significantly higher than in fission reactors, special fusion-relevant benchmark experiments are often performed using DT neutron generators. However, commonly used Monte Carlo particle transport codes such as MCNP or TRIPOLI cannot be directly used to analyze these experiments since they do not have the capability to model the production of DT neutrons. Three of the available approaches to model the DT neutron generator source are the MCUNED code, the ENEA-JSI DT source subroutine and the DDT code. The MCUNED code is an extension of the well-established and validated MCNPX Monte Carlo code. The ENEA-JSI source subroutine was originally prepared for the modelling of the FNG experiments using different versions of the MCNP code (-4, -5, -X) and was later extended to allow the modelling of both DT and DD neutron sources. The DDT code prepares the DT source definition file (SDEF card in MCNP), which can then be used in different versions of the MCNP code. In the paper the methods for the simulation of DT neutron production used in the codes are briefly described and compared for the case of a

  3. Electroluminescence enhancement for near-ultraviolet light emitting diodes with graphene/AZO-based current spreading layers

    DEFF Research Database (Denmark)

    Lin, Li; Ou, Yiyu; Zhu, Xiaolong

    Near-ultraviolet light emitting diodes with different aluminum-doped zinc oxide-based current spreading layers were fabricated and their electroluminescence (EL) was compared; a 170% EL enhancement was achieved by using a graphene-based interlayer. GaN-based near-ultraviolet light emitting diodes (NUV-LEDs) have attracted significant research interest due to their intensive applications in various areas, where indium tin oxide (ITO) is one of the most widely employed transparent conductive materials for NUV LEDs. Compared to ITO, indium-free aluminum-doped zinc oxide (AZO) has similar electrical ... with a new type of current spreading layer (CSL) which combines AZO and a single-layer graphene (SLG) as an effective transparent CSL [1]. In the present work, LEDs with a solo AZO CSL (Fig. 1(a)) and an SLG/Ni/AZO-based CSL (Fig. 1(b)) were both fabricated for EL comparison. Standard mesa fabrication including...

  4. A study of the spreading scheme for viral marketing based on a complex network model

    Science.gov (United States)

    Yang, Jianmei; Yao, Canzhong; Ma, Weicheng; Chen, Guanrong

    2010-02-01

    Buzzword-based viral marketing, also known as digital word-of-mouth marketing, is a marketing mode attached to carriers on the Internet, which can copy marketing information rapidly and at low cost. Viral marketing actually uses a pre-existing social network whose scale, however, is believed to be so large and so random that its theoretical analysis is intractable and unmanageable. There are very few reports in the literature on how to design a spreading scheme for viral marketing on real social networks according to traditional marketing theory or the relatively new network marketing theory. Complex network theory provides a new model for the study of large-scale complex systems, using the latest developments of graph theory and computing techniques. From this perspective, the present paper extends complex network theory and modeling into the research of general viral marketing, and develops a specific spreading scheme for viral marketing and an approach to designing the scheme based on a real complex network in the QQ instant messaging system. This approach is shown to be rather universal and can be further extended to the design of various spreading schemes for viral marketing based on different instant messaging systems.

  5. Optical image encryption based on real-valued coding and subtracting with the help of QR code

    Science.gov (United States)

    Deng, Xiaopeng

    2015-08-01

    A novel optical image encryption method based on real-valued coding and subtraction is proposed with the help of a quick response (QR) code. In the encryption process, the original image to be encoded is first transformed into the corresponding QR code, which is then encoded into two phase-only masks (POMs) using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone-collected results show that the method is feasible and has strong tolerance to noise, phase difference and the ratio between the intensities of the two decryption light beams.

  6. Modification of the fast fourier transform-based method by signal mirroring for accuracy quantification of thermal-hydraulic system code

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Tae Wook; Jeong, Jae Jun [School of Mechanical Engineering, Pusan National University, Busan (Korea, Republic of); Choi, Ki Yong [Korea Atomic Energy Research Institute (KAERI), Daejeon (Korea, Republic of)

    2017-08-15

    A thermal–hydraulic system code is an essential tool for the design and safety analysis of a nuclear power plant, and its accuracy quantification is very important for the code assessment and applications. The fast Fourier transform-based method (FFTBM) by signal mirroring (FFTBM-SM) has been used to quantify the accuracy of a system code by using a comparison of the experimental data and the calculated results. The method is an improved version of the FFTBM, and it is known that the FFTBM-SM judges the code accuracy in a more consistent and unbiased way. However, in some applications, unrealistic results have been obtained. In this study, it was found that accuracy quantification by FFTBM-SM is dependent on the frequency spectrum of the fast Fourier transform of experimental and error signals. The primary objective of this study is to reduce the frequency dependency of FFTBM-SM evaluation. For this, it was proposed to reduce the cut off frequency, which was introduced to cut off spurious contributions, in FFTBM-SM. A method to determine an appropriate cut off frequency was also proposed. The FFTBM-SM with the modified cut off frequency showed a significant improvement of the accuracy quantification.

  7. Modification of the fast fourier transform-based method by signal mirroring for accuracy quantification of thermal-hydraulic system code

    International Nuclear Information System (INIS)

    Ha, Tae Wook; Jeong, Jae Jun; Choi, Ki Yong

    2017-01-01

    A thermal–hydraulic system code is an essential tool for the design and safety analysis of a nuclear power plant, and its accuracy quantification is very important for the code assessment and applications. The fast Fourier transform-based method (FFTBM) by signal mirroring (FFTBM-SM) has been used to quantify the accuracy of a system code by using a comparison of the experimental data and the calculated results. The method is an improved version of the FFTBM, and it is known that the FFTBM-SM judges the code accuracy in a more consistent and unbiased way. However, in some applications, unrealistic results have been obtained. In this study, it was found that accuracy quantification by FFTBM-SM is dependent on the frequency spectrum of the fast Fourier transform of experimental and error signals. The primary objective of this study is to reduce the frequency dependency of FFTBM-SM evaluation. For this, it was proposed to reduce the cut off frequency, which was introduced to cut off spurious contributions, in FFTBM-SM. A method to determine an appropriate cut off frequency was also proposed. The FFTBM-SM with the modified cut off frequency showed a significant improvement of the accuracy quantification
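
    A compact sketch of the FFT-based accuracy figure with signal mirroring: the error and experimental signals are mirrored to suppress the edge discontinuity, transformed, and an average-amplitude ratio is formed from the spectral magnitudes up to a cut-off frequency. The windowing details and the cut-off selection discussed above are simplified, and the function and test signals are illustrative rather than the reference implementation.

      import numpy as np

      def average_amplitude_sm(exp, calc, dt, f_cut):
          err = calc - exp
          exp_m = np.concatenate([exp, exp[::-1]])      # signal mirroring removes the
          err_m = np.concatenate([err, err[::-1]])      # discontinuity at the record end
          f = np.fft.rfftfreq(len(exp_m), d=dt)
          keep = f <= f_cut                             # discard spurious high-frequency content
          num = np.abs(np.fft.rfft(err_m))[keep].sum()
          den = np.abs(np.fft.rfft(exp_m))[keep].sum()
          return num / den                              # 0 means perfect agreement

      t = np.linspace(0.0, 100.0, 1001)
      experiment  = 15.0 * np.exp(-t / 40.0) + 1.0      # e.g. a measured pressure transient
      calculation = 15.5 * np.exp(-t / 37.0) + 1.0      # a slightly different code prediction
      print("AA =", round(average_amplitude_sm(experiment, calculation, dt=0.1, f_cut=1.0), 4))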

  8. Central Decoding for Multiple Description Codes based on Domain Partitioning

    Directory of Open Access Journals (Sweden)

    M. Spiertz

    2006-01-01

    Full Text Available Multiple Description Codes (MDC) can be used to trade redundancy against packet loss resistance for transmitting data over lossy diversity networks. In this work we focus on MD transform coding based on domain partitioning. Compared with Vaishampayan's quantizer-based MDC, domain-based MD coding is a simple approach for generating different descriptions, by using different quantizers for each description. Commonly, only the highest-rate quantizer is used for reconstruction. In this paper we investigate the benefit of using the lower-rate quantizers to enhance the reconstruction quality at the decoder side. The comparison is done on artificial source data and on image data.

  9. Design of Packet-Based Block Codes with Shift Operators

    Directory of Open Access Journals (Sweden)

    Jacek Ilow

    2010-01-01

    Full Text Available This paper introduces packet-oriented block codes for the recovery of lost packets and the correction of an erroneous single packet. Specifically, a family of systematic codes is proposed, based on a Vandermonde matrix applied to a group of k information packets to construct r redundant packets, where the elements of the Vandermonde matrix are bit-level right arithmetic shift operators. The code design is applicable to packets of any size, provided that the packets within a block of k information packets are of uniform length. In order to decrease the overhead associated with packet padding using shift operators, non-Vandermonde matrices are also proposed for designing packet-oriented block codes. An efficient matrix inversion procedure for the off-line design of the decoding algorithm is presented to recover lost packets. The error correction capability of the design is investigated as well. The decoding algorithm, based on syndrome decoding, to correct a single erroneous packet in a group of n=k+r received packets is presented. The paper is equipped with examples of codes using different parameters. The code designs and their performance are tested using Monte Carlo simulations; the results obtained exhibit good agreement with the corresponding theoretical results.
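
    An encoding-only sketch of the Vandermonde/shift-operator idea: redundant packet i is the XOR of the k information packets, with packet j right-shifted by i*j bits. Packets are modelled as Python integers, and the padding and decoding (matrix-inversion) steps described above are omitted; this illustrates the construction style under simplifying assumptions, not the paper's exact code design.

      from functools import reduce

      def encode(packets, r):
          # packets: k equal-length packets given as integers; returns r redundant
          # packets (right shifts here drop bits, which the real scheme avoids by padding).
          redundant = []
          for i in range(1, r + 1):
              shifted = [p >> (i * j) for j, p in enumerate(packets)]
              redundant.append(reduce(lambda a, b: a ^ b, shifted))
          return redundant

      info = [0xDEADBEEF, 0x01234567, 0x89ABCDEF, 0x0F0F0F0F]   # k = 4 information packets
      print([hex(p) for p in encode(info, r=2)])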

  10. Corium spreading: hydrodynamics, rheology and solidification of a high-temperature oxide melt

    International Nuclear Information System (INIS)

    Journeau, Ch.

    2006-06-01

    In the hypothesis of a nuclear reactor severe accident, the core could melt and form a high-temperature (2000-3000 K) mixture called corium. In the hypothesis of vessel rupture, this corium would spread in the reactor pit and adjacent rooms, as occurred at Chernobyl, or in a dedicated core-catcher as in the new European Pressurized Reactor (EPR). This thesis is dedicated to the experimental study of corium spreading, especially with the prototypic corium material experiments performed in the VULCANO facility at CEA Cadarache. The first step in analyzing these tests consists in interpreting the material analyses, with the help of thermodynamic modelling of corium solidification. Knowing, for each temperature, the phase repartition and composition, physical properties can be estimated. Spreading termination is controlled by the corium rheological properties in the solidification range, which leads to studying them in detail. The hydrodynamic, rheological and solidification aspects of corium spreading are taken into account in models and computer codes which have been validated against these tests and enable the assessment of the EPR spreading core-catcher concept. (author)

  11. Influence of Code Size Variation on the Performance of 2D Hybrid ZCC/MD in OCDMA System

    Directory of Open Access Journals (Sweden)

    Matem Rima.

    2018-01-01

    Full Text Available Several two-dimensional OCDMA codes have been developed to overcome many problems in optical networks, by enhancing cardinality, suppressing multiple access interference (MAI) and mitigating phase-induced intensity noise (PIIN). This paper proposes a new 2D hybrid ZCC/MD code combining 1D ZCC spectral encoding, of code length M, with 1D MD spatial spreading, of code length N. According to the numerical results, the spatial spreading code length N offers good cardinality and is the main factor enhancing the performance of the system, compared with the spectral code length M.

  12. Spreading of suppository bases assessed with histological and scintigraphic techniques

    International Nuclear Information System (INIS)

    Tupper, C.H.; Copping, N.; Thomas, N.W.; Wilson, C.G.

    1982-01-01

    Suppositories of PEG 15400 and PEG 600, Myrj 52 and Brij 35 were administered rectally to fasted male rats. Thirty and 60 min after the liquefaction time, samples of rectal mucosa were taken from treated and untreated rats. A reduction in rectal cell volume and density in the treated rats was noted. Similar suppositories, containing anion exchange resin and labelled with technetium-99, were administered to other rats. Serial scintiscanning was carried out using a gamma camera linked to a computer. Spreading of the suppository bases was assessed histologically and by imaging. (U.K.)

  13. Spectrum Sharing Based on a Bertrand Game in Cognitive Radio Sensor Networks

    Directory of Open Access Journals (Sweden)

    Biqing Zeng

    2017-01-01

    Full Text Available In studies of pricing-based power control and allocation, the utility of secondary users is usually considered from the perspective of the signal-to-noise ratio. Studying secondary-user utility from the perspective of communication demand can not only encourage secondary users to meet their maximum communication needs but also maximize the utilization of spectrum resources; however, research in this area is lacking. From the viewpoint of meeting the demand for network communication, this paper therefore designs a two-stage model to solve the spectrum leasing and allocation problem in cognitive radio sensor networks (CRSNs). In the first stage, the secondary base station collects the communication requirements of the secondary network and rents spectrum resources from several primary base stations, using a Bertrand game to model the transaction behaviour of the primary and secondary base stations. In the second stage, the subcarrier and power allocation problem of the secondary base station is defined as a nonlinear programming problem and solved based on Nash bargaining. The simulation results show that the proposed model can satisfy the communication requirements of each user in a fair and efficient way compared with other spectrum sharing schemes.

  14. A robust power spectrum split cancellation-based spectrum sensing method for cognitive radio systems

    International Nuclear Information System (INIS)

    Qi Pei-Han; Li Zan; Si Jiang-Bo; Gao Rui

    2014-01-01

    Spectrum sensing is an essential component in realizing cognitive radio, and the requirement for real-time spectrum sensing in the case of lacking prior information, fading channels, and noise uncertainty indeed poses a major challenge to the classical spectrum sensing algorithms. Based on the stochastic properties of a scalar transformation of the power spectral density (PSD), a novel spectrum sensing algorithm, referred to as the power spectral density split cancellation method (PSC), is proposed in this paper. The PSC makes use of a scalar value as a test statistic, which is the ratio of each subband power to the full-band power. Besides, by exploiting the asymptotic normality and independence of the Fourier transform, the distribution of the ratio and the mathematical expressions for the probabilities of false alarm and detection in different channel models are derived. Further, the exact closed-form expression of the decision threshold is calculated in accordance with the Neyman-Pearson criterion. Analytical and simulation results show that the PSC is invulnerable to noise uncertainty, and can achieve excellent detection performance without prior knowledge in additive white Gaussian noise and flat slow-fading channels. In addition, the PSC benefits from a low computational cost, and can be completed in microseconds. (interdisciplinary physics and related areas of science and technology)

  15. A robust power spectrum split cancellation-based spectrum sensing method for cognitive radio systems

    Science.gov (United States)

    Qi, Pei-Han; Li, Zan; Si, Jiang-Bo; Gao, Rui

    2014-12-01

    Spectrum sensing is an essential component in realizing cognitive radio, and the requirement for real-time spectrum sensing in the case of lacking prior information, fading channels, and noise uncertainty indeed poses a major challenge to the classical spectrum sensing algorithms. Based on the stochastic properties of a scalar transformation of the power spectral density (PSD), a novel spectrum sensing algorithm, referred to as the power spectral density split cancellation method (PSC), is proposed in this paper. The PSC makes use of a scalar value as a test statistic, which is the ratio of each subband power to the full-band power. Besides, by exploiting the asymptotic normality and independence of the Fourier transform, the distribution of the ratio and the mathematical expressions for the probabilities of false alarm and detection in different channel models are derived. Further, the exact closed-form expression of the decision threshold is calculated in accordance with the Neyman-Pearson criterion. Analytical and simulation results show that the PSC is invulnerable to noise uncertainty, and can achieve excellent detection performance without prior knowledge in additive white Gaussian noise and flat slow-fading channels. In addition, the PSC benefits from a low computational cost, and can be completed in microseconds.
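
    A minimal sketch of the split-cancellation statistic: the estimated power spectral density is divided into subbands and the ratio of each subband power to the full-band power is used as the test quantity. The subband count, the toy signals and the absence of the closed-form Neyman-Pearson threshold are simplifications for illustration only.

      import numpy as np

      def subband_ratios(x, n_sub=8):
          # Ratio of each subband power to the full-band power; under noise only
          # each ratio is close to 1/n_sub.
          psd = np.abs(np.fft.fft(x)) ** 2
          return psd.reshape(n_sub, -1).sum(axis=1) / psd.sum()

      rng = np.random.default_rng(2)
      N = 4096
      noise = rng.normal(size=N)
      tone = 0.4 * np.sin(2 * np.pi * 0.11 * np.arange(N))      # primary user in one subband
      for label, x in [("H0 noise only     ", noise), ("H1 signal present ", noise + tone)]:
          print(label, "max subband ratio =", round(float(subband_ratios(x).max()), 3))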

  16. Simulation of the spatial resolution of an X-ray imager based on zinc oxide nanowires in an anodic aluminium oxide membrane by using MCNP and OPTICS codes

    Science.gov (United States)

    Samarin, S. N.; Saramad, S.

    2018-05-01

    The spatial resolution of a detector is a very important parameter for X-ray imaging. A bulk scintillation detector does not have good spatial resolution because of the spreading of light inside the scintillator. Nanowire scintillators, because of their wave-guiding behaviour, can prevent the spreading of light and can improve the spatial resolution of traditional scintillation detectors. The zinc oxide (ZnO) scintillator nanowire, with its simple construction by electrochemical deposition in the regular hexagonal structure of an anodic aluminium oxide membrane, has many advantages. The three-dimensional absorption of X-ray energy in the ZnO scintillator is simulated by a Monte Carlo transport code (MCNP). The transport, attenuation and scattering of the generated photons are simulated by a general-purpose scintillator light response simulation code (OPTICS). The results are compared with a previous publication which used a simulation code for the passage of particles through matter (Geant4). The results verify that this scintillator nanowire structure has a spatial resolution of less than one micrometer.

  17. Hybrid Video Coding Based on Bidimensional Matching Pursuit

    Directory of Open Access Journals (Sweden)

    Lorenzo Granai

    2004-12-01

    Full Text Available Hybrid video coding combines two stages: first, motion estimation and compensation predict each frame from the neighboring frames, then the prediction error is coded, reducing the correlation in the spatial domain. In this work, we focus on the latter stage, presenting a scheme that profits from some of the features introduced by the standard H.264/AVC for motion estimation and replaces the transform in the spatial domain. The prediction error is then coded using the matching pursuit algorithm, which decomposes the signal over an appositely designed bidimensional, anisotropic, redundant dictionary. Comparisons are made among the proposed technique, H.264, and a DCT-based coding scheme. Moreover, we introduce fast techniques for atom selection, which exploit the spatial localization of the atoms. An adaptive coding scheme aimed at optimizing the resource allocation is also presented, together with a rate-distortion study for the matching pursuit algorithm. Results show that the proposed scheme outperforms the standard DCT, especially at very low bit rates.
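
    For readers unfamiliar with matching pursuit, the greedy decomposition itself is compact. The sketch below shows the generic 1D algorithm over a random overcomplete dictionary; the anisotropic bidimensional dictionary and the H.264-style prediction loop used in the paper are not reproduced here, and all names and sizes are illustrative.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy matching pursuit: at each step pick the dictionary atom with
    the largest inner product with the residual, subtract its contribution,
    and repeat.  'dictionary' holds unit-norm atoms as rows."""
    residual = signal.astype(float).copy()
    coeffs, indices = [], []
    for _ in range(n_atoms):
        correlations = dictionary @ residual
        k = int(np.argmax(np.abs(correlations)))
        c = correlations[k]
        residual -= c * dictionary[k]
        coeffs.append(c)
        indices.append(k)
    return indices, coeffs, residual

# Toy 1D example with a random overcomplete dictionary (256 atoms, 64 samples).
rng = np.random.default_rng(1)
D = rng.normal(size=(256, 64))
D /= np.linalg.norm(D, axis=1, keepdims=True)
x = 3.0 * D[10] - 1.5 * D[200] + 0.01 * rng.normal(size=64)
idx, c, r = matching_pursuit(x, D, n_atoms=5)
print(idx[:2], np.linalg.norm(r))   # the two planted atoms are recovered first
```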

  18. Nonlinear theory of the collisional Rayleigh-Taylor instability in equatorial spread F

    International Nuclear Information System (INIS)

    Chaturvedi, P.K.; Ossakow, S.L.

    1977-01-01

    The nonlinear behavior of the collisional Rayleigh-Taylor instability is studied in equatorial Spread F by including a dominant two-dimensional nonlinearity. It is found that on account of this nonlinearity the instability saturates by generating damped higher spatial harmonics. The saturated power spectrum for the density fluctuations is discussed. A comparison between experimental observations and theory is presented

  19. Performance of code 'FAIR' in IAEA CRP on FUMEX

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.; Kakodkar, A.

    1996-01-01

    A modern fuel performance analysis code, FAIR, has been developed for analysing high-burnup fuel pins of water/heavy water cooled reactors. The code employs the finite element method for modelling the thermo-mechanical behaviour of fuel pins and mechanistic models for the various physical and chemical phenomena affecting the behaviour of nuclear reactor fuel pins. High-burnup effects such as pellet thermal conductivity degradation, enhanced fission gas release and radial flux redistribution are incorporated in the code FAIR. The code FAIR is capable of performing statistical analysis of fuel pins using the Monte Carlo technique. The code is implemented on the BARC parallel processing system ANUPAM. The code has recently participated in an International Atomic Energy Agency (IAEA) coordinated research programme (CRP) on fuel modelling at extended burnups (FUMEX). Nineteen agencies from different countries participated in this exercise. In this CRP, spread over a period of three years, a number of high-burnup fuel pins irradiated at the Halden reactor were analysed. The first phase of the CRP was a blind code comparison exercise, where the computed results were compared with experimental results. The second phase consisted of modifications to the code based on the experimental results of the first phase and statistical analysis of fuel pins. The performance of the code FAIR in this CRP has been very good. The present report highlights the main features of code FAIR and its performance in the IAEA CRP on FUMEX. 14 refs., 5 tabs., ills

  20. Protograph-Based Raptor-Like Codes

    Science.gov (United States)

    Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.

    2014-01-01

    Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that, in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible punctured turbo (RCPT) codes did not outperform convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a small number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, their strength does not scale with the blocklength for a fixed number of trellis states.

  1. Competing spreading processes on multiplex networks: awareness and epidemics.

    Science.gov (United States)

    Granell, Clara; Gómez, Sergio; Arenas, Alex

    2014-07-01

    Epidemic-like spreading processes on top of multilayered interconnected complex networks reveal a rich phase diagram of intertwined competition effects. A recent study by the authors [C. Granell et al., Phys. Rev. Lett. 111, 128701 (2013)] presented an analysis of the interrelation between two processes accounting for the spreading of an epidemic and the spreading of information awareness to prevent infection on top of multiplex networks. The results, in the case in which awareness implies total immunization to the disease, revealed the existence of a metacritical point at which the critical onset of the epidemic starts, depending on the completion of the awareness process. Here we present a full analysis of these critical properties in the more general scenario where awareness spreading does not imply total immunization, and where infection does not imply immediate awareness of it. We find the critical relation between the two competing processes for a wide spectrum of parameters representing the interaction between them. We also analyze the consequences of a massive broadcast of awareness (mass media) on the final outcome of the epidemic incidence. Importantly, the mass media make the metacritical point disappear. The results reveal that the main finding, i.e., the existence of a metacritical point, is rooted in the competition principle and holds for a large set of scenarios.

  2. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  3. Distance Dependent Model for the Delay Power Spectrum of In-room Radio Channels

    DEFF Research Database (Denmark)

    Steinböck, Gerhard; Pedersen, Troels; Fleury, Bernard Henri

    2013-01-01

    A model based on experimental observations of the delay power spectrum in closed rooms is proposed. The model includes the distance between the transmitter and the receiver as a parameter, which makes it suitable for range-based radio localization. The experimental observations motivate the proposed model of the delay power spectrum with a primary (early) component and a reverberant component (tail). The primary component is modeled as a Dirac delta function weighted according to an inverse distance power law (d^-n). The reverberant component is an exponentially decaying function with onset equal to the propagation time between transmitter and receiver. Its power decays exponentially with distance. The proposed model allows for the prediction of e.g. the path loss, mean delay, root mean squared (rms) delay spread, and kurtosis versus the distance. The model predictions are validated by measurements...
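
    A hedged numerical sketch of such a distance-dependent delay power spectrum model is given below: a discretised Dirac (primary) component weighted by d^-n at the propagation delay, plus an exponential reverberant tail whose power also decays exponentially with distance. The constants (G0, T, P0, kd) and the sampling grid are placeholders, not the values fitted from the measurements in the paper.

```python
import numpy as np

C = 3e8  # speed of light [m/s]

def delay_power_spectrum(tau, d, n=2.0, G0=1e-6, T=20e-9, P0=1e-7, kd=0.1):
    """Sampled delay power spectrum at Tx-Rx distance d [m]: a primary
    component following an inverse distance power law (d^-n) at the
    propagation delay d/C, plus an exponentially decaying reverberant tail
    starting at the same onset, with power decaying exponentially in d."""
    tau0 = d / C                                   # onset (propagation) delay
    dt = tau[1] - tau[0]
    spectrum = np.zeros_like(tau)
    i0 = int(np.searchsorted(tau, tau0))
    if i0 < len(tau):
        spectrum[i0] += G0 * d ** (-n) / dt        # discretised Dirac component
        spectrum[i0:] += P0 * np.exp(-kd * d) * np.exp(-(tau[i0:] - tau0) / T)
    return spectrum

tau = np.arange(0, 400e-9, 1e-9)
P = delay_power_spectrum(tau, d=5.0)
mean_delay = np.sum(tau * P) / np.sum(P)           # e.g. predicted mean delay
print(mean_delay)
```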

  4. Effect of AC electric fields on flame spread over electrical wire

    KAUST Repository

    Kim, Minkuk

    2011-01-01

    The effect of electric fields on the characteristics of flame spread over insulated electrical wire has been investigated experimentally by varying the AC voltage and frequency applied to the wire under normal gravity conditions. A polyethylene (PE) insulated electrical wire was placed horizontally on electrically non-conducting posts and one end of the wire was connected to the high-voltage terminal; the electrical system is thus a single-electrode configuration. The wire was ignited at one end and the flame spread rate along the wire was measured from images recorded with a video camera. Two distinct regimes existed depending on the applied AC frequency. In the low-frequency regime, the flame spread rate decreased with frequency and voltage. In the high-frequency regime, it initially decreased with voltage and then increased; at high frequency, the spread rate even exceeded that measured without applied electric fields. This result implies that fire safety codes developed without considering the effect of electric fields may require modifications. © 2010 Published by Elsevier Inc. on behalf of The Combustion Institute. All rights reserved.

  5. PERMUTATION-BASED POLYMORPHIC STEGO-WATERMARKS FOR PROGRAM CODES

    Directory of Open Access Journals (Sweden)

    Denys Samoilenko

    2016-06-01

    Full Text Available Purpose: One of the most topical trends in program code protection is code marking. The problem consists in creating digital “watermarks” which allow distinguishing different copies of the same program code. Such marks can be useful for authorship protection, for numbering code copies, for monitoring program propagation, and for information security purposes in client-server communication processes. Methods: We used methods of digital steganography adapted to program codes as text objects. The same-shape-symbols method was transformed into a same-semantic-element method, owing to features of program code which make it different from ordinary text. We use a dynamic principle of mark formation, which makes the marked codes polymorphic. Results: We examined the combinatorial capacity of the permutations possible in program codes. It was shown that a set of 5-7 polymorphic variables is sufficient for most modern network applications. Mark creation and restoration algorithms were proposed and discussed. The main algorithm is based on full and partial permutations of variable names and their declaration order. The algorithm for partial permutation enumeration was optimized for computational complexity. PHP code fragments which realize the algorithms are listed. Discussion: The method proposed in this work allows each client-server connection to be distinguished. If a clone of some network resource is found, the method can reveal the included marks and thereby provide the IP address, date and time, and authentication information of the client that copied the resource. The use of polymorphic stego-watermarks should improve information security in network communications.
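
    One way to realize such a permutation-based mark, sketched below in Python rather than PHP, is to map an integer identifier to a permutation of the variable declaration order through the factorial number system (Lehmer code) and to invert the mapping on extraction. The variable names and the specific encoding are illustrative assumptions, not the authors' exact algorithm.

```python
from math import factorial

def mark_to_permutation(mark, items):
    """Encode an integer mark as a permutation of 'items' (e.g. variable
    names) using the factorial number system (Lehmer code)."""
    items = list(items)
    n = len(items)
    assert 0 <= mark < factorial(n), "mark exceeds permutation capacity"
    perm = []
    for i in range(n, 0, -1):
        f = factorial(i - 1)
        idx, mark = divmod(mark, f)
        perm.append(items.pop(idx))
    return perm

def permutation_to_mark(perm, original_order):
    """Recover the integer mark from the observed declaration order."""
    items = list(original_order)
    mark = 0
    for i, name in enumerate(perm):
        idx = items.index(name)
        mark += idx * factorial(len(perm) - 1 - i)
        items.pop(idx)
    return mark

# 5-7 polymorphic variables give 120-5040 distinguishable copies.
variables = ["$user", "$host", "$port", "$token", "$path"]   # hypothetical names
perm = mark_to_permutation(42, variables)
print(perm, permutation_to_mark(perm, variables))            # mark 42 round-trips
```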

  6. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Science.gov (United States)

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

  7. Up-date of the BCG code library

    International Nuclear Information System (INIS)

    Caldeira, A.D.; Garcia, R.D.M.

    1990-01-01

    Procedures for generating an updated material library for the BCG code were established. A new library was generated by processing ENDF/B-IV data with the 89-1 version of the LINEAR, RECENT and SIGMA1 programs. The effect of the library change on the neutron spectrum and effective multiplication factor of a fast reactor cell was analysed. During the course of this study, an error was detected in the BCG code. Although localized in a narrow energy range, the discrepancies in the neutron spectrum caused by the error were large enough to yield a difference of about 1% in the effective multiplication factor of the test cell. (author)

  8. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    Science.gov (United States)

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements, because of severely increasing computation complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. The previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which were not appropriate for finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. Then, the proposed ERJND model is extended to two learning-based just-noticeable-quantization-distortion (JNQD) models as preprocessing that can be applied for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD based on extracted handcrafted features. The other JNQD model is based on a convolutional neural network (CNN), called CNN-JNQD. To the best of our knowledge, our paper is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.

  9. Simulation of melt spreading in consideration of phase transitions

    Energy Technology Data Exchange (ETDEWEB)

    Spengler, C. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Koeln (Germany)

    2002-07-01

    The analysis of melt spreading and relocation phenomena in the containment of LWR power plants in case of hypothetical severe accidents leading to core melting is an important issue for reactor safety investigations. For the simulation of melt spreading the code LAVA has been developed on the basis of a method from the related subject of volcanology by adding more detailed models for heat transfer phenomena and flow rheology. The development is supported by basic analysis of the spreading of gravity currents as well as experimental investigations of the rheology of solidifying melts. These exhibit strong non-Newtonian effects in case of a high content of solids in the freezing melt. The basic model assumption in LAVA is the ideal Bingham plastic approach to the non-Newtonian, shear-thinning characteristic of solidifying melts. For the recalculation of melt spreading experiments, the temperature-dependent material properties for solidifying melt mixtures have been calculated using correlations from the literature. With the parameters and correlations for the rheological material properties approached by results from literature, it was possible to recalculate successfully recent spreading experiments with simulant materials and prototypic reactor core materials. An application to the behaviour of core melt in the reactor cavity assumed a borderline case for the issue of spreading. This limit is represented by melt conditions (large solid fraction, low volume flux), under which the melt is hardly spreadable. Due to the persistent volume flux the reactor cavity is completely, but inhomogeneously filled with melt. The degree of inhomogeneity is rather small, so it is concluded, that for the long-term coolability of a melt pool in narrow cavities the spreading of melt will probably have only negligible influence. (orig.)

  10. Edge-preserving Intra Depth Coding based on Context-coding and H.264/AVC

    DEFF Research Database (Denmark)

    Zamarin, Marco; Salmistraro, Matteo; Forchhammer, Søren

    2013-01-01

    Depth map coding plays a crucial role in 3D video communication systems based on the “Multi-view Video plus Depth” representation, as view synthesis performance is strongly affected by the accuracy of depth information, especially at edges in the depth map image. In this paper an efficient algorithm for edge-preserving intra depth compression based on H.264/AVC is presented. The proposed method introduces a new Intra mode specifically targeted to depth macroblocks with arbitrarily shaped edges, which are typically not efficiently represented by the DCT. Edge macroblocks are partitioned into two regions, each approximated by a flat surface. Edge information is encoded by means of context-coding with an adaptive template. As a novel element, the proposed method allows exploiting the edge structure of previously encoded edge macroblocks during the context-coding step to further increase compression...

  11. A Danish population-based twin study on autism spectrum disorders

    DEFF Research Database (Denmark)

    Nordenbaek, Claudia; Jorgensen, Meta; Kyvik, Kirsten Ohm

    2014-01-01

    Genetic epidemiological studies of Autism Spectrum Disorders (ASDs) based on twin pairs ascertained from the population and thoroughly assessed to obtain a high degree of diagnostic validity are few. All twin pairs aged 3-14 years in the nationwide Danish Twin Registry were approached. A three-step procedure was used. Five items from the "Child Behaviour Checklist" (CBCL) were used in the first screening phase, while screening in the second phase included the "Social and Communication Questionnaire" and the "Autism Spectrum Screening Questionnaire". The final clinical assessment was based on "gold...

  12. An accurate evaluation of the performance of asynchronous DS-CDMA systems with zero-correlation-zone coding in Rayleigh fading

    Science.gov (United States)

    Walker, Ernest; Chen, Xinjia; Cooper, Reginald L.

    2010-04-01

    An arbitrarily accurate approach is used to determine the bit-error rate (BER) performance of generalized asynchronous DS-CDMA systems in Gaussian noise with Rayleigh fading. In this paper, and the sequel, new theoretical work is contributed which substantially enhances existing performance analysis formulations. Major contributions include: substantial computational complexity reduction, including a priori BER accuracy bounding; and an analytical approach that facilitates performance evaluation for systems with arbitrary spectral spreading distributions and non-uniform transmission delay distributions. Using prior results, augmented by these enhancements, a generalized DS-CDMA system model is constructed and used to evaluate the BER performance in a variety of scenarios. In this paper, the generalized system model was used to evaluate the performance of both Walsh-Hadamard (WH) and Walsh-Hadamard-seeded zero-correlation-zone (WH-ZCZ) coding. The selection of these codes was informed by the observation that WH codes contain N spectral spreading values (0 to N - 1), one for each code sequence, while WH-ZCZ codes contain only two spectral spreading values (N/2 - 1, N/2), where N is the sequence length in chips. Since these codes span the spectral spreading range for DS-CDMA coding, by invoking an induction argument, the generalization of the system model is sufficiently supported. The results in this paper, and the sequel, support the claim that an arbitrarily accurate performance analysis for DS-CDMA systems can be evaluated over the full range of binary coding with minimal computational complexity.
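
    As a small illustration of the codes discussed above, the following sketch generates Walsh-Hadamard spreading sequences by the Sylvester construction and verifies their mutual orthogonality at zero lag; the WH-ZCZ seeding procedure of the paper is not reproduced, and the code length is an illustrative choice.

```python
import numpy as np

def walsh_hadamard(n):
    """Sylvester construction of an n x n Walsh-Hadamard matrix
    (n must be a power of two); each row is one spreading sequence."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = walsh_hadamard(8)
# Rows are mutually orthogonal: H @ H.T = n * I, i.e. zero cross-correlation
# between distinct codes at zero lag (synchronous users).
print(np.allclose(H @ H.T, 8 * np.eye(8)))
```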

  13. Experimental data bases useful for quantification of model uncertainties in best estimate codes

    International Nuclear Information System (INIS)

    Wilson, G.E.; Katsma, K.R.; Jacobson, J.L.; Boodry, K.S.

    1988-01-01

    A data base is necessary for assessment of thermal hydraulic codes within the context of the new NRC ECCS Rule. Separate effect tests examine particular phenomena that may be used to develop and/or verify models and constitutive relationships in the code. Integral tests are used to demonstrate the capability of codes to model global characteristics and sequence of events for real or hypothetical transients. The nuclear industry has developed a large experimental data base of fundamental nuclear, thermal-hydraulic phenomena for code validation. Given a particular scenario, and recognizing the scenario's important phenomena, selected information from this data base may be used to demonstrate applicability of a particular code to simulate the scenario and to determine code model uncertainties. LBLOCA experimental data bases useful to this objective are identified in this paper. 2 tabs

  14. Variable Rate, Adaptive Transform Tree Coding Of Images

    Science.gov (United States)

    Pearlman, William A.

    1988-10-01

    A tree code, asymptotically optimal for stationary Gaussian sources and squared error distortion [2], is used to encode transforms of image sub-blocks. The variance spectrum of each sub-block is estimated and specified uniquely by a set of one-dimensional auto-regressive parameters. The expected distortion is set to a constant for each block and the rate is allowed to vary to meet the given level of distortion. Since the spectrum and rate are different for every block, the code tree differs for every block. Coding simulations for target block distortion of 15 and average block rate of 0.99 bits per pel (bpp) show that very good results can be obtained at high search intensities at the expense of high computational complexity. The results at the higher search intensities outperform a parallel simulation with quantization replacing tree coding. Comparative coding simulations also show that the reproduced image with variable block rate and average rate of 0.99 bpp has 2.5 dB less distortion than a similarly reproduced image with a constant block rate equal to 1.0 bpp.

  15. Hamming Code Based Watermarking Scheme for 3D Model Verification

    Directory of Open Access Journals (Sweden)

    Jen-Tse Wang

    2014-01-01

    Full Text Available Due to the explosive growth of the Internet and the maturing of 3D hardware techniques, protecting 3D objects becomes a more and more important issue. In this paper, a public Hamming-code-based fragile watermarking technique is proposed for 3D object verification. An adaptive watermark is generated from each cover model by using the Hamming code technique. A simple least significant bit (LSB) substitution technique is employed for watermark embedding. In the extraction stage, the Hamming-code-based watermark can be verified by using the Hamming code check without embedding any verification information. Experimental results show that 100% of the vertices of the cover model can be watermarked, extracted, and verified. They also show that the proposed method can improve security and achieve low distortion of the stego object.
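
    A minimal sketch of the two building blocks named in the abstract, Hamming coding and LSB substitution, is given below; the (7,4) code, the toy "vertex" values, and the embedding order are illustrative assumptions, not the paper's exact adaptive watermark generation.

```python
import numpy as np

# Generator and parity-check matrices of the (7,4) Hamming code.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def hamming_encode(nibble):
    """Encode 4 data bits into a 7-bit Hamming codeword."""
    return (np.array(nibble) @ G) % 2

def hamming_check(word):
    """Return the syndrome; an all-zero syndrome means the word verifies."""
    return (H @ np.array(word)) % 2

def embed_lsb(values, bits):
    """Embed watermark bits into the least significant bit of quantised
    vertex coordinates (represented here as plain integers)."""
    return [(v & ~1) | int(b) for v, b in zip(values, bits)]

codeword = hamming_encode([1, 0, 1, 1])
vertices = [120, 57, 200, 33, 78, 91, 64]            # toy quantised coordinates
stego = embed_lsb(vertices, codeword)
extracted = [v & 1 for v in stego]
print(codeword, extracted, hamming_check(extracted))  # zero syndrome -> verified
```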

  16. On the potential of zero-tail DFT-spread-OFDM in 5G networks

    DEFF Research Database (Denmark)

    Berardinelli, Gilberto; Tavares, Fernando Menezes Leitão; Sørensen, Troels Bundgaard

    2014-01-01

    Zero-tail Discrete Fourier Transform-spread OFDM (ZT DFT-s-OFDM) modulation allows coping dynamically with the delay spread of the multipath channel, thus avoiding the limitations of a hard-coded Cyclic Prefix (CP). In this paper, we discuss the potential of ZT DFT-s-OFDM modulation for the envisioned 5G radio access, including the possibility of adopting a unified radio numerology among different cells, reduced latency and support of agile link direction switching. The robustness of ZT DFT-s-OFDM towards non-idealities such as phase noise and non-linear power amplifiers is also discussed.
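
    The zero-tail idea can be illustrated in a few lines: zeros placed at the head and tail of the DFT-spread block turn into a low-power tail of the time-domain symbol, whose length can be sized to the expected delay spread instead of a fixed CP. The sketch below, with illustrative FFT size, zero-tail length and QPSK payload, prints the tail-to-average power ratio in dB; it is a simplified model, not the exact transmitter of the paper.

```python
import numpy as np

def zt_dft_s_ofdm_symbol(data_syms, n_fft=256, n_zero_head=2, n_zero_tail=16):
    """Build one zero-tail DFT-spread-OFDM symbol (minimal sketch): zeros
    appended at the head/tail of the DFT-spread block produce a low-power
    tail in the time domain, which replaces a hard-coded CP."""
    block = np.concatenate([np.zeros(n_zero_head, complex),
                            data_syms,
                            np.zeros(n_zero_tail, complex)])
    spread = np.fft.fft(block) / np.sqrt(len(block))   # DFT spreading
    grid = np.zeros(n_fft, complex)
    grid[:len(spread)] = spread                        # localized subcarrier mapping
    return np.fft.ifft(grid) * np.sqrt(n_fft), len(block)

rng = np.random.default_rng(2)
qpsk = (rng.choice([-1, 1], 46) + 1j * rng.choice([-1, 1], 46)) / np.sqrt(2)
sym, m = zt_dft_s_ofdm_symbol(qpsk)
tail = sym[-int(256 * 16 / m):]                        # samples forming the zero tail
ratio_db = 10 * np.log10(np.mean(np.abs(tail) ** 2) / np.mean(np.abs(sym) ** 2))
print(ratio_db)                                        # strongly negative: low-power tail
```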

  17. UNSPEC: revisited (semaphore code)

    International Nuclear Information System (INIS)

    Neifert, R.D.

    1981-01-01

    The UNSPEC code is used to solve the problem of unfolding an observed x-ray spectrum given the response matrix of the measuring system and the measured signal values. UNSPEC uses an iterative technique to solve the unfold problem. Due to experimental errors in the measured signal values and/or computer round-off errors, discontinuities and oscillatory behavior may occur in the iterated spectrum. These can be suppressed by smoothing the results after each iteration. Input/output options and control cards are explained; sample input and output are provided
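
    The abstract describes an iterative unfold with smoothing applied after each iteration to suppress oscillations; a generic multiplicative unfolding loop of that flavour is sketched below. It is not UNSPEC's actual iteration, and the toy response matrix, measured signals and smoothing kernel are illustrative.

```python
import numpy as np

def iterative_unfold(response, measured, n_iter=50, smooth=True):
    """Multiplicative iterative unfolding (a common scheme, not necessarily
    the one used in UNSPEC): update the trial spectrum so that the re-folded
    signals approach the measured ones, with optional smoothing after each
    iteration to damp discontinuities and oscillatory behaviour."""
    spectrum = np.ones(response.shape[1])
    for _ in range(n_iter):
        predicted = response @ spectrum
        correction = (response.T @ (measured / np.maximum(predicted, 1e-12))) \
                     / np.maximum(response.sum(axis=0), 1e-12)
        spectrum *= correction
        if smooth:   # 3-point moving average suppresses oscillations
            spectrum = np.convolve(spectrum, [0.25, 0.5, 0.25], mode="same")
    return spectrum

# Toy example: 5 detector channels, 8 spectral bins.
rng = np.random.default_rng(3)
R = np.abs(rng.normal(size=(5, 8)))
true = np.array([0, 1, 3, 5, 4, 2, 1, 0], float)
m = R @ true
print(iterative_unfold(R, m).round(2))
```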

  18. ZAKI a windows-based k0 standardization code for in-core INAA

    CERN Document Server

    Ojo, J O

    2002-01-01

    A new computer code, ZAKI, for k0-based INAA standardization, written in Visual Basic for the WINDOWS environment, is described. The parameter alpha, measuring the deviation of the epithermal neutron spectrum shape from the ideal 1/E shape, and the thermal-to-epithermal flux ratio f are monitored at each irradiation position for each irradiation using the "triple bare monitor with k0" technique. Stability of the irradiation position with respect to alpha and f is therefore assumed only for the duration of the irradiation. This makes it possible to use k0 standardization even for in-core reactor irradiation channels without the a priori knowledge of alpha and f values required by existing commercial software. ZAKI is considerably versatile and contains features which allow the use of several detectors at different counting geometries, direct input of peak search output from GeniePc, and automatic nuclide identification of all gamma lines using a built-in library. Sample results for ...

  19. Optimal interference code based on machine learning

    Science.gov (United States)

    Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua

    2016-10-01

    In this paper, we analyze the characteristics of pseudo-random codes, taking the m-sequence as a case study. Starting from coding theory, we introduce the jamming methods and simulate the interference effect and its probability model in MATLAB. Based on the length of decoding time the adversary spends, we find the optimal formula and optimal coefficients using machine learning, and thereby obtain a new optimal interference code. First, in the recognition phase, this study judges the effect of interference by simulating the length of time over the decoding period of a laser seeker. Then, we use laser active deception jamming to simulate the interference process in the tracking phase. In order to improve the performance of the interference, this paper simulates the model in MATLAB. We find the least number of pulse intervals that must be received, from which we can determine the precise interval number of the laser pointer for m-sequence encoding. In order to find the shortest spacing, we choose the greatest common divisor method. Then, combining this with the coding regularity found before, we restore the pulse intervals of the pseudo-random code that has already been received. Finally, we can control the time period of the laser interference, obtain the optimal interference code, and increase the probability of successful interference.
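
    Since the analysis starts from m-sequences, a minimal linear feedback shift register (LFSR) generator is sketched below; the feedback taps and the seed are illustrative choices, not the specific sequence studied in the paper.

```python
def m_sequence(taps, n_bits, seed=1):
    """Generate a maximal-length (m-)sequence with a Fibonacci LFSR.
    Feedback taps at stages 5 and 2 form a maximal-length configuration
    for a 5-stage register, giving period 2^5 - 1 = 31."""
    state = seed
    out = []
    for _ in range(n_bits):
        out.append(state & 1)                    # output the last stage
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1         # XOR of the tapped stages
        state = (state >> 1) | (fb << (max(taps) - 1))
    return out

seq = m_sequence(taps=[5, 2], n_bits=62)
print(seq[:31])
print(seq[:31] == seq[31:])   # True: the sequence repeats with period 31
```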

  20. Rey: a computer code for the determination of the radionuclides activities from the gamma-ray spectrum data

    International Nuclear Information System (INIS)

    Palomares, J.; Perez, A.; Travesi, A.

    1978-01-01

    The Fortran IV computer code REY (REsolution and Identification) has been developed for the automatic resolution of gamma-ray spectra from high-resolution Ge-Li detectors. The code searches for the full-energy peaks in the spectrum, estimates the background as the base line under each peak, and calculates the energy of the statistically significant peaks. The code also assigns each peak to the most probable isotope and makes a selection of all the possible radioisotopes in the spectrum according to the relative intensities of all the peaks in the whole spectrum. Finally, it obtains the activities, in microcuries, of each isotope according to the geometry used in the measurement. Although the code is a general-purpose one, its present library of nuclear data is adapted to the analysis of liquid effluents from nuclear power plants. A computer with a 16 K core memory and a hard disk is sufficient for this code. (author)

  1. Fast neutron analysis code SAD1

    International Nuclear Information System (INIS)

    Jung, M.; Ott, C.

    1985-01-01

    A listing and an example of the outputs of the M.C. code SAD1 are given here. This code has been used many times to predict the response of hydrogenic materials (in our case emulsions or plastics) to fast neutrons through elastic n,p scattering. It can easily be extended to other such materials and to any kind of incident fast-neutron spectrum.

  2. Improving the physical layer security of wireless communication networks using spread spectrum coding and artificial noise approach

    CSIR Research Space (South Africa)

    Adedeji, K

    2016-09-01

    Full Text Available at the application layer to protect the messages against eavesdropping. However, the evolution of strong deciphering mechanisms has made conventional cryptography-based security techniques ineffective against attacks from an intruder.

  3. [Restoration filtering based on projection power spectrum for single-photon emission computed tomography].

    Science.gov (United States)

    Kubo, N

    1995-04-01

    To improve the quality of single-photon emission computed tomographic (SPECT) images, a restoration filter has been developed. This filter was designed according to practical "least squares filter" theory. It is necessary to know the object power spectrum and the noise power spectrum. The power spectrum is estimated from the power spectrum of a projection, when the high-frequency power spectrum of a projection is adequately approximated as a polynomial exponential expression. A study of the restoration with the filter based on a projection power spectrum was conducted, and compared with that of the "Butterworth" filtering method (cut-off frequency of 0.15 cycles/pixel), and "Wiener" filtering (signal-to-noise power spectrum ratio was a constant). Normalized mean-squared errors (NMSE) of the phantom, two line sources located in a 99mTc filled cylinder, were used. NMSE of the "Butterworth" filter, "Wiener" filter, and filtering based on a power spectrum were 0.77, 0.83, and 0.76 respectively. Clinically, brain SPECT images utilizing this new restoration filter improved the contrast. Thus, this filter may be useful in diagnosis of SPECT images.

  4. Restoration filtering based on projection power spectrum for single-photon emission computed tomography

    International Nuclear Information System (INIS)

    Kubo, Naoki

    1995-01-01

    To improve the quality of single-photon emission computed tomographic (SPECT) images, a restoration filter has been developed. This filter was designed according to practical 'least squares filter' theory. It is necessary to know the object power spectrum and the noise power spectrum. The power spectrum is estimated from the power spectrum of a projection, when the high-frequency power spectrum of a projection is adequately approximated as a polynomial exponential expression. A study of the restoration with the filter based on a projection power spectrum was conducted, and compared with that of the 'Butterworth' filtering method (cut-off frequency of 0.15 cycles/pixel), and 'Wiener' filtering (signal-to-noise power spectrum ratio was a constant). Normalized mean-squared errors (NMSE) of the phantom, two line sources located in a 99m Tc filled cylinder, were used. NMSE of the 'Butterworth' filter, 'Wiener' filter, and filtering based on a power spectrum were 0.77, 0.83, and 0.76 respectively. Clinically, brain SPECT images utilizing this new restoration filter improved the contrast. Thus, this filter may be useful in diagnosis of SPECT images. (author)
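
    The "least squares" restoration described in these two records amounts to a frequency-domain filter of the Wiener form S_signal/(S_signal + S_noise), with the signal power spectrum estimated from the projection itself. The sketch below uses a crude subtraction estimate and a constant noise power instead of the polynomial-exponential fit used by the author; the phantom, noise level and all parameter values are illustrative.

```python
import numpy as np

def restoration_filter(projection, noise_power):
    """Frequency-domain least-squares (Wiener-type) restoration:
    F = S_signal / (S_signal + S_noise), with the signal power spectrum
    estimated from the projection's own power spectrum."""
    P = np.fft.fft2(projection)
    power = np.abs(P) ** 2 / projection.size        # projection power spectrum
    s_signal = np.maximum(power - noise_power, 0.0)  # crude object-spectrum estimate
    F = s_signal / (s_signal + noise_power + 1e-12)
    return np.real(np.fft.ifft2(F * P))

rng = np.random.default_rng(4)
obj = np.zeros((64, 64))
obj[30:34, 20:24] = 1.0                              # two toy "line sources"
obj[30:34, 40:44] = 1.0
noisy = obj + 0.2 * rng.normal(size=obj.shape)
restored = restoration_filter(noisy, noise_power=0.04)
# Normalized mean-squared error before and after restoration:
print(np.mean((noisy - obj) ** 2), np.mean((restored - obj) ** 2))
```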

  5. SOLAR-ISS: A new reference spectrum based on SOLAR/SOLSPEC observations

    Science.gov (United States)

    Meftah, M.; Damé, L.; Bolsée, D.; Hauchecorne, A.; Pereira, N.; Sluse, D.; Cessateur, G.; Irbah, A.; Bureau, J.; Weber, M.; Bramstedt, K.; Hilbig, T.; Thiéblemont, R.; Marchand, M.; Lefèvre, F.; Sarkissian, A.; Bekki, S.

    2018-03-01

    Context. From April 5, 2008 to February 15, 2017, the SOLar SPECtrometer (SOLSPEC) instrument of the SOLAR payload on board the International Space Station (ISS) performed accurate measurements of solar spectral irradiance (SSI) from the middle ultraviolet to the infrared (165 to 3088 nm). These measurements are of primary importance for a better understanding of solar physics and of the impact of solar variability on climate. In particular, a new reference solar spectrum (SOLAR-ISS) was established for April 2008, during the solar minimum between cycles 23 and 24, thanks to revised engineering corrections, improved calibrations, and advanced procedures to account for thermal and aging corrections of the SOLAR/SOLSPEC instrument. Aims: The main objective of this article is to present a new high-resolution solar spectrum with a mean absolute uncertainty of 1.26% at 1σ from 165 to 3000 nm. This solar spectrum is based on solar observations of the SOLAR/SOLSPEC space-based instrument. Methods: The SOLAR/SOLSPEC instrument consists of three separate double monochromators that use concave holographic gratings to cover the middle ultraviolet (UV), visible (VIS), and infrared (IR) domains. Our best ultraviolet, visible, and infrared spectra are merged into a single absolute solar spectrum covering the 165-3000 nm domain. The resulting solar spectrum has a spectral resolution varying between 0.6 and 9.5 nm in the 165-3000 nm wavelength range. We build a new solar reference spectrum (SOLAR-ISS) by constraining existing high-resolution spectra to the SOLAR/SOLSPEC observed spectrum. For that purpose, we account for the difference in resolution between the two spectra using the SOLAR/SOLSPEC instrumental slit functions. Results: Using SOLAR/SOLSPEC data, a new solar spectrum covering the 165-3000 nm wavelength range is built and is representative of the 2008 solar minimum. It has a resolution better than 0.1 nm below 1000 nm and 1 nm in the 1000-3000 nm wavelength range. The new

  6. Play-Based Interventions for Children and Adolescents with Autism Spectrum Disorders

    Science.gov (United States)

    Gallo-Lopez, Loretta, Ed.; Rubin, Lawrence C., Ed.

    2012-01-01

    "Play-Based Interventions for Children and Adolescents with Autism Spectrum Disorders" explores the most recognized, researched, and practical methods for using play therapy with the increasing number of children diagnosed with Autism Spectrum Disorders (ASDs), and shows clinicians how to integrate these methods into their practices. Using a…

  7. Construction of Quasi-Cyclic LDPC Codes Based on Fundamental Theorem of Arithmetic

    Directory of Open Access Journals (Sweden)

    Hai Zhu

    2018-01-01

    Full Text Available Quasi-cyclic (QC LDPC codes play an important role in 5G communications and have been chosen as the standard codes for 5G enhanced mobile broadband (eMBB data channel. In this paper, we study the construction of QC LDPC codes based on an arbitrary given expansion factor (or lifting degree. First, we analyze the cycle structure of QC LDPC codes and give the necessary and sufficient condition for the existence of short cycles. Based on the fundamental theorem of arithmetic in number theory, we divide the integer factorization into three cases and present three classes of QC LDPC codes accordingly. Furthermore, a general construction method of QC LDPC codes with girth of at least 6 is proposed. Numerical results show that the constructed QC LDPC codes perform well over the AWGN channel when decoded with the iterative algorithms.
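
    A practical consequence of the cycle analysis mentioned above is a simple test for girth of at least 6: no combination of two rows and two columns of the exponent (shift) matrix may satisfy the standard 4-cycle condition modulo the lifting degree. The sketch below implements that check on toy exponent matrices; the matrices and lifting degree are illustrative, not the constructions from the paper.

```python
import itertools

def has_four_cycle(shifts, lift):
    """Check a QC-LDPC exponent (shift) matrix for 4-cycles.  By the standard
    cycle condition for circulant-based codes, a 4-cycle exists iff, for some
    row pair (r1, r2) and column pair (c1, c2),
        (p[r1][c1] - p[r1][c2] + p[r2][c2] - p[r2][c1]) = 0  (mod lift).
    Girth >= 6 requires that no such combination exists."""
    rows, cols = len(shifts), len(shifts[0])
    for r1, r2 in itertools.combinations(range(rows), 2):
        for c1, c2 in itertools.combinations(range(cols), 2):
            if (shifts[r1][c1] - shifts[r1][c2]
                    + shifts[r2][c2] - shifts[r2][c1]) % lift == 0:
                return True
    return False

# Toy 2 x 3 exponent matrices with lifting degree 7 (values are illustrative).
print(has_four_cycle([[0, 0, 0], [0, 1, 2]], lift=7))   # False -> girth >= 6
print(has_four_cycle([[0, 0, 0], [0, 1, 1]], lift=7))   # True  -> 4-cycle present
```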

  8. Individual stock-option prices and credit spreads

    NARCIS (Netherlands)

    Cremers, M.; Driessen, J.; Maenhout, P.; Weinbaum, D.

    2008-01-01

    This paper introduces measures of volatility and jump risk that are based on individual stock options to explain credit spreads on corporate bonds. Implied volatilities of individual options are shown to contain useful information for credit spreads and improve on historical volatilities when

  9. Ex-vessel corium spreading: results from the VULCANO spreading tests

    Energy Technology Data Exchange (ETDEWEB)

    Journeau, Christophe E-mail: christophe.journeau@cea.fr; Boccaccio, Eric E-mail: eric.boccaccio@cea.fr; Brayer, Claude; Cognet, Gerard E-mail: gerard.cognet@cea.fr; Haquet, Jean-Francois E-mail: haquet@eloise.cad.cea.fr; Jegou, Claude E-mail: claude.jegou@cea.fr; Piluso, Pascal E-mail: pascal.piluso@cea.fr; Monerris, Jose E-mail: jose.monerris@cea.fr

    2003-07-01

    function of the nature of the atmosphere, of the phases (FeO{sub x}, UO{sub y}, ...) and of the substrate. These tests with prototypic material have improved our knowledge on corium and contributed to validate spreading models and codes which are used for the assessment of corium mastering concepts.

  10. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  11. Nonlinearity effect of electro-optical modulator response in double spread CDMA radio-over-fiber transmissions

    Science.gov (United States)

    Huang, Jen-Fa; Yen, Chih-Ta; Li, Tzung-Yen

    2008-07-01

    This study presents a double-spread code-division multiple-access (CDMA) scheme for radio-over-fiber (RoF) transmissions. The network coder/decoders (codecs) are implemented using arrayed-waveguide grating (AWG) routers coded with maximal-length sequence (M-sequence) codes. The effects of phase-induced intensity noise (PIIN) and multiple-access interference (MAI) on the system performance are evaluated numerically for different values of the optical modulation index (OMI) of the nonlinear electro-optical modulator (EOM) response. At low OMI, optical device noise is dominant, but at high OMI the nonlinear effect becomes significant. Numerical results show that the system performance is highly sensitive to the OMI. Therefore, specifying an appropriate value of the OMI is essential in optimizing the system performance. The influence of the degree of polarization (DOP) on the system is also discussed. By employing a scrambler in front of the balanced photo-detector, the system performance can be enhanced. The high-performance, low-cost characteristics of the double-spread CDMA scheme render it an ideal solution for a radio-CDMA wireless system cascaded with an optical CDMA network.

  12. Cell Based GIS as Cellular Automata for Disaster Spreading Predictions and Required Data Systems

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-03-01

    Full Text Available A method for prediction and simulation based on a cell-based Geographic Information System (GIS) used as Cellular Automata (CA) is proposed, together with the required data systems, in particular metasearch engine usage, in a unified way. It is confirmed that the proposed cell-based GIS as CA allows flexible use of the attribute information attached to each cell, in concert with its location information, and works for disaster spreading simulation and prediction.
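
    A minimal cell-based spreading automaton of the kind referred to above can be written in a few lines: each cell carries a state (and, in a full GIS coupling, attributes that would modulate the spread probability), and the disaster propagates stochastically to neighbouring cells. The grid size, spread probability and periodic boundaries below are illustrative simplifications, not the authors' model.

```python
import numpy as np

def step(state, spread_prob, rng):
    """One cellular-automaton step on a grid: a susceptible cell (0) becomes
    affected (1) with probability spread_prob for each affected 4-neighbour.
    Per-cell attributes (fuel, elevation, land use) could scale spread_prob;
    a uniform probability and periodic boundaries are assumed here."""
    affected = (state == 1)
    neighbours = (np.roll(affected, 1, 0) + np.roll(affected, -1, 0) +
                  np.roll(affected, 1, 1) + np.roll(affected, -1, 1))
    p_infect = 1.0 - (1.0 - spread_prob) ** neighbours
    new_cells = (state == 0) & (rng.random(state.shape) < p_infect)
    out = state.copy()
    out[new_cells] = 1
    return out

rng = np.random.default_rng(5)
grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = 1                       # initial disaster cell
for _ in range(30):
    grid = step(grid, spread_prob=0.3, rng=rng)
print(int(grid.sum()), "cells affected after 30 steps")
```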

  13. An improved version of the MICROX-2 code

    Energy Technology Data Exchange (ETDEWEB)

    Mathews, D. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1997-11-01

    The MICROX-2 code prepares broad group neutron cross sections for use in diffusion- and/or transport-theory codes from an input library of fine group and pointwise cross sections. The neutron weighting spectrum is obtained by solving the B{sub 1} neutron balance equations at about 10000 energies in a one-dimensional (planar, spherical or cylindrical), two-region unit cell. The regions are coupled by collision probabilities based upon spatially flat neutron emission. Energy dependent Dancoff factors and bucklings correct the one-dimensional calculations for multi-dimensional lattice effects. A critical buckling search option is also included. The inner region may include two different types of fuel particles (grains). This report describes the present PSI FORTRAN 90 version of the MICROX-2 code which operates on CRAY computers and IBM PCs. The equations which are solved in the various energy ranges are given along with descriptions of various changes that have been made in the present PSI version of the code. A completely re-written description of the user input is also included. (author) 7 figs., 4 tabs., 59 refs.

  14. The energy spectrum of the 'runaway' electrons from a high voltage pulsed discharge

    International Nuclear Information System (INIS)

    Ruset, C.

    1985-01-01

    Some experimental results are presented on the influence of the pressure upon the energy spectrum of the runaway electrons generated in a pulsed high-voltage argon discharge. These electrons enter a state of continuous acceleration between two collisions, with a rapidly increasing free path. The applied discharge current varies from 10 to 300 A, and the pulse time is about 800 ns. Relativistic effects are taken into consideration. The theoretical explanation is based on the phenomenon of electron spreading on plasma oscillations. (D.Gy.)

  15. Codeword Structure Analysis for LDPC Convolutional Codes

    Directory of Open Access Journals (Sweden)

    Hua Zhou

    2015-12-01

    Full Text Available The codewords of a low-density parity-check (LDPC) convolutional code (LDPC-CC) are characterised as structured and non-structured. The number of structured codewords is dominated by the size of the polynomial syndrome former matrix H^T(D), while the number of non-structured ones depends on the particular monomials or polynomials in H^T(D). By evaluating the relationship of the codewords between the mother code and its super codes, the low-weight non-structured codewords in the super codes can be eliminated by appropriately choosing the monomials or polynomials in H^T(D), resulting in an improved distance spectrum of the mother code.

  16. Non-cooperative detection of weak spread-spectrum signals in AWGN

    CSIR Research Space (South Africa)

    Vlok, JD

    2012-11-01

    Full Text Available The average execution time was measured by counting the number of processing cycles required by the section of C code that calculates the test statistic. The implementation of technique 1 was also enhanced using basic linear algebra subprograms. The matrix defined in Eq. (9) is positive semidefinite (and by definition symmetric) [13]; the simplification in (9) follows from the fact that d_n^2 = 1 for all values of n. By performing elementary row operations on R(X0) it can...

  17. A network model for Ebola spreading.

    Science.gov (United States)

    Rizzo, Alessandro; Pedalino, Biagio; Porfiri, Maurizio

    2016-04-07

    The availability of accurate models for the spreading of infectious diseases has opened a new era in the management and containment of epidemics. Models are extensively used to plan for and execute vaccination campaigns, to evaluate the risk of international spreading and the feasibility of travel bans, and to inform prophylaxis campaigns. Even when no specific therapeutic protocol is available, as for the Ebola Virus Disease (EVD), models of epidemic spreading can provide useful insight to steer interventions in the field and to forecast the trend of the epidemic. Here, we propose a novel mathematical model to describe EVD spreading based on activity driven networks (ADNs). Our approach overcomes the simplifying assumption of homogeneous mixing, which is central to most of the mathematically tractable models of EVD spreading. In our ADN-based model, each individual is not bound to contact every other, and its network of contacts varies in time as a function of an activity potential. Our model contemplates the possibility of non-ideal and time-varying intervention policies, which are critical to accurately describe EVD spreading in afflicted countries. The model is calibrated from field data of the April-to-December 2014 spreading in Liberia. We use the model as a predictive tool, to emulate the dynamics of EVD in Liberia and offer a one-year projection, until December 2015. Our predictions agree with the current vision expressed by professionals in the field, who consider EVD in Liberia at its final stage. The model is also used to perform a what-if analysis to assess the efficacy of timely intervention policies. In particular, we show that an earlier application of the same intervention policy would have greatly reduced the number of EVD cases, the duration of the outbreak, and the infrastructure needed for the implementation of the intervention. Copyright © 2016 Elsevier Ltd. All rights reserved.
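
    To make the activity-driven-network idea concrete, the sketch below runs a minimal SIR process on an ADN: at every step each node activates according to its activity potential and creates a handful of temporary links over which infection may pass. The population size, activity distribution and epidemiological rates are illustrative placeholders, not the parameters calibrated to the Liberia data.

```python
import numpy as np

def adn_sir(n=1000, m=5, beta=0.05, mu=0.02, steps=200, seed=6):
    """Minimal SIR spreading on an activity-driven network: active nodes
    create m temporary links to random nodes at each step; infection passes
    over these links with probability beta and infected nodes recover with
    probability mu.  Returns the number of infected nodes over time."""
    rng = np.random.default_rng(seed)
    activity = rng.uniform(0.01, 0.3, n)        # activity potentials
    state = np.zeros(n, dtype=int)              # 0 = S, 1 = I, 2 = R
    state[rng.choice(n, 5, replace=False)] = 1  # initial seeds
    history = []
    for _ in range(steps):
        active = np.where(rng.random(n) < activity)[0]
        for i in active:
            for j in rng.choice(n, m, replace=False):
                a, b = state[i], state[j]
                if {a, b} == {0, 1} and rng.random() < beta:
                    state[i if a == 0 else j] = 1   # infect the susceptible node
        recover = (state == 1) & (rng.random(n) < mu)
        state[recover] = 2
        history.append(int((state == 1).sum()))
    return history

print(max(adn_sir()))   # peak number of simultaneously infected nodes
```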

  18. Simulation of electron, positron and Bremsstrahlung spectrum generated due to electromagnetic cascade by 2.5 GeV electron hitting lead target using FLUKA code

    International Nuclear Information System (INIS)

    Sahani, P.K.; Dev, Vipin; Haridas, G.; Thakkar, K.K.; Singh, Gurnam; Sarkar, P.K.; Sharma, D.N.

    2009-01-01

    INDUS-2 is a high-energy electron accelerator facility where electrons are accelerated in a circular ring up to a maximum energy of 2.5 GeV to generate synchrotron radiation. During normal operation of the machine a fraction of these electrons is lost; they interact with the accelerator structures and components, such as the vacuum chamber and the residual gases in the cavity, and hence generate a significant amount of Bremsstrahlung radiation. The Bremsstrahlung radiation is highly dependent on the incident electron energy, the target material and its thickness. Bremsstrahlung radiation dominates the radiation environment in such electron storage rings. Because of its broad spectrum, extending up to the incident electron energy, and its pulsed nature, it is very difficult to segregate the Bremsstrahlung component from the mixed-field environment in accelerators. With the help of the FLUKA Monte Carlo code, the Bremsstrahlung spectrum generated by 2.5 GeV electrons bombarding a high-Z lead target is simulated. To study the variation of the Bremsstrahlung spectrum with target thickness, lead targets of 3, 6, 9, 12, 15 and 18 mm thickness were used. The energy spectra of the emerging electrons and positrons are also simulated. The study suggests that as the target thickness increases, the emergent Bremsstrahlung photon fluence increases. With increasing target thickness, low-energy Bremsstrahlung photons come to dominate the spectrum while the high-energy part is degraded. The electron and positron spectra also extend up to the incident electron energy. (author)

  19. 3D Scan-Based Wavelet Transform and Quality Control for Video Coding

    Directory of Open Access Journals (Sweden)

    Parisot Christophe

    2003-01-01

    Full Text Available Wavelet coding has been shown to achieve better compression than DCT coding and moreover allows scalability. The 2D DWT can be easily extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks. The first is the amount of memory required for coding large 3D blocks; the second is the lack of temporal quality due to the temporal splitting of the sequence. In fact, 3D block-based video coders produce jerks, which appear at the temporal borders of blocks during video playback. In this paper, we propose a new temporal scan-based wavelet transform method for video coding combining the advantages of wavelet coding (performance, scalability) with acceptably reduced memory requirements, no additional CPU complexity, and no jerks. We also propose an efficient quality allocation procedure to ensure a constant quality over time.

  20. NAGRADATA. Code key. Geology

    International Nuclear Information System (INIS)

    Mueller, W.H.; Schneider, B.; Staeuble, J.

    1984-01-01

    This reference manual provides users of the NAGRADATA system with comprehensive keys to the coding/decoding of geological and technical information to be stored in or retrieved from the databank. Emphasis has been placed on input data coding. When data are retrieved, the translation of stored coded information into plain language is done automatically by computer. Three keys each list the complete set of currently defined codes for the NAGRADATA system, namely codes with appropriate definitions, arranged: 1. according to subject matter (thematically), 2. with the codes listed alphabetically, and 3. with the definitions listed alphabetically. Additional explanation is provided for the proper application of the codes and the logic behind the creation of new codes to be used within the NAGRADATA system. NAGRADATA makes use of codes instead of plain language for data storage; this offers the following advantages: speed of data processing, mainly data retrieval; economy of storage memory requirements; and the standardisation of terminology. The nature of this thesaurus-like 'key to codes' makes it impossible either to establish a final form or to cover the entire spectrum of requirements. Therefore, this first issue of codes for NAGRADATA must be considered to represent the current state of progress of a living system, and future editions will be issued in a loose-leaf ring-book system which can be updated by an organised (updating) service. (author)

  1. Implementation Of Code And Carrier Tracking Loops For Software GPS Receivers

    Directory of Open Access Journals (Sweden)

    Win Kay Khaing

    2015-06-01

    Full Text Available Abstract GPS plays a very important role in our modern mobile societies. The software approach is much more flexible than traditional hardware receivers. A soft-GPS receiver includes two portions: hardware and software. The hardware portion comprises an antenna, a filter, a down-converter from RF (Radio Frequency) to IF (Intermediate Frequency) and an ADC (Analog-to-Digital Converter). The software portion covers the signal processing, such as acquisition, tracking and navigation, that runs on a general-purpose processor. The GPS signal is taken from the N-FUELS (Full Educational Library of Signals for Navigation) signal simulator. The heart of a soft-GPS receiver is the synchronization processes, namely acquisition and tracking. In tracking there are two main loops, for code and carrier tracking. The objective of this paper is to analyse and find the optimum discriminator function for the code tracking loop in soft-GPS receivers. The delay lock loop (DLL) is a well-known technique for tracking the codes of GNSS spread-spectrum signals. This paper also presents non-coherent square-law DLLs and the impact of some parameters on DLL discriminators, such as the number of samples per chip, the early-late spacing, different C/N0 values (where C denotes the signal power and N0 the noise spectral density), and the presence or absence of a front-end device. The discriminator outputs are illustrated using S-curves. Testing results with a real GPS signal are also described. The optimized discriminator functions can be implemented in any soft-GPS receiver.
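
    As a concrete illustration of the code tracking loop, the sketch below evaluates a non-coherent early-minus-late power discriminator, one common DLL discriminator choice. The placeholder random spreading code, the samples-per-chip value and the early-late spacing are illustrative assumptions; a real receiver would use the satellite's C/A code and close the loop with a filter.

```python
import numpy as np

def placeholder_code(n_chips=1023, seed=7):
    """Placeholder +/-1 spreading code used only to illustrate the DLL."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], n_chips)

def nel_discriminator(rx, code, samples_per_chip=4, spacing_chips=0.5):
    """Non-coherent early-minus-late power discriminator:
    D = (E^2 - L^2) / (E^2 + L^2), where E and L are correlations of the
    received signal with replicas shifted by +/- half the early-late spacing."""
    replica = np.repeat(code, samples_per_chip)
    d = int(round(spacing_chips / 2 * samples_per_chip))
    early = np.roll(replica, -d)
    late = np.roll(replica, d)
    E = np.abs(np.dot(rx, early))
    L = np.abs(np.dot(rx, late))
    return (E ** 2 - L ** 2) / (E ** 2 + L ** 2)

code = placeholder_code()
prompt = np.repeat(code, 4)
# Received signal delayed by one sample (a quarter chip) plus noise:
rx = np.roll(prompt, 1) + 0.5 * np.random.default_rng(8).normal(size=prompt.size)
# A nonzero output means the prompt replica is misaligned; its sign gives
# the direction of the required code-phase correction.
print(nel_discriminator(rx, code))
```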

  2. Golay sequences coded coherent optical OFDM for long-haul transmission

    Science.gov (United States)

    Qin, Cui; Ma, Xiangrong; Hua, Tao; Zhao, Jing; Yu, Huilong; Zhang, Jian

    2017-09-01

    We propose to use binary Golay sequences in coherent optical orthogonal frequency division multiplexing (CO-OFDM) to improve the long-haul transmission performance. The Golay sequences are generated from binary Reed-Muller codes, have a low peak-to-average power ratio, and provide a certain error correction capability. A low-complexity decoding algorithm for the Golay sequences is then proposed to recover the signal. Under the same spectral efficiency, QPSK-modulated OFDM with binary Golay sequence coding, with and without discrete Fourier transform (DFT) spreading (DFTS-QPSK-GOFDM and QPSK-GOFDM), is compared with normal BPSK-modulated OFDM with and without DFT spreading (DFTS-BPSK-OFDM and BPSK-OFDM) after long-haul transmission. At a 7% forward error correction code threshold (Q2 factor of 8.5 dB), it is shown that DFTS-QPSK-GOFDM outperforms DFTS-BPSK-OFDM by extending the transmission distance by 29% and 18% in non-dispersion-managed and dispersion-managed links, respectively.
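
    The key property exploited here, that Golay complementary sequences bound the OFDM peak-to-average power ratio to about 3 dB, is easy to demonstrate with the classical recursive construction. The sketch below builds a length-32 pair, checks the complementary autocorrelation property, and estimates the PAPR of the corresponding OFDM symbol; it does not reproduce the Reed-Muller encoder or the decoding algorithm of the paper, and the lengths are illustrative.

```python
import numpy as np

def golay_pair(m):
    """Recursive construction of a binary Golay complementary pair of
    length 2^m: the sum of the two sequences' aperiodic autocorrelations
    vanishes at every nonzero lag."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def aperiodic_acf(x, lag):
    return float(np.dot(x[:len(x) - lag], x[lag:]))

a, b = golay_pair(5)                      # length-32 complementary pair
sums = [aperiodic_acf(a, k) + aperiodic_acf(b, k) for k in range(1, len(a))]
print(np.allclose(sums, 0.0))             # complementary property holds

# PAPR of the (oversampled) OFDM symbol whose subcarriers carry sequence a:
t = np.fft.ifft(a, 1024)
papr = np.max(np.abs(t) ** 2) / np.mean(np.abs(t) ** 2)
print(10 * np.log10(papr))                # bounded by about 3 dB for a Golay sequence
```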

  3. Effects of rewiring strategies on information spreading in complex dynamic networks

    Science.gov (United States)

    Ally, Abdulla F.; Zhang, Ning

    2018-04-01

    Recent advances in networks and communication services have attracted much interest in understanding information spreading in social networks. Consequently, numerous studies have been devoted to providing effective and accurate models for mimicking information spreading. However, knowledge of how to spread information faster and more widely remains a contentious issue, and most existing works are based on static networks, which limits their ability to capture the dynamism of the entities that participate in information spreading. Using the SIR epidemic model, this study explores and compares the effects of two rewiring models (based on the Fermi-Dirac and linear functions) on information spreading in scale-free and small-world networks. Our results show that, for all rewiring strategies, the spreading influence builds up with time but settles into a steady state at later time steps. This means that information spreading takes off during the initial spreading steps, after which the spreading prevalence settles toward its equilibrium, with the majority of the population having recovered and thus no longer affecting the spreading. Meanwhile, the rewiring strategy based on the Fermi-Dirac distribution function to some extent impedes the spreading process; even so, the structure of the networks still supports the spreading, even with a low spreading rate. The worst case occurs when the spreading rate is extremely small. The results emphasize that, despite the large role of such networks in shaping the spreading, the role of the parameters cannot simply be ignored. Apparently, the probability of high-degree neighbours being informed grows much faster with the rewiring strategy based on the linear function than with that based on the Fermi-Dirac distribution function. Clearly, the rewiring model based on the linear function generates the fastest spreading across the networks. Therefore, if we are interested in speeding up the spreading process in stochastic modeling, the linear function may play a pivotal role.

  4. FAST: An advanced code system for fast reactor transient analysis

    International Nuclear Information System (INIS)

    Mikityuk, Konstantin; Pelloni, Sandro; Coddington, Paul; Bubelis, Evaldas; Chawla, Rakesh

    2005-01-01

    One of the main goals of the FAST project at PSI is to establish a unique analytical code capability for the core and safety analysis of advanced critical (and sub-critical) fast-spectrum systems for a wide range of different coolants. Both static and transient core physics, as well as the behaviour and safety of the power plant as a whole, are studied. The paper discusses the structure of the code system, including the organisation of the interfaces and data exchange. Examples of validation and application of the individual programs, as well as of the complete code system, are provided using studies carried out within the context of designs for experimental accelerator-driven, fast-spectrum systems

  5. Warped Discrete Cosine Transform-Based Low Bit-Rate Block Coding Using Image Downsampling

    Directory of Open Access Journals (Sweden)

    Ertürk Sarp

    2007-01-01

    Full Text Available This paper presents warped discrete cosine transform (WDCT-based low bit-rate block coding using image downsampling. While the WDCT aims to improve the performance of the conventional DCT by frequency warping, it has only been applicable to high bit-rate coding applications because of the overhead required to define the parameters of the warping filter. Recently, low bit-rate block coding based on image downsampling prior to block coding, followed by upsampling after the decoding process, has been proposed to improve the compression performance of low bit-rate block coders. This paper demonstrates that superior performance can be achieved if the WDCT is used in conjunction with image downsampling-based block coding for low bit-rate applications.

  6. Coordination analysis of players' distribution in football using cross-correlation and vector coding techniques.

    Science.gov (United States)

    Moura, Felipe Arruda; van Emmerik, Richard E A; Santana, Juliana Exel; Martins, Luiz Eduardo Barreto; Barros, Ricardo Machado Leite de; Cunha, Sergio Augusto

    2016-12-01

    The purpose of this study was to investigate the coordination between the two teams' spread during football matches using cross-correlation and vector coding techniques. Using a video-based tracking system, we obtained the trajectories of 257 players during 10 matches. Team spread was calculated as a function of time. For a general description of coordination, we calculated the cross-correlation between the signals. Vector coding was used to identify the coordination patterns between teams during offensive sequences that ended in shots on goal or defensive tackles. Cross-correlation showed that opposing teams tend to present in-phase coordination with a short time lag. During offensive sequences, the vector coding results showed that, although in-phase coordination dominated, other patterns were also observed. We verified that, during their early stages, offensive sequences ending in shots on goal present greater anti-phase and attacking-team-phase periods than sequences ending in tackles. The results suggest that, at the beginning of the attacking play, the attacking team may seek to behave contrary to its opponent (or may lead the adversary's behaviour) with regard to the distribution strategy, in order to increase the chances of a shot on goal. The techniques allowed detection of the coordination patterns between teams, providing additional information about football dynamics and players' interaction.
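
    The vector coding step can be sketched as follows: the coupling angle is computed from the frame-to-frame changes of the two teams' spread signals and binned into in-phase, anti-phase and single-team-phase patterns. The bin boundaries follow a common convention and the input series are synthetic; they are not the study's data or exact thresholds.

```python
# Sketch of the vector coding step: coupling angle between two team-spread time series,
# binned into coordination patterns. Bin boundaries follow a common convention and the
# input signals are synthetic; neither is taken from the cited study.
import numpy as np

def coupling_angles(team_a, team_b):
    """Coupling angle (degrees, 0-360) from frame-to-frame changes of both signals."""
    da, db = np.diff(team_a), np.diff(team_b)
    return np.degrees(np.arctan2(db, da)) % 360.0

def classify(angle):
    """Map a coupling angle onto a coordination pattern (common 45-degree binning)."""
    a = angle % 180.0
    if 22.5 <= a < 67.5:
        return "in-phase"
    if 112.5 <= a < 157.5:
        return "anti-phase"
    if a < 22.5 or a >= 157.5:
        return "team-A phase"       # only team A's spread is changing
    return "team-B phase"           # only team B's spread is changing

t = np.linspace(0, 10, 300)
spread_a = 30 + 5 * np.sin(t)            # attacking team spread (m), synthetic
spread_b = 28 + 4 * np.sin(t + 0.3)      # defending team spread (m), synthetic
patterns = [classify(a) for a in coupling_angles(spread_a, spread_b)]
for p in ("in-phase", "anti-phase", "team-A phase", "team-B phase"):
    print(f"{p:13s} {100 * patterns.count(p) / len(patterns):5.1f}%")
```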

  7. [Research on the method of copper converting process determination based on emission spectrum analysis].

    Science.gov (United States)

    Li, Xian-xin; Liu, Wen-qing; Zhang, Yu-jun; Si, Fu-qi; Dou, Ke; Wang, Feng-ping; Huang, Shu-hua; Fang, Wu; Wang, Wei-qiang; Huang, Yong-feng

    2012-05-01

    A method of copper converting process determination based on PbO/PbS emission spectrum analysis is described. According to the known emission spectra of gas molecules, the existence of PbO and PbS was confirmed in the measured spectrum. Through field experiments it was determined that the main emission spectrum of the slag stage comes from PbS, while the main emission spectrum of the copper stage comes from PbO. The relative change in the PbO/PbS emission spectra therefore provides a method for determining the copper converting stage. Using the relative intensities of the PbO and PbS emission spectra, the copper smelting process can be divided into two different stages, i.e., the slag stage (S phase) and the copper stage (B phase). Over a complete copper smelting cycle, with a receiving telescope of appropriate view angle aimed at the converter flame and after noise filtering of the PbO/PbS emission spectra, the determined process stages agree with actual production. Both theory and experiment prove that the method of copper converting process determination based on emission spectrum analysis is feasible.

  8. COMBINE7.0 - A Portable ENDF/B-VII.0 Based Neutron Spectrum and Cross-Section Generation Program

    Energy Technology Data Exchange (ETDEWEB)

    Woo Y. Yoon; David W. Nigg

    2008-09-01

    COMBINE7.0 is a FORTRAN 90 computer code that generates multigroup neutron constants for use in the deterministic diffusion and transport theory neutronics analysis. The cross-section database used by COMBINE7.0 is derived from the Evaluated Nuclear Data Files (ENDF/B-VII.0). The neutron energy range covered is from 20 MeV to 1.0E-5 eV. The Los Alamos National Laboratory NJOY code is used as the processing code to generate a 167 fine-group cross-section library in MATXS format for Bondarenko self-shielding treatment. Resolved resonance parameters are extracted from ENDF/B-VII.0 File 2 for a separate library to be used in an alternate Nordheim self-shielding treatment in the resolved resonance energy range. The equations solved for the energy-dependent neutron spectrum in the 167 fine-group structure are the B-3 or B-1 approximations to the transport equation. The fine-group cross sections needed for the spectrum calculation are first prepared by Bondarenko self-shielding interpolation in terms of background cross section and temperature. The geometric lump effect, when present, is accounted for by augmenting the background cross section. Nordheim self-shielded fine-group cross sections for a material having resolved resonance parameters overwrite correspondingly the existing self-shielded fine-group cross sections when this option is used. The fine-group cross sections in the thermal energy range are replaced by those self-shielded with the Amouyal/Benoist/Horowitz method in the three-region geometry when this option is requested. COMBINE7.0 coalesces fine-group cross sections into broad-group macroscopic and microscopic constants. The coalescing is performed by utilizing fine-group fluxes and/or currents obtained by the spectrum calculation as the weighting functions. The multigroup constants may be output in any of several standard formats including ANISN 14** free format, CCCC ISOTXS format, and AMPX working library format. ANISN-PC, a one-dimensional, discrete

  9. COMBINE7.0 - A Portable ENDF/B-VII.0 Based Neutron Spectrum and Cross-Section Generation Program

    International Nuclear Information System (INIS)

    Yoon, Woo Y.; Nigg, David W.

    2008-01-01

    COMBINE7.0 is a FORTRAN 90 computer code that generates multigroup neutron constants for use in the deterministic diffusion and transport theory neutronics analysis. The cross-section database used by COMBINE7.0 is derived from the Evaluated Nuclear Data Files (ENDF/B-VII.0). The neutron energy range covered is from 20 MeV to 1.0E-5 eV. The Los Alamos National Laboratory NJOY code is used as the processing code to generate a 167 fine-group cross-section library in MATXS format for Bondarenko self-shielding treatment. Resolved resonance parameters are extracted from ENDF/B-VII.0 File 2 for a separate library to be used in an alternate Nordheim self-shielding treatment in the resolved resonance energy range. The equations solved for the energy-dependent neutron spectrum in the 167 fine-group structure are the B-3 or B-1 approximations to the transport equation. The fine-group cross sections needed for the spectrum calculation are first prepared by Bondarenko self-shielding interpolation in terms of background cross section and temperature. The geometric lump effect, when present, is accounted for by augmenting the background cross section. Nordheim self-shielded fine-group cross sections for a material having resolved resonance parameters overwrite correspondingly the existing self-shielded fine-group cross sections when this option is used. The fine-group cross sections in the thermal energy range are replaced by those self-shielded with the Amouyal/Benoist/Horowitz method in the three-region geometry when this option is requested. COMBINE7.0 coalesces fine-group cross sections into broad-group macroscopic and microscopic constants. The coalescing is performed by utilizing fine-group fluxes and/or currents obtained by the spectrum calculation as the weighting functions. The multigroup constants may be output in any of several standard formats including ANISN 14** free format, CCCC ISOTXS format, and AMPX working library format. ANISN-PC, a one-dimensional, discrete

  10. Functional properties of a new spread based on olive oil and honeybees

    Directory of Open Access Journals (Sweden)

    Asma Tekiki

    2018-01-01

    Full Text Available A new alimentary concept, called "functional food", has been developed since the 1980s. In this context, olive oil and honey are traditionally used in their initial state as basic foods; they are considered a potential source of new bioactive products from which several functional foods can be formulated. This work focuses on the elaboration of a new spread of honey and olive oil using beeswax as an emulsifier. Physico-chemical characterization and antioxidant and antibacterial activities were evaluated. As for the phenol content, the spread prepared from thyme honey had the highest content (337 mg GAE/kg) compared with the other spreads. The antioxidant activity was evaluated by three different methods, namely the DPPH test, the ABTS+ test and the iron reduction method (FRAP), which showed that this spread also has a higher activity than the others (EC50 of 70 mg/L using DPPH, EC50 of 20 mg/L using ABTS). An agar-well diffusion assay was used to assess the activity of the honeys against seven bacterial strains. All of the prepared honey spreads showed high antibacterial activity against all bacterial strains tested (diameter of ZI > 20 mm). Hence, the new spread proves to be a functional food by excellence, owing to its high phenol content and its important antibacterial and antioxidant activities.

  11. Determining of the intermediate neutron spectrum in fast neutron field at the RB reactor

    International Nuclear Information System (INIS)

    Sokcic-Kostic, M.; Pesic, M.; Antic, D.

    1987-01-01

    The activation method for intermediate neutron spectrum determination is presented in this paper. The intermediate neutron spectrum in the experimental fuel channel (EFC) of the RB reactor is determined on the basis of this method. The measurement results are treated with the PRAG code and will also be treated with the KRIFIT and TENET codes, which have also been developed. (author)

  12. Monte Carlo Depletion with Critical Spectrum for Assembly Group Constant Generation

    International Nuclear Information System (INIS)

    Park, Ho Jin; Joo, Han Gyu; Shim, Hyung Jin; Kim, Chang Hyo

    2010-01-01

    The conventional two-step procedure has been used in practical nuclear reactor analysis. In this procedure, a deterministic assembly transport code such as HELIOS or CASMO is normally used to generate the multigroup flux distribution employed in few-group cross-section generation. Recently, accuracy issues have arisen related to the resonance treatment and the double heterogeneity (DH) treatment for VHTR fuel blocks. In order to mitigate these accuracy issues, Monte Carlo (MC) methods can be used as an alternative way to generate few-group cross sections, because the accuracy of MC calculations benefits from their ability to use continuous-energy nuclear data and detailed geometric information. In earlier work, the conventional methods of obtaining multigroup cross sections and the critical spectrum were implemented in the McCARD Monte Carlo code. However, that implementation was not complete in that the critical spectrum was not reflected in the depletion calculation. The purpose of this study is to develop a method to apply the critical spectrum to MC depletion calculations, in order to correct for the leakage effect in the depletion calculation, and then to examine the MC-based group constants within the two-step procedure by comparing the two-step solution with the direct whole-core MC depletion result.

  13. Spreading convulsions, spreading depolarization and epileptogenesis in human cerebral cortex

    DEFF Research Database (Denmark)

    Dreier, Jens P; Major, Sebastian; Pannek, Heinz-Wolfgang

    2012-01-01

    Spreading depolarization of cells in cerebral grey matter is characterized by massive ion translocation, neuronal swelling and large changes in direct current-coupled voltage recording. The near-complete sustained depolarization above the inactivation threshold for action potential generating...... stimulations. Eventually, epileptic field potentials were recorded during the period that had originally seen spreading depression of activity. Such spreading convulsions are characterized by epileptic field potentials on the final shoulder of the large slow potential change of spreading depolarization. We...

  14. Trellis and turbo coding iterative and graph-based error control coding

    CERN Document Server

    Schlegel, Christian B

    2015-01-01

    This new edition has been extensively revised to reflect the progress in error control coding over the past few years. Over 60% of the material has been completely reworked, and 30% of the material is original. Convolutional, turbo, and low density parity-check (LDPC) coding and polar codes in a unified framework. Advanced research-related developments such as spatial coupling. A focus on algorithmic and implementation aspects of error control coding.

  15. Spectrum Handoffs Based on Preemptive Repeat Priority Queue in Cognitive Radio Networks

    Directory of Open Access Journals (Sweden)

    Xiaolong Yang

    2016-07-01

    Full Text Available Cognitive radio can significantly improve spectrum efficiency, and spectrum handoff is considered an important functionality to guarantee the quality of service (QoS) of primary users (PUs) and the continuity of data transmission of secondary users (SUs). In this paper, we propose an analytical framework based on a preemptive repeat identical (PRI) M/G/1 queueing network model to characterize spectrum handoff behaviors with a general service time distribution for both primary and secondary connections, multiple interruptions, and the transmission delay resulting from the appearance of primary connections. We then derive closed-form expressions for the extended data delivery time and the system sojourn time in both the staying and changing scenarios. In addition, based on the analysis of spectrum handoff behaviors resulting from multiple interruptions caused by the appearance of primary connections, we investigate a traffic-adaptive policy by which the considered SU optimally adjusts its spectrum handoff policy. Moreover, we investigate the admissible region and provide a reference for designing the admission control rule for arriving secondary connection requests. Finally, simulation results verify that our proposed analytical framework is reasonable and can provide a reference for executing the optimal spectrum handoff strategy and designing the admission control rule for the SU in cognitive radio networks.
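
    The extended data delivery time under preemptive repeat-identical interruptions can be estimated with a simple Monte Carlo sketch such as the one below (staying scenario only: the secondary user waits out each primary busy period and then retransmits the identical frame). The exponential busy periods and all parameter values are illustrative assumptions, not the closed-form model of the paper.

```python
# Monte Carlo sketch of the extended data delivery time of a secondary user whose frame is
# interrupted by primary arrivals and must be repeated identically after each primary busy
# period (staying scenario). Exponential busy periods and all values are illustrative.
import random

def extended_delivery_time(service_time, pu_rate, pu_busy_mean, rng, max_attempts=100000):
    total = 0.0
    for _ in range(max_attempts):
        t_next_pu = rng.expovariate(pu_rate)        # time until the next primary arrival
        if t_next_pu >= service_time:               # frame finishes before an interruption
            return total + service_time
        # interrupted: count the wasted transmission plus the primary busy period,
        # then repeat the identical frame from the beginning
        total += t_next_pu + rng.expovariate(1.0 / pu_busy_mean)
    return total

rng = random.Random(42)
samples = [extended_delivery_time(service_time=1.0, pu_rate=0.3, pu_busy_mean=0.5, rng=rng)
           for _ in range(20000)]
print(f"mean extended data delivery time: {sum(samples) / len(samples):.3f}")
```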

  16. Context based Coding of Quantized Alpha Planes for Video Objects

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2002-01-01

    In object based video, each frame is a composition of objects that are coded separately. The composition is performed through the alpha plane that represents the transparency of the object. We present an alternative to MPEG-4 for coding of alpha planes that considers their specific properties....... Comparisons in terms of rate and distortion are provided, showing that the proposed coding scheme for still alpha planes is better than the algorithms for I-frames used in MPEG-4....

  17. FPGA-based RF spectrum merging and adaptive hopset selection

    Science.gov (United States)

    McLean, R. K.; Flatley, B. N.; Silvius, M. D.; Hopkinson, K. M.

    The radio frequency (RF) spectrum is a limited resource. Spectrum allotment disputes stem from this scarcity as many radio devices are confined to a fixed frequency or frequency sequence. One alternative is to incorporate cognition within a reconfigurable radio platform, therefore enabling the radio to adapt to dynamic RF spectrum environments. In this way, the radio is able to actively sense the RF spectrum, decide, and act accordingly, thereby sharing the spectrum and operating in a more flexible manner. In this paper, we present a novel solution for merging many distributed RF spectrum maps into one map and for subsequently creating an adaptive hopset. We also provide an example of our system in operation, the result of which is a pseudorandom adaptive hopset. The paper then presents a novel hardware design for the frequency merger and adaptive hopset selector, both of which are written in VHDL and implemented as a custom IP core on an FPGA-based embedded system using the Xilinx Embedded Development Kit (EDK) software tool. The design of the custom IP core is optimized for area, and it can process a high-volume digital input via a low-latency circuit architecture. The complete embedded system includes the Xilinx PowerPC microprocessor, UART serial connection, and compact flash memory card IP cores, and our custom map merging/hopset selection IP core, all of which are targeted to the Virtex IV FPGA. This system is then incorporated into a cognitive radio prototype on a Rice University Wireless Open Access Research Platform (WARP) reconfigurable radio.
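
    The map-merging and hopset-selection idea can be sketched in a few lines of Python, independent of the VHDL/FPGA implementation described above. The per-channel occupancy format, the threshold and the hopset size are illustrative assumptions.

```python
# Sketch of merging distributed occupancy maps and picking a pseudorandom adaptive hopset.
# The per-channel occupancy format, the threshold and the hopset size are assumptions;
# this is not the VHDL IP core described in the paper.
import numpy as np

def merge_maps(maps):
    """Combine per-node occupancy maps (one value in 0..1 per channel) by worst case."""
    return np.max(np.vstack(maps), axis=0)

def adaptive_hopset(merged, hopset_size, threshold=0.2, seed=0):
    """Keep channels below the occupancy threshold and order them pseudorandomly."""
    quiet = np.where(merged <= threshold)[0]
    if len(quiet) < hopset_size:                    # fall back to the least-occupied channels
        quiet = np.argsort(merged)[:hopset_size]
    rng = np.random.default_rng(seed)               # the seed plays the role of a shared hop key
    return rng.permutation(quiet)[:hopset_size]

node_maps = [np.random.default_rng(i).random(64) for i in range(3)]   # 3 sensing nodes, 64 channels
merged = merge_maps(node_maps)
print("adaptive hopset (channel indices):", adaptive_hopset(merged, hopset_size=8))
```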

  18. An Adaptation of the HELIOS/MASTER Code System to the Analysis of VHTR Cores

    International Nuclear Information System (INIS)

    Noh, Jae Man; Lee, Hyun Chul; Kim, Kang Seog; Kim, Yong Hee

    2006-01-01

    KAERI is developing a new computer code system for an analysis of VHTR cores based on the existing HELIOS/MASTER code system which was originally developed for a LWR core analysis. In the VHTR reactor physics, there are several unique neutronic characteristics that cannot be handled easily by the conventional computer code system applied for the LWR core analysis. Typical examples of such characteristics are a double heterogeneity problem due to the particulate fuels, the effects of a spectrum shift and a thermal up-scattering due to the graphite moderator, and a strong fuel/reflector interaction, etc. In order to facilitate an easy treatment of such characteristics, we developed some methodologies for the HELIOS/MASTER code system and tested their applicability to the VHTR core analysis

  19. Lossy to lossless object-based coding of 3-D MRI data.

    Science.gov (United States)

    Menegaz, Gloria; Thiran, Jean-Philippe

    2002-01-01

    We propose a fully three-dimensional (3-D) object-based coding system exploiting the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3-D discrete wavelet transform. The implementation via the lifting-steps scheme allows integer-to-integer mapping, enabling lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the different objects, which can be independently accessed and reconstructed at any up-to-lossless quality. Two fully 3-D coding strategies are considered: embedded zerotree coding (EZW-3D) and multidimensional layered zero coding (MLZC), both generalized for region-of-interest (ROI)-based processing. In order to avoid artifacts along region boundaries, some extra coefficients must be encoded for each object. This gives rise to an overhead in the bitstream with respect to the case where the volume is encoded as a whole. The amount of such extra information depends on both the filter length and the decomposition depth. The system is characterized on a set of head magnetic resonance images. Results show that MLZC and EZW-3D have competitive performances. In particular, the best MLZC mode outperforms the other state-of-the-art techniques on one of the datasets for which results are available in the literature.

  20. Validation of fast-ion D-alpha spectrum measurements during EAST neutral-beam heated plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Huang, J., E-mail: juan.huang@ipp.ac.cn; Wu, C. R.; Hou, Y. M.; Chang, J. F.; Ding, S. Y.; Chen, Y. J.; Jin, Z.; Xu, Z.; Gao, W.; Wang, J. F.; Lyu, B.; Zang, Q.; Zhong, G. Q.; Hu, L.; Wan, B. [Institute of Plasma Physics, Chinese Academy of Sciences, P.O. Box 1126, 230031 Hefei, Anhui (China); Heidbrink, W. W.; Stagner, L.; Zhu, Y. B. [University of California, Irvine, California 92697 (United States); Hellermann, M. G. von [Diagnostic Team, ITER Organization, Route de Vinon-sur-Verdon 13067 St. Paul Lez Durance (France)

    2016-11-15

    To investigate the fast ion behavior, a fast ion D-alpha (FIDA) diagnostic system has been installed on EAST. Fast ion features can be inferred from the Doppler-shifted spectrum of Balmer-alpha light from energetic hydrogenic atoms. This paper focuses on the validation of FIDA measurements performed using MHD-quiescent discharges in the 2015 campaign. Two codes have been applied to calculate the Dα spectrum: one is a Monte Carlo code, the Fortran 90 version of FIDASIM, and the other is an analytical code, Simulation of Spectra (SOS). The predicted SOS fast-ion spectrum agrees well with the measurement; however, the level of the fast-ion part from FIDASIM is lower. The discrepancy is possibly due to the difference between the FIDASIM and SOS velocity distribution functions. The details are presented in the paper, primarily addressing comparisons of predicted and observed spectrum shapes and amplitudes.

  1. Verification of Compton scattering spectrum of a 662 keV photon beam scattered on a cylindrical steel target using MCNP5 code

    International Nuclear Information System (INIS)

    Thanh, Tran Thien; Nguyen, Vo Hoang; Chuong, Huynh Dinh; Tran, Le Bao; Tam, Hoang Duc; Binh, Nguyen Thi; Tao, Chau Van

    2015-01-01

    This article focuses on the possible application of a 137Cs low-radioactive source (5 mCi) and a NaI(Tl) detector for measuring the saturation thickness of solid cylindrical steel targets. In order to increase the reliability of the obtained experimental results and to verify the detector response function of the Compton scattering spectrum, a simulation using the Monte Carlo N-particle (MCNP5) code is performed. The obtained results are in good agreement with the response functions of the simulated and experimental scattering spectra. On the basis of such spectra, the saturation depth of a steel cylinder is determined by experiment and simulation to be about 27 mm, using gamma energy of 662 keV (137Cs) at a scattering angle of 120°. This study aims at measuring the diameter of solid cylindrical objects by the gamma-scattering technique. - Highlights: • This study aims at a possible application of a 137Cs low-radioactive source (5 mCi) and a NaI(Tl) detector for measuring the saturation thickness of solid cylindrical steel targets by the gamma-scattering technique. • The Monte Carlo N-particle (MCNP5) code is used to verify the detector response function of the Compton scattering spectrum. • The results show a good agreement in the response functions of the experimental and simulated scattering spectra. • The saturation depth of a steel cylinder is determined by experiment and simulation to be about 27 mm, using gamma energy of 662 keV (137Cs) at a scattering angle of 120°.

  2. A novel construction method of QC-LDPC codes based on CRT for optical communications

    Science.gov (United States)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-05-01

    A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10^-7, the net coding gain (NCG) of the regular QC-LDPC(4851, 4546) code is respectively 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32640, 30592) code in ITU-T G.975.1, the QC-LDPC(3664, 3436) code constructed by the improved combining construction method based on CRT, and the irregular QC-LDPC(3843, 3603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all these five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4851, 4546) code constructed by the proposed construction method has excellent error-correction performance and can be more suitable for optical transmission systems.

  3. High-frequency combination coding-based steady-state visual evoked potential for brain computer interface

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Feng; Zhang, Xin; Xie, Jun; Li, Yeping; Han, Chengcheng; Lili, Li; Wang, Jing [School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); Xu, Guang-Hua [School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); State Key Laboratory for Manufacturing Systems Engineering, Xi’an Jiaotong University, Xi’an 710054 (China)

    2015-03-10

    This study presents a new steady-state visual evoked potential (SSVEP) paradigm for brain-computer interface (BCI) systems. The goal is to increase the number of targets using fewer high stimulation frequencies, while diminishing the subject's fatigue and reducing the risk of photosensitive epileptic seizures. The new paradigm is High-Frequency Combination Coding-Based High-Frequency Steady-State Visual Evoked Potential (HFCC-SSVEP). First, we studied the high-frequency (beyond 25 Hz) SSVEP response, with the paradigm presented on an LED; the SNR (signal-to-noise ratio) of the response beyond 40 Hz is very low and cannot be distinguished by traditional analysis methods. Second, we investigated the HFCC-SSVEP response (beyond 25 Hz) for three frequencies (25 Hz, 33.33 Hz and 40 Hz); HFCC-SSVEP produces n^n targets with n high stimulation frequencies through frequency combination coding. Further, an improved Hilbert-Huang transform (IHHT)-based variable-frequency EEG feature extraction method and a local spectrum extreme target identification algorithm are adopted to extract the time-frequency features of the proposed HFCC-SSVEP response. Linear prediction and fixed sifting (iterating) 10 times are used to overcome the shortcomings of the end effect and the stopping criterion, and generalized zero-crossing (GZC) is used to compute the instantaneous frequency of the SSVEP response signals. The improved HHT-based feature extraction method for the proposed SSVEP paradigm increases recognition efficiency, so as to improve the information transfer rate (ITR) and the stability of the BCI system. Moreover, SSVEPs evoked by high-frequency stimuli (beyond 25 Hz) minimally fatigue the subject and prevent safety hazards linked to photo-induced epileptic seizures, ensuring that the system is both efficient and safe. This study tests three subjects in order to verify the feasibility of the proposed method.

  4. High-frequency combination coding-based steady-state visual evoked potential for brain computer interface

    International Nuclear Information System (INIS)

    Zhang, Feng; Zhang, Xin; Xie, Jun; Li, Yeping; Han, Chengcheng; Lili, Li; Wang, Jing; Xu, Guang-Hua

    2015-01-01

    This study presents a new steady-state visual evoked potential (SSVEP) paradigm for brain-computer interface (BCI) systems. The goal is to increase the number of targets using fewer high stimulation frequencies, while diminishing the subject's fatigue and reducing the risk of photosensitive epileptic seizures. The new paradigm is High-Frequency Combination Coding-Based High-Frequency Steady-State Visual Evoked Potential (HFCC-SSVEP). First, we studied the high-frequency (beyond 25 Hz) SSVEP response, with the paradigm presented on an LED; the SNR (signal-to-noise ratio) of the response beyond 40 Hz is very low and cannot be distinguished by traditional analysis methods. Second, we investigated the HFCC-SSVEP response (beyond 25 Hz) for three frequencies (25 Hz, 33.33 Hz and 40 Hz); HFCC-SSVEP produces n^n targets with n high stimulation frequencies through frequency combination coding. Further, an improved Hilbert-Huang transform (IHHT)-based variable-frequency EEG feature extraction method and a local spectrum extreme target identification algorithm are adopted to extract the time-frequency features of the proposed HFCC-SSVEP response. Linear prediction and fixed sifting (iterating) 10 times are used to overcome the shortcomings of the end effect and the stopping criterion, and generalized zero-crossing (GZC) is used to compute the instantaneous frequency of the SSVEP response signals. The improved HHT-based feature extraction method for the proposed SSVEP paradigm increases recognition efficiency, so as to improve the information transfer rate (ITR) and the stability of the BCI system. Moreover, SSVEPs evoked by high-frequency stimuli (beyond 25 Hz) minimally fatigue the subject and prevent safety hazards linked to photo-induced epileptic seizures, ensuring that the system is both efficient and safe. This study tests three subjects in order to verify the feasibility of the proposed method.

  5. Burnup calculation code system COMRAD96

    International Nuclear Information System (INIS)

    Suyama, Kenya; Masukawa, Fumihiro; Ido, Masaru; Enomoto, Masaki; Takyu, Shuiti; Hara, Toshiharu.

    1997-06-01

    COMRAD was one of the burnup code systems developed by JAERI, and COMRAD96 is a version of COMRAD ported to an engineering workstation. It is divided into several functional modules: 'Cross Section Treatment', 'Generation and Depletion Calculation', and 'Post Process'. It enables us to analyze a burnup problem considering the change of the neutron spectrum using UNITBURN, and it can also display the γ spectrum on a terminal. This report is the general description and user's manual of COMRAD96. (author)

  6. The method in γ spectrum analysis with artificial neural network based on MATLAB

    International Nuclear Information System (INIS)

    Bai Lixin; Zhang Yiyun; Xu Jiayun; Wu Liping

    2003-01-01

    Analyzing the γ spectrum with an artificial neural network has the advantages of using the information of the whole spectrum and of high analysis precision. A convenient realization based on MATLAB is presented in this

  7. Numerical Simulations of Spread Characteristics of Toxic Cyanide in the Danjiangkou Reservoir in China under the Effects of Dam Cooperation

    Directory of Open Access Journals (Sweden)

    Libin Chen

    2014-01-01

    Full Text Available Many accidents releasing toxic pollutants into surface water happen each year in the world. It is believed that dam cooperation can affect the flow field in a reservoir and can therefore be applied to avoid and reduce the spread speed of toxic pollutants toward a drinking water intake. However, few studies have investigated the effects of dam cooperation on the spread characteristics of toxic pollutants in a reservoir, especially a source reservoir for water diversion with more than one dam. The Danjiangkou Reservoir is the source reservoir of China's South-to-North Water Diversion Middle Route Project. Human activities are intensive within this reservoir basin, and a cyanide-releasing accident once happened in its upstream inflow. In order to simulate the spread characteristics of cyanide in the reservoir under dam cooperation, a three-dimensional water quality model based on the Environmental Fluid Dynamics Code (EFDC) has been built and put into practice. The results indicated that the cooperation of the two dams of the Danjiangkou Reservoir could be applied to avoid and reduce the spread speed of toxic cyanide in the reservoir toward the water intake for water diversion.

  8. Bursty communication patterns facilitate spreading in a threshold-based epidemic dynamics.

    Science.gov (United States)

    Takaguchi, Taro; Masuda, Naoki; Holme, Petter

    2013-01-01

    Records of social interactions provide us with new sources of data for understanding how interaction patterns affect collective dynamics. Such human activity patterns are often bursty, i.e., they consist of short periods of intense activity followed by long periods of silence. This burstiness has been shown to affect spreading phenomena; it accelerates epidemic spreading in some cases and slows it down in other cases. We investigate a model of history-dependent contagion. In our model, repeated interactions between susceptible and infected individuals in a short period of time are needed for a susceptible individual to contract infection. We carry out numerical simulations on real temporal network data to find that bursty activity patterns facilitate epidemic spreading in our model.
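
    A minimal sketch of this history-dependent contagion rule is given below: a susceptible node becomes infected once it has had at least theta contacts with infected nodes within a trailing time window delta, so bursts of closely spaced contacts spread the infection while widely spaced contacts do not. Parameter names and the toy contact list are illustrative, not the paper's data.

```python
# Minimal sketch of the history-dependent contagion: a susceptible node is infected once it
# has had at least `theta` contacts with infected nodes within a trailing window `delta`.
# Parameter names and the toy contact list are illustrative.
from collections import defaultdict, deque

def threshold_contagion(contacts, seed_node, theta=2, delta=5.0):
    """contacts: iterable of (time, u, v) tuples sorted by time."""
    infected = {seed_node}
    recent = defaultdict(deque)              # node -> times of its recent contacts with infected nodes
    for t, u, v in contacts:
        for s, i in ((u, v), (v, u)):        # look at the contact from both sides
            if i in infected and s not in infected:
                q = recent[s]
                q.append(t)
                while q and q[0] < t - delta:   # drop contacts that fell out of the window
                    q.popleft()
                if len(q) >= theta:
                    infected.add(s)
    return infected

# Bursty contacts (a-b, b-c) spread the infection; the widely spaced c-d contacts do not.
contacts = [(0.0, "a", "b"), (1.0, "a", "b"), (2.0, "b", "c"),
            (2.5, "b", "c"), (9.0, "c", "d"), (20.0, "c", "d")]
print("infected set:", threshold_contagion(contacts, seed_node="a"))
```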

  9. Bursty communication patterns facilitate spreading in a threshold-based epidemic dynamics.

    Directory of Open Access Journals (Sweden)

    Taro Takaguchi

    Full Text Available Records of social interactions provide us with new sources of data for understanding how interaction patterns affect collective dynamics. Such human activity patterns are often bursty, i.e., they consist of short periods of intense activity followed by long periods of silence. This burstiness has been shown to affect spreading phenomena; it accelerates epidemic spreading in some cases and slows it down in other cases. We investigate a model of history-dependent contagion. In our model, repeated interactions between susceptible and infected individuals in a short period of time are needed for a susceptible individual to contract infection. We carry out numerical simulations on real temporal network data to find that bursty activity patterns facilitate epidemic spreading in our model.

  10. Field-based tests of geochemical modeling codes: New Zealand hydrothermal systems

    International Nuclear Information System (INIS)

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1993-12-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions

  11. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal

    2010-09-01

    The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit an amount of required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique that is based on Huffman coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique to improve the final compression ratio by taking advantage of both previous techniques. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of re-encoding unused bits (we call them re-encodable bits) in the instruction format for a specific application to improve the compression ratio. Re-encoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios, in comparison to the first technique, to 46% and 45% for ARM and MIPS, respectively (including all overhead that incurs). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and we have applied each technique to two major embedded processor architectures.
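
    The Huffman step of such a scheme can be illustrated with a toy example: instruction words are split into fixed-size patterns, the patterns are Huffman-coded, and the resulting size is compared with the original. The pattern size, the instruction stream and the omission of the decoding-table overhead are simplifications; this is not the authors' hardware decoder.

```python
# Toy illustration of the Huffman step: split instruction words into fixed-size patterns,
# Huffman-code the patterns and compare sizes. Pattern size, the instruction stream and the
# neglected decoding-table overhead are simplifications, not the authors' hardware scheme.
import heapq
import itertools
from collections import Counter

def huffman_code_lengths(freqs):
    """Return {symbol: Huffman code length in bits} for a frequency table."""
    tie = itertools.count()
    heap = [(f, next(tie), [s]) for s, f in freqs.items()]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:                    # every merge adds one bit to the merged symbols
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, next(tie), s1 + s2))
    return lengths

def compressed_bits(instructions, pattern_bits=8):
    """Total Huffman-coded size of the instruction stream split into patterns."""
    mask = (1 << pattern_bits) - 1
    patterns = [(word >> shift) & mask
                for word in instructions for shift in range(0, 32, pattern_bits)]
    lengths = huffman_code_lengths(Counter(patterns))
    return sum(lengths[p] for p in patterns)

instructions = [0xE3A00000, 0xE3A01001, 0xE0800001, 0xE1A0F00E] * 64   # hypothetical ARM-like words
ratio = compressed_bits(instructions) / (32 * len(instructions))
print(f"compression ratio: {ratio:.2f} (decoding table not counted)")
```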

  12. Comparison of Multipole Stimulus Configurations With Respect to Loudness and Spread of Excitation.

    Science.gov (United States)

    Vellinga, Dirk; Briaire, Jeroen Johannes; van Meenen, David Michael Paul; Frijns, Johannes Hubertus Maria

    Current spread is a substantial limitation of speech coding strategies in cochlear implants. Multipoles have the potential to reduce current spread and thus generate more discriminable pitch percepts; the difficulty with multipoles is reaching sufficient loudness. The primary goal was to compare the loudness characteristics and spread of excitation (SOE) of three types of phased array stimulation, a novel multipole, with three more conventional configurations. Fifteen postlingually deafened cochlear implant users performed psychophysical experiments addressing SOE, loudness scaling, loudness threshold, loudness balancing, and loudness discrimination. Partial tripolar stimulation (pTP, σ = 0.75), TP, phased array with 16 electrodes (PA16), and restricted phased array with five (PA5) and three (PA3) electrodes were compared with a reference monopolar stimulus. Despite a similar loudness growth function, there were considerable differences in current expenditure. The most energy-efficient multipole was the pTP, followed by PA16 and PA5/PA3; TP clearly stood out as the least efficient one. Although the electric dynamic range was larger with multipolar configurations, the number of discriminable steps in loudness was not significantly increased. The SOE experiment could not demonstrate any difference between the stimulation strategies. The loudness characteristics of all five multipolar configurations tested are similar. Because of their higher energy efficiency, pTP and PA16 are the most favorable candidates for future testing in clinical speech coding strategies.

  13. Wavelet based multicarrier code division multiple access ...

    African Journals Online (AJOL)

    This paper presents the study on Wavelet transform based Multicarrier Code Division Multiple Access (MC-CDMA) system for a downlink wireless channel. The performance of the system is studied for Additive White Gaussian Noise Channel (AWGN) and slowly varying multipath channels. The bit error rate (BER) versus ...

  14. a Context-Aware Tourism Recommender System Based on a Spreading Activation Method

    Science.gov (United States)

    Bahramian, Z.; Abbaspour, R. Ali; Claramunt, C.

    2017-09-01

    Users planning a trip to a given destination often search for the most appropriate points of interest, which is a non-straightforward task because the range of information available is very large and not very well structured. The research presented in this paper introduces a context-aware tourism recommender system that overcomes the information overload problem by providing personalized recommendations based on the user's preferences. It also incorporates contextual information to improve the recommendation process. As previous context-aware tourism recommender systems suffer from a lack of formal definitions to represent contextual information and the user's preferences, the proposed system is enhanced using an ontology approach. We also apply a spreading activation technique to contextualize user preferences and learn the user profile dynamically according to the user's feedback. The proposed method assigns greater weight in the spreading process to nodes whose preference values are assigned directly by the user. The results show the overall performance of the proposed context-aware tourism recommender system through an experimental application to the city of Tehran.
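
    A minimal sketch of the spreading activation idea over a small concept graph is shown below: activation starts at the concepts the user has rated directly (which therefore carry the most influence) and decays as it propagates to related points of interest. The graph, node names, weights and decay factor are illustrative assumptions, not the ontology of the paper.

```python
# Minimal spreading activation sketch over a small concept graph: activation starts at the
# concepts the user rated directly and decays as it propagates. Node names, edge weights and
# the decay factor are illustrative assumptions, not the ontology of the cited system.
def spreading_activation(graph, seeds, decay=0.6, firing_threshold=0.1, max_iters=5):
    """graph: {node: [(neighbor, edge_weight), ...]}; seeds: {node: initial activation}."""
    activation = dict(seeds)
    fired = set()
    for _ in range(max_iters):
        to_fire = [n for n, a in activation.items() if a >= firing_threshold and n not in fired]
        if not to_fire:
            break
        for n in to_fire:
            fired.add(n)
            for nbr, w in graph.get(n, []):
                activation[nbr] = activation.get(nbr, 0.0) + activation[n] * w * decay
    return sorted(activation.items(), key=lambda kv: -kv[1])

ontology = {                                   # hypothetical slice of a tourism ontology
    "user": [("museums", 0.9), ("outdoor", 0.4)],
    "museums": [("Golestan Palace", 0.8), ("National Museum of Iran", 0.7)],
    "outdoor": [("Darband trail", 0.9)],
}
print(spreading_activation(ontology, seeds={"user": 1.0}))
```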

  15. A CONTEXT-AWARE TOURISM RECOMMENDER SYSTEM BASED ON A SPREADING ACTIVATION METHOD

    Directory of Open Access Journals (Sweden)

    Z. Bahramian

    2017-09-01

    Full Text Available Users planning a trip to a given destination often search for the most appropriate points of interest, which is a non-straightforward task because the range of information available is very large and not very well structured. The research presented in this paper introduces a context-aware tourism recommender system that overcomes the information overload problem by providing personalized recommendations based on the user's preferences. It also incorporates contextual information to improve the recommendation process. As previous context-aware tourism recommender systems suffer from a lack of formal definitions to represent contextual information and the user's preferences, the proposed system is enhanced using an ontology approach. We also apply a spreading activation technique to contextualize user preferences and learn the user profile dynamically according to the user's feedback. The proposed method assigns greater weight in the spreading process to nodes whose preference values are assigned directly by the user. The results show the overall performance of the proposed context-aware tourism recommender system through an experimental application to the city of Tehran.

  16. Experimental investigation of particulate debris spreading in a pool

    Energy Technology Data Exchange (ETDEWEB)

    Konovalenko, A., E-mail: kono@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology (KTH) , Roslagstullsbacken 21, Stockholm 106 91 (Sweden); Basso, S., E-mail: simoneb@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology (KTH) , Roslagstullsbacken 21, Stockholm 106 91 (Sweden); Kudinov, P., E-mail: pkudinov@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology (KTH) , Roslagstullsbacken 21, Stockholm 106 91 (Sweden); Yakush, S.E., E-mail: yakush@ipmnet.ru [Institute for Problems in Mechanics of the Russian Academy of Sciences, Ave. Vernadskogo 101 Bldg 1, Moscow 119526 (Russian Federation)

    2016-02-15

    Termination of severe accident progression by core debris cooling in a deep pool of water under the reactor vessel is considered in several designs of light water reactors. However, the success of this accident mitigation strategy is contingent upon the effectiveness of heat removal by natural circulation from the debris bed. It is assumed that a porous bed will be formed in the pool in the process of core melt fragmentation and quenching. Debris bed coolability depends on its properties and system conditions. The properties of the bed, including its geometry, are the outcomes of the debris bed formation process. Spreading of the debris particles in the pool by two-phase turbulent flows induced by the heat generated in the bed can affect the shape of the bed and thus influence its coolability. The goal of this work is to provide experimental data on the spreading of solid particles in the pool by large-scale two-phase flow. The aim is to provide data necessary for understanding separate effects and for the development and validation of models and codes. Validated codes can then be used for the prediction of debris bed formation under prototypic severe accident conditions. In the PDS-P (Particulate Debris Spreading in the Pool) experiments, air injection at the bottom of the test section is employed as a means to create large-scale flow in the pool in isothermal conditions. The test section is a rectangular tank with a 2D slice geometry; it has a fixed width (72 mm), an adjustable length (up to 1.5 m) and allows water filling to a depth of up to 1 m. The variable pool length and depth allow studying two-phase circulating flows of different characteristic sizes and patterns. The average void fraction in the pool is determined by video recording and subsequent image processing. Particles are supplied from the top of the facility above the water surface. Results of several series of PDS-P experiments are reported in this paper. The influence of the gas flow rate, pool dimensions, particle density

  17. Expected Forward Progress and Throughput of Multi-Hop Frequency-Hopped Spread-Spectrum Networks

    National Research Council Canada - National Science Library

    Gluck, Jeffrey W; Geraniotis, Evaggelos

    1987-01-01

    ...). The optimal average number of neighbors and transmission radius are derived for these cases when Reed-Solomon forward-error-control coding with minimum distance decoding or binary convolutional...

  18. FACTORS INFLUENCING YIELD SPREADS OF THE MALAYSIAN BONDS

    Directory of Open Access Journals (Sweden)

    Norliza Ahmad

    2009-01-01

    Full Text Available The Malaysian bond market is developing rapidly, but not much is understood about the macroeconomic factors that could influence the yield spreads of Ringgit Malaysia denominated bonds. Based on a multifactor model, this paper examines the impact of four macroeconomic factors, namely the Kuala Lumpur Composite Index (KLCI), the Industrial Production Index (IPI), the Consumer Price Index (CPI) and interest rates (IR), on the bond yield spreads of Malaysian Government Securities (MGS) and corporate bonds (CBs) for the period from January 2001 to December 2008. The findings support the expected hypotheses that CPI and IR are the major drivers of changes in MGS yield spreads, whereas IPI has a weak influence and KLCI no influence on MGS yield spreads. Whilst IR, CPI and IPI have a significant influence on the yield spreads of CB1, CB2 and CB3, KLCI has a significant influence only on the CB1 yield spread and not on the CB2 and CB3 yield spreads.
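
    The multifactor model referred to above is essentially a linear regression of the yield spread on the four macroeconomic factors; a minimal sketch on synthetic data is given below. The coefficients used to generate the data are arbitrary and are not estimates from the study.

```python
# Sketch of the multifactor model as an OLS regression of the yield spread on the four
# macroeconomic factors, using synthetic monthly data; the generating coefficients are
# arbitrary and are not estimates from the cited study.
import numpy as np

rng = np.random.default_rng(7)
n = 96                                                    # monthly observations, 2001-2008
klci, ipi, cpi, ir = (rng.standard_normal(n) for _ in range(4))
spread = 0.05 * klci + 0.10 * ipi + 0.60 * cpi + 0.80 * ir + 0.2 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), klci, ipi, cpi, ir])     # add an intercept column
beta, *_ = np.linalg.lstsq(X, spread, rcond=None)
for name, b in zip(["const", "KLCI", "IPI", "CPI", "IR"], beta):
    print(f"{name:>5}: {b:+.3f}")
```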

  19. Development Of The Computer Code For Comparative Neutron Activation Analysis

    International Nuclear Information System (INIS)

    Purwadi, Mohammad Dhandhang

    2001-01-01

    Qualitative and quantitative chemical analysis with Neutron Activation Analysis (NAA) is an important utilization of a nuclear research reactor, and its application and development should be accelerated and promoted to raise the utilization of the reactor. The application of the comparative NAA technique in the GA Siwabessy Multi Purpose Reactor (RSG-GAS) needs special software (not yet commercially available) for analyzing the spectra of multiple elements in a single analysis. Previously, the analysis was carried out using single-spectrum analyzer software and each result was compared manually; this method significantly degrades the quality of the analysis. To solve the problem, a computer code was designed and developed for comparative NAA. Spectrum analysis in the code is carried out using a non-linear fitting method. Before the spectrum is analyzed, it is passed through a numerical filter, which improves the signal-to-noise ratio for the deconvolution operation. The software was developed using the G language and named PASAN-K. The testing results of the developed software were benchmarked against the IAEA spectrum and operated well, with less than 10% deviation.

  20. Criticality qualification of a new Monte Carlo code for reactor core analysis

    International Nuclear Information System (INIS)

    Catsaros, N.; Gaveau, B.; Jaekel, M.; Maillard, J.; Maurel, G.; Savva, P.; Silva, J.; Varvayanni, M.; Zisis, Th.

    2009-01-01

    In order to accurately simulate Accelerator Driven Systems (ADS), the utilization of at least two computational tools is necessary (the thermal-hydraulic problem is not considered in the frame of this work), namely: (a) a High Energy Physics (HEP) code system dealing with the 'Accelerator part' of the installation, i.e. the computation of the spectrum, intensity and spatial distribution of the neutron source created by (p, n) reactions of a proton beam on a target, and (b) a neutronics code system handling the 'Reactor part' of the installation, i.e. criticality calculations, neutron transport, fuel burn-up and fission product evolution. In the present work, a single computational tool, aiming to analyze an ADS in its entirety and also able to perform core analysis for a conventional fission reactor, is proposed. The code is based on the well-qualified HEP code GEANT (version 3), transformed to perform criticality calculations. The performance of the code is tested against two qualified neutronics code systems, the diffusion/transport SCALE-CITATION code system and the Monte Carlo TRIPOLI code, in the case of a research reactor core analysis. A satisfactory agreement was exhibited by the three codes.

  1. Algorithms and computer codes for atomic and molecular quantum scattering theory

    International Nuclear Information System (INIS)

    Thomas, L.

    1979-01-01

    This workshop has succeeded in bringing up 11 different coupled equation codes on the NRCC computer, testing them against a set of 24 different test problems and making them available to the user community. These codes span a wide variety of methodologies, and factors of up to 300 were observed in the spread of computer times on specific problems. A very effective method was devised for examining the performance of the individual codes in the different regions of the integration range. Many of the strengths and weaknesses of the codes have been identified. Based on these observations, a hybrid code has been developed which is significantly superior to any single code tested. Thus, not only have the original goals been fully met, the workshop has resulted directly in an advancement of the field. All of the computer programs except VIVS are available upon request from the NRCC. Since an improved version of VIVS is contained in the hybrid program, VIVAS, it was not made available for distribution. The individual program LOGD is, however, available. In addition, programs which compute the potential energy matrices of the test problems are also available. The software library names for Tests 1, 2 and 4 are HEH2, LICO, and EN2, respectively

  2. Lossless Image Compression Based on Multiple-Tables Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Rung-Ching Chen

    2009-01-01

    Full Text Available This paper presents a lossless image compression method based on multiple-tables arithmetic coding (MTAC) to encode a gray-level image f. First, the MTAC method employs a median edge detector (MED) to reduce the entropy rate of f, since the gray levels of two adjacent pixels in an image are usually similar. A base-switching transformation approach is then used to reduce the spatial redundancy of the image, as the gray levels of some pixels in an image are more common than those of others. Finally, the arithmetic encoding method is applied to reduce the coding redundancy of the image. To promote high performance of the arithmetic encoding method, the MTAC method first classifies the data and then encodes each cluster of data using a distinct code table. The experimental results show that, in most cases, the MTAC method provides higher storage efficiency than lossless JPEG2000 does.
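
    The median edge detector (MED) mentioned above is the classic gradient-adjusted predictor also used in LOCO-I/JPEG-LS; a small sketch of the prediction step and its effect on entropy is given below, run on a synthetic image rather than the paper's test set.

```python
# Sketch of the median edge detector (MED) prediction step (the classic LOCO-I/JPEG-LS
# predictor) and its effect on entropy, run on a small synthetic image rather than the
# paper's test set.
import numpy as np

def med_residuals(img):
    """Return the MED prediction residuals of a gray-level image."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            a = img[r, c - 1] if c > 0 else 0                      # left neighbor
            b = img[r - 1, c] if r > 0 else 0                      # upper neighbor
            d = img[r - 1, c - 1] if r > 0 and c > 0 else 0        # upper-left neighbor
            if d >= max(a, b):
                pred[r, c] = min(a, b)
            elif d <= min(a, b):
                pred[r, c] = max(a, b)
            else:
                pred[r, c] = a + b - d
    return img - pred

def entropy_bits(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

img = np.add.outer(np.arange(64), np.arange(64)) % 256             # smooth synthetic ramp
print(f"entropy of pixels    : {entropy_bits(img):.2f} bits/pixel")
print(f"entropy of residuals : {entropy_bits(med_residuals(img)):.2f} bits/pixel")
```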

  3. Comparison of Americium-Beryllium neutron spectrum obtained using activation foil detectors and NE-213 spectrometer

    International Nuclear Information System (INIS)

    Sunny, Sunil; Subbaiah, K.V.; Selvakumaran, T.S.

    1999-01-01

    The neutron spectrum of an Americium-Beryllium (α,n) source is measured with two different spectrometers, namely activation foils (foil detectors) and an NE-213 organic scintillator. The activity induced in the foils is measured with a 4π-β-γ sodium iodide detector by integrating the counts under the photopeak, and the saturation activity is found by correcting for the time elapsed before counting. The calculated activity data are fed into the unfolding code SAND-II to obtain the neutron spectrum. In the case of the organic scintillator, the pulse height spectrum is obtained using an MCA and is processed with the unfolding code DUST in order to get the neutron spectrum. The Americium-Beryllium (α,n) neutron spectra thus obtained by the two different methods are compared. It is inferred that the NE-213 scintillator spectrum is in excellent agreement with these values beyond 1 MeV. The neutron spectrum obtained by activation foils depends on the initial guess spectrum and is found to be in reasonable agreement with the NE-213 spectrum. (author)

  4. Extending CANTUP code analysis to probabilistic evaluations

    International Nuclear Information System (INIS)

    Florea, S.

    2001-01-01

    Structural analysis with numerical methods based on the finite element method plays a central role at present in evaluations and predictions for structural systems which require safe and reliable operation in aggressive environmental conditions. This is also the case for the CANDU-600 fuel channel where, besides the corrosive and thermal aggression upon the Zr97.5Nb2.5 pressure tubes, prolonged irradiation has marked consequences on the evolution of the material properties. This results in an unavoidable spread of the material properties in time, affected by high uncertainties. Consequently, deterministic evaluations with computation codes based on the finite element method are supplemented by statistical and probabilistic methods for evaluating the response of structural components. This paper reports the work on extending the thermo-mechanical evaluation of the fuel channel components into the frame of probabilistic structural mechanics, based on statistical methods and developed upon deterministic CANTUP code analyses. The CANTUP code was ported from the LAHEY 77 platform onto the Microsoft Developer Studio - Fortran Power Station 4.0 platform. To test the statistical evaluation of the creep behaviour of the pressure tube, the value of the longitudinal elasticity (Young's) modulus was used as a random variable, with a normal distribution around the value used in the deterministic analyses. The influence of this random quantity upon the hog and the effective stress developed in the pressure tube was studied for two time values, specific to primary and secondary creep. The results obtained after a five-year creep, corresponding to secondary creep, are presented

  5. Spreading Depression, Spreading Depolarizations, and the Cerebral Vasculature

    DEFF Research Database (Denmark)

    Ayata, Cenk; Lauritzen, Martin

    2015-01-01

    Spreading depression (SD) is a transient wave of near-complete neuronal and glial depolarization associated with massive transmembrane ionic and water shifts. It is evolutionarily conserved in the central nervous systems of a wide variety of species from locust to human. The depolarization spreads...

  6. Experimental determination of spectral ratios and of neutrons energy spectrum in the fuel of the IPEN/MB-01 nuclear reactor

    International Nuclear Information System (INIS)

    Nunes, Beatriz Guimaraes

    2012-01-01

    This study aims to determine the spectral ratios and the neutron energy spectrum inside the fuel of the IPEN/MB-01 nuclear reactor. These parameters are of great importance for accurately determining physical parameters of nuclear reactors, such as reaction rates and fuel lifetime, and also safety parameters such as reactivity. For the experiment, activation detectors in the form of thin metal foils were introduced into a collapsible fuel rod. The rod was then placed in the central position of the core, which has a standard rectangular configuration of 26 x 28 fuel rods. Activation detectors of different elements, such as Au-197, U-238, Sc-45, Ni-58, Mg-24, Ti-47 and In-115, were used to cover a large range of the neutron energy spectrum. After the irradiation, the activation detectors were submitted to gamma spectrometry using a counting system with a high-purity germanium detector to obtain the reaction rates (saturation activity) per target nucleus. The spectral ratios were compared with calculated values obtained by the Monte Carlo method using the MCNP-4C code. The neutron energy spectrum inside the fuel rod was obtained using the SANDBP code with an input spectrum provided by the MCNP-4C code, based on the saturation activity per target nucleus values of the irradiated activation detectors. (author)

  7. A method of loss free compression for the data of nuclear spectrum

    International Nuclear Information System (INIS)

    Sun Mingshan; Wu Shiying; Chen Yantao; Xu Zurun

    2000-01-01

    A new method of lossless compression based on the features of nuclear spectrum data is provided, from which a practicable algorithm is successfully derived. A compression ratio varying from 0.50 to 0.25 is obtained, and the distribution of the processed data becomes even more suitable for reprocessing by another compression method, such as Huffman coding, to further improve the compression ratio.
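
    One plausible way to exploit the smoothness of spectrum data in this spirit (a minimal sketch, not the authors' exact algorithm) is successive-channel differencing, which concentrates the value distribution near zero and makes a follow-up entropy coder such as Huffman coding more effective:

```python
import numpy as np

def delta_encode(spectrum):
    """First-order differences of channel counts; small residuals dominate for smooth spectra."""
    spectrum = np.asarray(spectrum, dtype=np.int64)
    return np.concatenate(([spectrum[0]], np.diff(spectrum)))

def delta_decode(deltas):
    """Exact inverse of delta_encode, so the scheme is lossless."""
    return np.cumsum(deltas)

counts = np.array([1023, 1031, 1040, 1038, 1025, 1010, 998, 1002])  # hypothetical channel counts
encoded = delta_encode(counts)
assert np.array_equal(delta_decode(encoded), counts)
print(encoded)   # mostly small residuals, well suited to a Huffman coder
```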

  8. APPLE-3: improvement of APPLE for neutron and gamma-ray flux, spectrum and reaction rate plotting code, and of its code manual

    International Nuclear Information System (INIS)

    Kawasaki, Hiromitu; Maki, Koichi; Seki, Yasushi.

    1991-03-01

    The code APPLE was produced in 1976 for calculating and plotting the tritium breeding ratio and tritium production rate distributions. That code was improved as 'APPLE-2' in 1982 to calculate and plot not only the tritium breeding ratio but also distributions of neutron and gamma-ray fluxes, their spectra, nuclear heating rates and other reaction rates, and dose rate distributions during operation and after shutdown. The code APPLE-2 can calculate and plot these nuclear properties derived from neutron and gamma-ray fluxes computed by ANISN (one-dimensional transport code), DOT3.5 (two-dimensional transport code) and MORSE (three-dimensional Monte Carlo code). We have revised the code APPLE-2 as 'APPLE-3' by adding many functions to it in accordance with users' requirements arising from recent progress in fusion reactor nuclear design. In the course of these minor modifications of APPLE-2, a number of inconsistencies were found between the code manual and the input data of the code. In the present report, the new functions added to APPLE-2 and the improved users' manual are explained. (author)

  9. Forecasting oil price movements with crack spread futures

    International Nuclear Information System (INIS)

    Murat, Atilim; Tokat, Ekin

    2009-01-01

    In oil markets, the crack spread refers to the crude-product price relationship. Refiners are major participants in oil markets and are primarily exposed to the crack spread; in other words, refiner activity is substantially driven by the objective of protecting the crack spread. Moreover, oil consumers are active participants in the oil hedging market and are frequently exposed to the crack spread. From another perspective, hedge funds heavily use the crack spread to speculate in oil markets. Given the high volume of crack spread futures trading in oil markets, the question we raise is whether crack spread futures can be a good predictor of oil price movements. We first investigated whether there is a causal relationship between the crack spread futures and the spot oil markets in a vector error correction framework. We found a causal impact of crack spread futures on the spot oil market in both the long and the short run after April 2003, where we detected a structural break in the model. To examine the forecasting performance, we use the random walk model (RWM) as a benchmark, and we also evaluate the forecasting power of crack spread futures against crude oil futures. The results showed that (a) both the crack spread futures and the crude oil futures outperformed the RWM; and (b) the crack spread futures are almost as good as the crude oil futures in predicting movements in spot oil markets. (author)
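
    As a concrete illustration of the crude-product relationship (hypothetical prices and the common 3:2:1 refinery ratio, not necessarily the contract specification used in the study), the crack spread is the value of the refined products minus the cost of crude for a fixed barrel ratio:

```python
def crack_spread_321(crude_usd_bbl, gasoline_usd_gal, heating_oil_usd_gal):
    """3:2:1 crack spread in USD per barrel of crude (42 gallons per barrel)."""
    gasoline_bbl = gasoline_usd_gal * 42.0
    heating_oil_bbl = heating_oil_usd_gal * 42.0
    # 3 barrels of crude -> 2 barrels of gasoline + 1 barrel of heating oil
    return (2.0 * gasoline_bbl + 1.0 * heating_oil_bbl - 3.0 * crude_usd_bbl) / 3.0

# Hypothetical quotes
print(round(crack_spread_321(crude_usd_bbl=70.0,
                             gasoline_usd_gal=2.10,
                             heating_oil_usd_gal=2.30), 2))   # -> 21.0 USD/bbl
```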

  10. Epidemic spreading in weighted networks: an edge-based mean-field solution.

    Science.gov (United States)

    Yang, Zimo; Zhou, Tao

    2012-05-01

    Weight distribution greatly impacts the epidemic spreading taking place on top of networks. This paper presents a study of a susceptible-infected-susceptible model on regular random networks with different kinds of weight distributions. Simulation results show that the more homogeneous weight distribution leads to higher epidemic prevalence, which, unfortunately, could not be captured by the traditional mean-field approximation. This paper gives an edge-based mean-field solution for general weight distribution, which can quantitatively reproduce the simulation results. This method could be applied to characterize the nonequilibrium steady states of dynamical processes on weighted networks.
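
    A minimal sketch of the kind of simulation described (hypothetical parameters and an assumed uniform weight distribution): a discrete-time SIS process on a random regular graph where each edge weight scales the per-contact infection probability.

```python
import random
import networkx as nx

def sis_prevalence(n=1000, degree=6, beta=0.05, mu=0.2, steps=500, seed=1):
    """Fraction of infected nodes after `steps` of a discrete-time SIS process."""
    rng = random.Random(seed)
    g = nx.random_regular_graph(degree, n, seed=seed)
    for u, v in g.edges():
        g[u][v]["w"] = rng.uniform(0.5, 1.5)      # heterogeneous edge weights (assumed distribution)
    infected = set(rng.sample(list(g.nodes()), n // 100))
    for _ in range(steps):
        new_infected = set()
        for u in infected:
            if rng.random() > mu:                 # node does not recover this step
                new_infected.add(u)
            for v in g.neighbors(u):              # attempt transmission along each incident edge
                if v not in infected and rng.random() < beta * g[u][v]["w"]:
                    new_infected.add(v)
        infected = new_infected
    return len(infected) / n

print(sis_prevalence())
```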

  11. A Chip-Level BSOR-Based Linear GSIC Multiuser Detector for Long-Code CDMA Systems

    Directory of Open Access Journals (Sweden)

    Benyoucef M

    2007-01-01

    Full Text Available We introduce a chip-level linear group-wise successive interference cancellation (GSIC) multiuser structure that is asymptotically equivalent to the block successive over-relaxation (BSOR) iteration, which is known to outperform the conventional block Gauss-Seidel iteration by an order of magnitude in terms of convergence speed. The main advantage of the proposed scheme is that it uses the spreading codes directly instead of the cross-correlation matrix and thus does not require the calculation of the cross-correlation matrix (a computation requiring 2NK² floating point operations (flops), where N is the processing gain and K is the number of users), which reduces the overall computational complexity significantly. Thus it is suitable for long-code CDMA systems such as IS-95 and UMTS, where the cross-correlation matrix changes every symbol. We study the convergence behavior of the proposed scheme using two approaches and prove that it converges to the decorrelator detector if the over-relaxation factor is in the interval ]0, 2[. Simulation results are in excellent agreement with theory.

  12. A Chip-Level BSOR-Based Linear GSIC Multiuser Detector for Long-Code CDMA Systems

    Directory of Open Access Journals (Sweden)

    M. Benyoucef

    2008-01-01

    Full Text Available We introduce a chip-level linear group-wise successive interference cancellation (GSIC) multiuser structure that is asymptotically equivalent to the block successive over-relaxation (BSOR) iteration, which is known to outperform the conventional block Gauss-Seidel iteration by an order of magnitude in terms of convergence speed. The main advantage of the proposed scheme is that it uses the spreading codes directly instead of the cross-correlation matrix and thus does not require the calculation of the cross-correlation matrix (a computation requiring 2NK² floating point operations (flops), where N is the processing gain and K is the number of users), which reduces the overall computational complexity significantly. Thus it is suitable for long-code CDMA systems such as IS-95 and UMTS, where the cross-correlation matrix changes every symbol. We study the convergence behavior of the proposed scheme using two approaches and prove that it converges to the decorrelator detector if the over-relaxation factor is in the interval ]0, 2[. Simulation results are in excellent agreement with theory.
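
    A minimal sketch (small hypothetical correlation matrix) of the successive over-relaxation iteration underlying the scheme, converging towards the decorrelating solution of R x = y for a relaxation factor inside ]0, 2[:

```python
import numpy as np

def sor_solve(R, y, omega=1.2, iters=50):
    """Successive over-relaxation for R x = y (here R plays the role of a cross-correlation matrix)."""
    n = len(y)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            sigma = R[i, :i] @ x[:i] + R[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (y[i] - sigma) / R[i, i]
    return x

# Hypothetical 3-user correlation matrix and matched-filter outputs
R = np.array([[1.00, 0.30, 0.20],
              [0.30, 1.00, 0.25],
              [0.20, 0.25, 1.00]])
y = np.array([1.1, -0.8, 0.9])
print(sor_solve(R, y))            # approaches the decorrelator output
print(np.linalg.solve(R, y))      # exact decorrelator output for comparison
```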

  13. Optimal Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Kroon, I. B.; Faber, Michael Havbro

    1994-01-01

    Calibration of partial safety factors is considered in general, including classes of structures where no code exists beforehand. The partial safety factors are determined such that the difference between the reliability for the different structures in the class considered and a target reliability level is minimized. Code calibration on a decision-theoretical basis is also considered and it is shown how target reliability indices can be calibrated. Results from code calibration for rubble mound breakwater designs are shown.

  14. Burnup calculation code system COMRAD96

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Masukawa, Fumihiro; Ido, Masaru; Enomoto, Masaki; Takyu, Shuiti; Hara, Toshiharu

    1997-06-01

    COMRAD was one of the burnup code systems developed by JAERI. COMRAD96 is a version of COMRAD transferred to an engineering workstation. It is divided into several functional modules: 'Cross Section Treatment', 'Generation and Depletion Calculation', and 'Post Process'. It enables us to analyze a burnup problem considering the change of the neutron spectrum using UNITBURN. It can also display the gamma spectrum on a terminal. This report is the general description and user's manual of COMRAD96. (author)

  15. Shape-based hand recognition approach using the morphological pattern spectrum

    Science.gov (United States)

    Ramirez-Cortes, Juan Manuel; Gomez-Gil, Pilar; Sanchez-Perez, Gabriel; Prieto-Castro, Cesar

    2009-01-01

    We propose the use of the morphological pattern spectrum, or pecstrum, as the basis of a biometric shape-based hand recognition system. The system receives an image of the right hand of a subject in an unconstrained pose, captured with a commercial flatbed scanner. Owing to the pecstrum's invariance to translation and rotation, the system does not require the use of pegs to fix the hand position, which simplifies the image acquisition process. This novel feature-extraction method is tested using a Euclidean distance classifier for identification and verification cases, obtaining 97% correct identification and an equal error rate (EER) of 0.0285 (2.85%) for the verification mode. The obtained results indicate that the pattern spectrum represents a good feature-extraction alternative for low- and medium-level hand-shape-based biometric applications.
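
    A minimal sketch of a morphological pattern spectrum (binary image and square structuring elements of increasing size; a simplified stand-in for the full pecstrum-based feature extractor used in the paper), computed as the area removed by successive openings:

```python
import numpy as np
from scipy.ndimage import binary_opening

def pattern_spectrum(binary_img, max_size=10):
    """Area lost between openings with structuring elements of increasing size."""
    areas = [int(binary_img.sum())]
    for k in range(1, max_size + 1):
        se = np.ones((2 * k + 1, 2 * k + 1), dtype=bool)
        areas.append(int(binary_opening(binary_img, structure=se).sum()))
    # component k = area removed when the structuring element grows from size k-1 to k
    return -np.diff(np.array(areas))

# Hypothetical binary silhouette: a square blob with a thin protrusion
img = np.zeros((64, 64), dtype=bool)
img[10:50, 10:50] = True
img[30:32, 50:60] = True
print(pattern_spectrum(img, max_size=5))   # the thin protrusion disappears at the smallest scale
```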

  16. Short-Block Protograph-Based LDPC Codes

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher

    2010-01-01

    Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve a low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.

  17. Trellis-coded CPM for satellite-based mobile communications

    Science.gov (United States)

    Abrishamkar, Farrokh; Biglieri, Ezio

    1988-01-01

    Digital transmission for satellite-based land mobile communications is discussed. To satisfy the power and bandwidth limitations imposed on such systems, a combination of trellis coding and continuous-phase modulated signals are considered. Some schemes based on this idea are presented, and their performance is analyzed by computer simulation. The results obtained show that a scheme based on directional detection and Viterbi decoding appears promising for practical applications.

  18. Determination of neutron energy spectrum at a pneumatic rabbit station of a typical swimming pool type material test research reactor

    International Nuclear Information System (INIS)

    Malkawi, S.R.; Ahmad, N.

    2002-01-01

    The method of multiple foil activation was used to measure the neutron energy spectrum experimentally at a rabbit station of Pakistan Research Reactor-1 (PARR-1), which is a typical swimming pool type material test research reactor. The computer codes MSITER and SANDBP were used to adjust the spectrum. The pre-information required by the adjustment codes was obtained by modelling the core and its surroundings in three dimensions using the one-dimensional transport theory code WIMS-D/4 and the multidimensional finite difference diffusion theory code CITATION. The input spectrum covariance information required by the MSITER code was also calculated from the CITATION output. A comparison between the calculated and adjusted spectra shows good agreement.

  19. Direct RNA-based detection and differentiation of CTX-M-type extended-spectrum β-lactamases (ESBL).

    Directory of Open Access Journals (Sweden)

    Claudia Stein

    Full Text Available The current global spread of multi-resistant Gram-negatives, particularly extended-spectrum β-lactamase-expressing bacteria, increases the likelihood of inappropriate empiric treatment of critically ill patients, with subsequently increased mortality. From a clinical perspective, fast detection of resistant pathogens would allow a pre-emptive correction of an initially inappropriate treatment. Here we present, as proof of principle, a diagnostic amplification-sequencing approach based on the fast molecular detection and correct discrimination of CTX-M β-lactamases, the most frequent ESBL family. The workflow consists of the isolation of total mRNA, CTX-M-specific reverse transcription (RT), amplification and pyrosequencing. Due to the high variability of the CTX-M β-lactamase genes, degenerate primers for RT, qRT as well as for pyrosequencing were used, and the suitability and discriminatory performance of two conserved positions within the CTX-M genes were analyzed, using one protocol for all isolates and positions, respectively. Using this approach, no prior information regarding the expected CTX-M variant is needed, since all sequences are covered by these degenerate primers. The presented workflow can be conducted within eight hours and has the potential to be expanded to other β-lactamase families.

  20. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction.

    Science.gov (United States)

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems.

  1. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction.

    Directory of Open Access Journals (Sweden)

    Yueying Wu

    Full Text Available High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems.

  2. Image Coding Based on Address Vector Quantization.

    Science.gov (United States)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by a table lookup technique, where the label is simply used as an address into a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means, or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network to design the codebook. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but the bit rate is about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ based on a probability transition matrix to select the best subcodebook to encode the image is developed. In chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing
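
    A minimal sketch of the basic encode/decode cycle described above (random data and a small codebook; plain VQ rather than the address-based extension): build a codebook with k-means, transmit only the indices, and reconstruct by table lookup.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans_codebook(vectors, k=16, iters=20):
    """Generalized Lloyd / k-means codebook design."""
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].copy()
    for _ in range(iters):
        # assign each training vector to its nearest codeword
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update each codeword as the centroid of its cluster
        for j in range(k):
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def vq_encode(vectors, codebook):
    d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)               # only these indices are sent to the channel

def vq_decode(indices, codebook):
    return codebook[indices]              # reconstruction by table lookup

blocks = rng.random((1000, 16))           # hypothetical 4x4 image blocks, flattened
cb = kmeans_codebook(blocks, k=16)
indices = vq_encode(blocks, cb)
print("MSE:", np.mean((blocks - vq_decode(indices, cb)) ** 2))
```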

  3. Sensory evaluation of commercial fat spreads based on oilseeds and walnut

    OpenAIRE

    Dimić, Etelka B.; Vujasinović, Vesna B.; Radočaj, Olga F.; Borić, Bojan D.

    2013-01-01

    The main focus of this study was on the sensory evaluation of commercial oilseeds spreads, as the most significant characteristic of this type of product from the consumers’ point of view. Sensory analysis was conducted by five experts using a quantitative descriptive and sensory profile test, applying a scoring method according to the standard procedure. Five different spreads were evaluated: sunflower, pumpkin, sesame, peanut, and walnut. Oil content and ...

  4. V.S.O.P. (99/05) computer code system

    International Nuclear Information System (INIS)

    Ruetten, H.J.; Haas, K.A.; Brockmann, H.; Scherer, W.

    2005-11-01

    V.S.O.P. is a computer code system for the comprehensive numerical simulation of the physics of thermal reactors. It implies the setup of the reactor and of the fuel element, processing of cross sections, neutron spectrum evaluation, neutron diffusion calculation in two or three dimensions, fuel burnup, fuel shuffling, reactor control, thermal hydraulics and fuel cycle costs. The thermal hydraulics part (steady state and time-dependent) is restricted to HTRs and to two spatial dimensions. The code can simulate the reactor operation from the initial core towards the equilibrium core. V.S.O.P.(99 / 05) represents the further development of V.S.O.P. (99). Compared to its precursor, the code system has been improved in many details. Major improvements and extensions have been included concerning the neutron spectrum calculation, the 3-d neutron diffusion options, and the thermal hydraulic section with respect to 'multi-pass'-fuelled pebblebed cores. This latest code version was developed and tested under the WINDOWS-XP - operating system. The storage requirement for the executables and the basic libraries associated with the code amounts to about 15 MB. Another 5 MB are required - if desired - for storage of the source code (∼65000 Fortran statements). (orig.)

  5. V.S.O.P. (99/05) computer code system

    Energy Technology Data Exchange (ETDEWEB)

    Ruetten, H.J.; Haas, K.A.; Brockmann, H.; Scherer, W.

    2005-11-01

    V.S.O.P. is a computer code system for the comprehensive numerical simulation of the physics of thermal reactors. It implies the setup of the reactor and of the fuel element, processing of cross sections, neutron spectrum evaluation, neutron diffusion calculation in two or three dimensions, fuel burnup, fuel shuffling, reactor control, thermal hydraulics and fuel cycle costs. The thermal hydraulics part (steady state and time-dependent) is restricted to HTRs and to two spatial dimensions. The code can simulate the reactor operation from the initial core towards the equilibrium core. V.S.O.P.(99 / 05) represents the further development of V.S.O.P. (99). Compared to its precursor, the code system has been improved in many details. Major improvements and extensions have been included concerning the neutron spectrum calculation, the 3-d neutron diffusion options, and the thermal hydraulic section with respect to 'multi-pass'-fuelled pebblebed cores. This latest code version was developed and tested under the WINDOWS-XP - operating system. The storage requirement for the executables and the basic libraries associated with the code amounts to about 15 MB. Another 5 MB are required - if desired - for storage of the source code (∼65000 Fortran statements). (orig.)

  6. Computer code FIT

    International Nuclear Information System (INIS)

    Rohmann, D.; Koehler, T.

    1987-02-01

    This is a description of the computer code FIT, written in FORTRAN-77 for a PDP 11/34. FIT is an interactive program to deduce the position, width and intensity of lines in X-ray spectra (maximum length of 4K channels). The lines (at most 30 lines per fit) may have Gauss or Voigt profiles, as well as exponential tails. Spectrum and fit can be displayed on a Tektronix terminal. (orig.) [de
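
    A minimal sketch of the kind of line fitting FIT performs (synthetic single-peak spectrum; a plain Gaussian plus flat background, without the Voigt profile or exponential tails), here using scipy's curve_fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_peak(x, amplitude, centre, sigma, background):
    return amplitude * np.exp(-0.5 * ((x - centre) / sigma) ** 2) + background

# Synthetic spectrum: one line at channel 120 on a flat background, with Poisson noise
rng = np.random.default_rng(0)
channels = np.arange(256)
truth = gauss_peak(channels, amplitude=500.0, centre=120.0, sigma=4.0, background=20.0)
counts = rng.poisson(truth)

p0 = [counts.max(), channels[counts.argmax()], 3.0, counts.min()]   # initial guesses
popt, _ = curve_fit(gauss_peak, channels, counts, p0=p0)
print("position = %.2f, width (sigma) = %.2f, intensity = %.1f" % (popt[1], popt[2], popt[0]))
```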

  7. Transient Safety Analysis of Fast Spectrum TRU Burning LWRs with Internal Blankets

    Energy Technology Data Exchange (ETDEWEB)

    Downar, Thomas [Univ. of Michigan, Ann Arbor, MI (United States); Zazimi, Mujid [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Hill, Bob [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-01-31

    The objective of this proposal was to perform a detailed transient safety analysis of the Resource-Renewable BWR (RBWR) core designs using the U.S. NRC TRACE/PARCS code system. This project involved the same joint team that performed the RBWR design evaluation for EPRI and was therefore able to leverage that previous work. Because of its extensive experience with fast-spectrum reactors and parfait core designs, ANL was also part of the project team. The principal outcome of this project was the development of a state-of-the-art transient analysis capability for GEN-IV reactors based on Monte Carlo generated cross sections and the US NRC coupled code system TRACE/PARCS, and a state-of-the-art coupled-code assessment of the transient safety performance of the RBWR.

  8. The PHREEQE Geochemical equilibrium code data base and calculations

    International Nuclear Information System (INIS)

    Andersoon, K.

    1987-01-01

    Compilation of a thermodynamic data base for actinides and fission products for use with PHREEQE has begun, and a preliminary set of actinide data has been tested for the PHREEQE code in a version run on an IBM XT computer. The work until now has shown that the PHREEQE code mostly gives satisfactory results for the speciation of actinides in natural water environments. For U and Np under oxidizing conditions, however, the code has difficulty converging with pH and Eh conserved when a solubility limit is applied. For further calculations of actinide and fission product speciation and solubility in a waste repository and in the surrounding geosphere, more data are needed. It is necessary to evaluate the influence of the large uncertainties in some data. A quality assurance programme and a check on the consistency of the data base are also needed. Further work with data bases should include: an extension to fission products, an extension to engineering materials, an extension to ligands other than hydroxide and carbonate, inclusion of more mineral phases, inclusion of enthalpy data, a control of primary references in order to decide whether values from different compilations are taken from the same primary reference, and contacts and discussions with other groups working with actinide data bases, e.g. at the OECD/NEA and at the IAEA. (author)

  9. Post-Tanner spreading of nematic droplets

    International Nuclear Information System (INIS)

    Mechkov, S; Oshanin, G; Cazabat, A M

    2009-01-01

    The quasistationary spreading of a circular liquid drop on a solid substrate typically obeys the so-called Tanner law, with the instantaneous base radius R(t) growing with time as R ∼ t^(1/10), an effect of the dominant role of capillary forces for a small-sized droplet. However, for droplets of nematic liquid crystals, a faster spreading law sets in at long times, so that R ∼ t^α with α significantly larger than the Tanner exponent 1/10. In the framework of the thin film model (or lubrication approximation), we describe this 'acceleration' as a transition to a qualitatively different spreading regime driven by a strong substrate-liquid interaction specific to nematics (antagonistic anchoring at the interfaces). The numerical solution of the thin film equation agrees well with the available experimental data for nematics, even though the non-Newtonian rheology has yet to be taken into account. Thus we complement the theory of spreading with a post-Tanner stage, noting that the spreading process can be expected to cross over from the usual capillarity-dominated stage to a regime where the whole reservoir becomes a diffusive film in the sense of Derjaguin.

  10. A Low-Jitter Wireless Transmission Based on Buffer Management in Coding-Aware Routing

    Directory of Open Access Journals (Sweden)

    Cunbo Lu

    2015-08-01

    Full Text Available It is significant to reduce packet jitter for real-time applications in a wireless network. Existing coding-aware routing algorithms use the opportunistic network coding (ONC) scheme in the packet coding algorithm. The ONC scheme never delays packets to wait for the arrival of a future coding opportunity, so the loss of some potential coding opportunities may degrade the contribution of network coding to jitter performance. In addition, most of the existing coding-aware routing algorithms assume that all flows participating in the network have equal rates. This is unrealistic, since multi-rate environments often appear. To overcome the above problems and extend coding-aware routing to multi-rate scenarios, from the view of data transmission, we present a low-jitter wireless transmission algorithm based on buffer management (BLJCAR), which decides whether to code packets at a coding node according to a queue-length-based threshold policy instead of the regular ONC policy used in existing coding-aware routing algorithms. BLJCAR is a unified framework that merges the single-rate case and the multiple-rate case. Simulation results show that the BLJCAR algorithm embedded in coding-aware routing outperforms the traditional ONC policy in terms of jitter, packet delivery delay, packet loss ratio and network throughput under network congestion at any traffic rate.
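
    A minimal sketch of the queue-length-threshold idea (hypothetical threshold and a toy relay model, not the BLJCAR algorithm itself): a packet is held for a potential coding partner only while the local queue is short, otherwise it is forwarded immediately, as plain ONC would do.

```python
from collections import deque

CODING_THRESHOLD = 5            # assumed queue-length threshold (hypothetical)

class CodingNode:
    def __init__(self):
        self.queue = deque()    # packets buffered at the relay

    def handle(self, packet, coding_partner_available):
        """Return the list of transmissions triggered by the arrival of `packet`."""
        if coding_partner_available and self.queue:
            partner = self.queue.popleft()
            return [f"XOR({packet}, {partner})"]      # one coded transmission serves two flows
        if len(self.queue) < CODING_THRESHOLD:
            self.queue.append(packet)                 # short queue: wait for a coding opportunity
            return []
        return [packet]                               # long queue: forward natively to limit jitter

node = CodingNode()
print(node.handle("p1", coding_partner_available=False))   # [] -> p1 is buffered
print(node.handle("p2", coding_partner_available=True))    # ['XOR(p2, p1)']
```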

  11. A ground-based optical transmission spectrum of WASP-6b

    International Nuclear Information System (INIS)

    Jordán, Andrés; Espinoza, Néstor; Rabus, Markus; Eyheramendy, Susana; Sing, David K.; Désert, Jean-Michel; Bakos, Gáspár Á.; Fortney, Jonathan J.; López-Morales, Mercedes; Szentgyorgyi, Andrew; Maxted, Pierre F. L.; Triaud, Amaury H. M. J.

    2013-01-01

    We present a ground-based optical transmission spectrum of the inflated sub-Jupiter-mass planet WASP-6b. The spectrum was measured in 20 spectral channels from 480 nm to 860 nm using a series of 91 spectra over a complete transit event. The observations were carried out using multi-object differential spectrophotometry with the Inamori-Magellan Areal Camera and Spectrograph on the Baade Telescope at Las Campanas Observatory. We model systematic effects on the observed light curves using principal component analysis on the comparison stars and allow for the presence of short and long memory correlation structure in our Monte Carlo Markov Chain analysis of the transit light curves for WASP-6. The measured transmission spectrum presents a general trend of decreasing apparent planetary size with wavelength and lacks evidence for broad spectral features of Na and K predicted by clear atmosphere models. The spectrum is consistent with that expected for scattering that is more efficient in the blue, as could be caused by hazes or condensates in the atmosphere of WASP-6b. WASP-6b therefore appears to be yet another massive exoplanet with evidence for a mostly featureless transmission spectrum, underscoring the importance that hazes and condensates can have in determining the transmission spectra of exoplanets.

  12. Comparison study of time history and response spectrum responses for multiply supported piping systems

    International Nuclear Information System (INIS)

    Wang, Y.K.; Subudhi, M.; Bezler, P.

    1983-01-01

    In the past decade, several investigators have studied the problem of independent support excitation of a multiply supported piping system to identify the real need for such an analysis. This approach offers an increase in accuracy at a small increase in computational cost. To assess the method, studies based on the response spectrum approach using independent support motions for each group of commonly connected supports were performed. The results obtained from this approach were compared with the conventional envelope spectrum and time history solutions. The present study includes a mathematical formulation of the independent support motion analysis method suitable for implementation into an existing all-purpose piping code, PSAFE2, and a comparison of the solutions for some typical piping systems using both the Time History and Response Spectrum Methods. The results obtained from the Response Spectrum Methods represent the upper bound solution at most points in the piping system. Similarly, the Seismic Anchor Movement analysis based on the SRP method overpredicts the responses near the support points and underpredicts at points away from the supports.

  13. Diagnosis-based and external cause-based criteria to identify adverse drug reactions in hospital ICD-coded data: application to an Australian population-based study

    Directory of Open Access Journals (Sweden)

    Wei Du

    2017-04-01

    Full Text Available Objectives: External cause International Classification of Diseases (ICD) codes are commonly used to ascertain adverse drug reactions (ADRs) related to hospitalisation. We quantified ascertainment of ADR-related hospitalisation using external cause codes and additional ICD-based hospital diagnosis codes. Methods: We reviewed the scientific literature to identify different ICD-based criteria for ADR-related hospitalisations, developed algorithms to capture ADRs based on candidate hospital ICD-10 diagnoses and external cause codes (Y40–Y59), and incorporated previously published causality ratings estimating the probability that a specific diagnosis was ADR related. We applied the algorithms to the NSW Admitted Patient Data Collection records of 45 and Up Study participants (2011–2013). Results: Of 493 442 hospitalisations among 267 153 study participants during 2011–2013, 18.8% (n = 92 953) had hospital diagnosis codes that were potentially ADR related; 1.1% (n = 5305) had high/very high–probability ADR-related diagnosis codes (causality ratings: A1 and A2); and 2.0% (n = 10 039) had ADR-related external cause codes. Overall, 2.2% (n = 11 082) of cases were classified as including an ADR-based hospitalisation on either external cause codes or high/very high–probability ADR-related diagnosis codes. Hence, adding high/very high–probability ADR-related hospitalisation codes to standard external cause codes alone (Y40–Y59) increased the number of hospitalisations classified as having an ADR-related diagnosis by 10.4%. Only 6.7% of cases with high-probability ADR-related mental symptoms were captured by external cause codes. Conclusion: Selective use of high-probability ADR-related hospital diagnosis codes in addition to external cause codes yielded a modest increase in hospitalised ADR incidence, which is of potential clinical significance. Clinically validated combinations of diagnosis codes could potentially further enhance capture.

  14. LSB-Based Steganography Using Reflected Gray Code

    Science.gov (United States)

    Chen, Chang-Chu; Chang, Chin-Chen

    Steganography aims to hide secret data in an innocuous cover medium for transmission, so that an attacker cannot easily recognize the presence of the secret data. Even if the stego-medium is captured by an eavesdropper, the slight distortion is hard to detect. LSB-based data hiding is one of the steganographic methods used to embed secret data into the least significant bits of the pixel values of a cover image. In this paper, we propose an LSB-based scheme using reflected Gray code, which can be applied to determine the embedded bit from the secret information. Following the transforming rule, the LSBs of the stego-image are not always equal to the secret bits, and experiments show that the differences are up to almost 50%. According to the mathematical deduction and experimental results, the proposed scheme has the same image quality and payload as the simple LSB substitution scheme. In fact, our proposed data hiding scheme in the case of the G1 (one-bit Gray code) system is equivalent to the simple LSB substitution scheme.
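
    A minimal sketch of plain LSB substitution together with the binary-to-reflected-Gray-code mapping that the scheme builds on (the paper's exact embedding rule is not reproduced here):

```python
def to_gray(n: int) -> int:
    """Binary value -> reflected Gray code."""
    return n ^ (n >> 1)

def embed_lsb(pixel: int, secret_bit: int) -> int:
    """Simple LSB substitution: overwrite the least significant bit of a pixel value."""
    return (pixel & ~1) | (secret_bit & 1)

def extract_lsb(pixel: int) -> int:
    return pixel & 1

# Embed a few secret bits into hypothetical pixel values
pixels = [148, 97, 200, 33]
secret_bits = [1, 0, 1, 1]
stego = [embed_lsb(p, b) for p, b in zip(pixels, secret_bits)]
print(stego, [extract_lsb(p) for p in stego])
print([to_gray(p) & 1 for p in stego])   # LSBs of the Gray-coded pixel values (bit0 XOR bit1)
```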

  15. An Overview of the XGAM Code and Related Software for Gamma-ray Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Younes, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-11-13

    The XGAM spectrum-fitting code and associated software were developed specifically to analyze the complex gamma-ray spectra that can result from neutron-induced reactions. The XGAM code is designed to fit a spectrum over the entire available gamma-ray energy range as a single entity, in contrast to the more traditional piecewise approaches. This global-fit philosophy enforces background continuity as well as consistency between local and global behavior throughout the spectrum, and in a natural way. This report presents XGAM and the suite of programs built around it with an emphasis on how they fit into an overall analysis methodology for complex gamma-ray data. An application to the analysis of time-dependent delayed gamma-ray yields from 235U fission is shown in order to showcase the codes and how they interact.

  16. Fast Blood Vector Velocity Imaging: Simulations and Preliminary In Vivo Results

    DEFF Research Database (Denmark)

    Udesen, Jesper; Gran, Fredrik; Hansen, Kristoffer Lindskov

    2007-01-01

    for each pulse emission. 2) The transmitted pulse consists of a 13 bit Barker code which is transmitted simultaneously from each transducer element. 3) The 2-D vector velocity of the blood is found using 2-D speckle tracking between segments in consecutive speckle images. III Results: The method was tested...
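
    A minimal sketch of the aperiodic autocorrelation of the 13-bit Barker code used as the transmit pulse, showing its 13:1 mainlobe-to-peak-sidelobe ratio (about -22 dB), the property that makes it attractive for coded excitation:

```python
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

acf = np.correlate(barker13, barker13, mode="full")          # matched-filter output for the code itself
mainlobe = acf.max()                                          # 13
peak_sidelobe = np.abs(np.delete(acf, acf.argmax())).max()    # 1

print(acf)
print("peak sidelobe level: %.1f dB" % (20 * np.log10(peak_sidelobe / mainlobe)))
```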

  17. Epidemic spreading in a hierarchical social network.

    Science.gov (United States)

    Grabowski, A; Kosiński, R A

    2004-09-01

    A model of epidemic spreading in a population with a hierarchical structure of interpersonal interactions is described and investigated numerically. The structure of interpersonal connections is based on a scale-free network. Spatial localization of individuals belonging to different social groups, and the mobility of a contemporary community, as well as the effectiveness of different interpersonal interactions, are taken into account. Typical relations characterizing the spreading process, like a range of epidemic and epidemic curves, are discussed. The influence of preventive vaccinations on the spreading process is investigated. The critical value of preventively vaccinated individuals that is sufficient for the suppression of an epidemic is calculated. Our results are compared with solutions of the master equation for the spreading process and good agreement of the character of this process is found.

  18. Sheldon spectrum and the plankton paradox: two sides of the same coin-a trait-based plankton size-spectrum model.

    Science.gov (United States)

    Cuesta, José A; Delius, Gustav W; Law, Richard

    2018-01-01

    The Sheldon spectrum describes a remarkable regularity in aquatic ecosystems: the biomass density as a function of logarithmic body mass is approximately constant over many orders of magnitude. While size-spectrum models have explained this phenomenon for assemblages of multicellular organisms, this paper introduces a species-resolved size-spectrum model to explain the phenomenon in unicellular plankton. A Sheldon spectrum spanning the cell-size range of unicellular plankton necessarily consists of a large number of coexisting species covering a wide range of characteristic sizes. The coexistence of many phytoplankton species feeding on a small number of resources is known as the Paradox of the Plankton. Our model resolves the paradox by showing that coexistence is facilitated by the allometric scaling of four physiological rates. Two of the allometries have empirical support, the remaining two emerge from predator-prey interactions exactly when the abundances follow a Sheldon spectrum. Our plankton model is a scale-invariant trait-based size-spectrum model: it describes the abundance of phyto- and zooplankton cells as a function of both size and species trait (the maximal size before cell division). It incorporates growth due to resource consumption and predation on smaller cells, death due to predation, and a flexible cell division process. We give analytic solutions at steady state for both the within-species size distributions and the relative abundances across species.

  19. COMBINE7.1 - A Portable ENDF/B-VII.0 Based Neutron Spectrum and Cross-Section Generation Program

    Energy Technology Data Exchange (ETDEWEB)

    Woo Y. Yoon; David W. Nigg

    2009-08-01

    COMBINE7.1 is a FORTRAN 90 computer code that generates multigroup neutron constants for use in the deterministic diffusion and transport theory neutronics analysis. The cross-section database used by COMBINE7.1 is derived from the Evaluated Nuclear Data Files (ENDF/B-VII.0). The neutron energy range covered is from 20 MeV to 1.0E-5 eV. The Los Alamos National Laboratory NJOY code is used as the processing code to generate a 167 fine-group cross-section library in MATXS format for Bondarenko self-shielding treatment. Resolved resonance parameters are extracted from ENDF/B-VII.0 File 2 for a separate library to be used in an alternate Nordheim self-shielding treatment in the resolved resonance energy range. The equations solved for energy dependent neutron spectrum in the 167 fine-group structure are the B-3 or B-1 approximations to the transport equation. The fine group cross sections needed for the spectrum calculation are first prepared by Bondarenko self-shielding interpolation in terms of background cross section and temperature. The geometric lump effect, when present, is accounted for by augmenting the background cross section. Nordheim self-shielded fine group cross sections for a material having resolved resonance parameters overwrite correspondingly the existing self-shielded fine group cross sections when this option is used. The fine group cross sections in the thermal energy range are replaced by those self-shielded with the Amouyal/Benoist/Horowitz method in the three region geometry when this option is requested. COMBINE7.1 coalesces fine group cross sections into broad group macroscopic and microscopic constants. The coalescing is performed by utilizing fine-group fluxes and/or currents obtained by spectrum calculation as the weighting functions. The multigroup constant may be output in any of several standard formats including ANISN 14** free format, CCCC ISOTXS format, and AMPX working library format. ANISN-PC, a one-dimensional, discrete

  20. COMBINE7.1 - A Portable ENDF/B-VII.0 Based Neutron Spectrum and Cross-Section Generation Program

    International Nuclear Information System (INIS)

    Yoon, Woo Y.; Nigg, David W.

    2009-01-01

    COMBINE7.1 is a FORTRAN 90 computer code that generates multigroup neutron constants for use in the deterministic diffusion and transport theory neutronics analysis. The cross-section database used by COMBINE7.1 is derived from the Evaluated Nuclear Data Files (ENDF/B-VII.0). The neutron energy range covered is from 20 MeV to 1.0E-5 eV. The Los Alamos National Laboratory NJOY code is used as the processing code to generate a 167 fine-group cross-section library in MATXS format for Bondarenko self-shielding treatment. Resolved resonance parameters are extracted from ENDF/B-VII.0 File 2 for a separate library to be used in an alternate Nordheim self-shielding treatment in the resolved resonance energy range. The equations solved for energy dependent neutron spectrum in the 167 fine-group structure are the B-3 or B-1 approximations to the transport equation. The fine group cross sections needed for the spectrum calculation are first prepared by Bondarenko self-shielding interpolation in terms of background cross section and temperature. The geometric lump effect, when present, is accounted for by augmenting the background cross section. Nordheim self-shielded fine group cross sections for a material having resolved resonance parameters overwrite correspondingly the existing self-shielded fine group cross sections when this option is used. The fine group cross sections in the thermal energy range are replaced by those self-shielded with the Amouyal/Benoist/Horowitz method in the three region geometry when this option is requested. COMBINE7.1 coalesces fine group cross sections into broad group macroscopic and microscopic constants. The coalescing is performed by utilizing fine-group fluxes and/or currents obtained by spectrum calculation as the weighting functions. The multigroup constant may be output in any of several standard formats including ANISN 14** free format, CCCC ISOTXS format, and AMPX working library format. ANISN-PC, a one-dimensional, discrete

  1. Delayed P100-Like Latencies in Multiple Sclerosis: A Preliminary Investigation Using Visual Evoked Spread Spectrum Analysis

    Science.gov (United States)

    Kiiski, Hanni S. M.; Ní Riada, Sinéad; Lalor, Edmund C.; Gonçalves, Nuno R.; Nolan, Hugh; Whelan, Robert; Lonergan, Róisín; Kelly, Siobhán; O'Brien, Marie Claire; Kinsella, Katie; Bramham, Jessica; Burke, Teresa; Ó Donnchadha, Seán; Hutchinson, Michael; Tubridy, Niall; Reilly, Richard B.

    2016-01-01

    Conduction along the optic nerve is often slowed in multiple sclerosis (MS). This is typically assessed by measuring the latency of the P100 component of the Visual Evoked Potential (VEP) using electroencephalography. The Visual Evoked Spread Spectrum Analysis (VESPA) method, which involves modulating the contrast of a continuous visual stimulus over time, can produce a visually evoked response analogous to the P100 but with a higher signal-to-noise ratio and potentially higher sensitivity to individual differences in comparison to the VEP. The main objective of the study was to conduct a preliminary investigation into the utility of the VESPA method for probing and monitoring visual dysfunction in multiple sclerosis. The latencies and amplitudes of the P100-like VESPA component were compared between healthy controls and multiple sclerosis patients, and multiple sclerosis subgroups. The P100-like VESPA component activations were examined at baseline and over a 3-year period. The study included 43 multiple sclerosis patients (23 relapsing-remitting MS, 20 secondary-progressive MS) and 42 healthy controls who completed the VESPA at baseline. The follow-up sessions were conducted 12 months after baseline with 24 MS patients (15 relapsing-remitting MS, 9 secondary-progressive MS) and 23 controls, and again at 24 months post-baseline with 19 MS patients (13 relapsing-remitting MS, 6 secondary-progressive MS) and 14 controls. The results showed P100-like VESPA latencies to be delayed in multiple sclerosis compared to healthy controls over the 24-month period. Secondary-progressive MS patients had most pronounced delay in P100-like VESPA latency relative to relapsing-remitting MS and controls. There were no longitudinal P100-like VESPA response differences. These findings suggest that the VESPA method is a reproducible electrophysiological method that may have potential utility in the assessment of visual dysfunction in multiple sclerosis. PMID:26726800

  2. Delayed P100-Like Latencies in Multiple Sclerosis: A Preliminary Investigation Using Visual Evoked Spread Spectrum Analysis.

    Directory of Open Access Journals (Sweden)

    Hanni S M Kiiski

    Full Text Available Conduction along the optic nerve is often slowed in multiple sclerosis (MS). This is typically assessed by measuring the latency of the P100 component of the Visual Evoked Potential (VEP) using electroencephalography. The Visual Evoked Spread Spectrum Analysis (VESPA) method, which involves modulating the contrast of a continuous visual stimulus over time, can produce a visually evoked response analogous to the P100 but with a higher signal-to-noise ratio and potentially higher sensitivity to individual differences in comparison to the VEP. The main objective of the study was to conduct a preliminary investigation into the utility of the VESPA method for probing and monitoring visual dysfunction in multiple sclerosis. The latencies and amplitudes of the P100-like VESPA component were compared between healthy controls and multiple sclerosis patients, and multiple sclerosis subgroups. The P100-like VESPA component activations were examined at baseline and over a 3-year period. The study included 43 multiple sclerosis patients (23 relapsing-remitting MS, 20 secondary-progressive MS) and 42 healthy controls who completed the VESPA at baseline. The follow-up sessions were conducted 12 months after baseline with 24 MS patients (15 relapsing-remitting MS, 9 secondary-progressive MS) and 23 controls, and again at 24 months post-baseline with 19 MS patients (13 relapsing-remitting MS, 6 secondary-progressive MS) and 14 controls. The results showed P100-like VESPA latencies to be delayed in multiple sclerosis compared to healthy controls over the 24-month period. Secondary-progressive MS patients had most pronounced delay in P100-like VESPA latency relative to relapsing-remitting MS and controls. There were no longitudinal P100-like VESPA response differences. These findings suggest that the VESPA method is a reproducible electrophysiological method that may have potential utility in the assessment of visual dysfunction in multiple sclerosis.

  3. Performance analysis of multiple interference suppression over asynchronous/synchronous optical code-division multiple-access system based on complementary/prime/shifted coding scheme

    Science.gov (United States)

    Nieh, Ta-Chun; Yang, Chao-Chin; Huang, Jen-Fa

    2011-08-01

    A complete complementary/prime/shifted prime (CPS) code family for the optical code-division multiple-access (OCDMA) system is proposed. Based on the properties of the complete complementary (CC) code, the multiple-access interference (MAI) can be suppressed and eliminated in a spectral amplitude coding (SAC) OCDMA system under asynchronous/synchronous transmission. By utilizing the shifted prime (SP) code in the SAC scheme, the hardware implementation of the encoder/decoder can be simplified with a reduced number of optical components, such as arrayed waveguide gratings (AWG) and fiber Bragg gratings (FBG). This system has superior performance compared to previous bipolar-bipolar coding OCDMA systems.

  4. Predicting online ratings based on the opinion spreading process

    Science.gov (United States)

    He, Xing-Sheng; Zhou, Ming-Yang; Zhuo, Zhao; Fu, Zhong-Qian; Liu, Jian-Guo

    2015-10-01

    Predicting users' online ratings is always a challenging issue and has drawn much attention. In this paper, we present a rating prediction method that combines the user opinion spreading process with the collaborative filtering algorithm, where user similarity is defined by measuring the amount of opinion a user transfers to another based on the primitive user-item rating matrix. The proposed method produces a more precise rating prediction for each unrated user-item pair. In addition, we introduce a tunable parameter λ to regulate the preferential diffusion relevant to the degree of both the opinion sender and receiver. The numerical results for the Movielens and Netflix data sets show that this algorithm has better accuracy than the standard user-based collaborative filtering algorithm using Cosine and Pearson correlation, without increasing computational complexity. By tuning λ, our method can further boost the prediction accuracy when using Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) as measurements. In the optimal cases, on the Movielens and Netflix data sets, the corresponding algorithmic accuracy (MAE and RMSE) is improved by 11.26% and 8.84%, and by 13.49% and 10.52%, respectively, compared to the item average method.
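
    A minimal sketch of the user-based prediction step (a tiny hypothetical rating matrix and a generic cosine similarity, not the paper's opinion-spreading similarity): an unknown rating is predicted as a similarity-weighted average of other users' ratings for the same item.

```python
import numpy as np

# Hypothetical user-item rating matrix (0 = unrated)
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 4, 4]], dtype=float)

def cosine(u, v):
    mask = (u > 0) & (v > 0)                  # compare only co-rated items
    if not mask.any():
        return 0.0
    return float(u[mask] @ v[mask] / (np.linalg.norm(u[mask]) * np.linalg.norm(v[mask]) + 1e-12))

def predict(R, user, item, similarity=cosine):
    """Similarity-weighted average of other users' ratings for `item`."""
    num = den = 0.0
    for other in range(R.shape[0]):
        if other != user and R[other, item] > 0:
            w = similarity(R[user], R[other])
            num += w * R[other, item]
            den += abs(w)
    return num / den if den else 0.0

print(predict(R, user=0, item=2))
```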

  5. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme that is chosen through offline statistics is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the lowdelayP condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BDPSNR is observed for 18 sequences when the target complexity is around 40%.

  6. GPU-accelerated 3D neutron diffusion code based on finite difference method

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Q.; Yu, G.; Wang, K. [Dept. of Engineering Physics, Tsinghua Univ. (China)

    2012-07-01

    Finite difference method, as a traditional numerical solution to neutron diffusion equation, although considered simpler and more precise than the coarse mesh nodal methods, has a bottle neck to be widely applied caused by the huge memory and unendurable computation time it requires. In recent years, the concept of General-Purpose computation on GPUs has provided us with a powerful computational engine for scientific research. In this study, a GPU-Accelerated multi-group 3D neutron diffusion code based on finite difference method was developed. First, a clean-sheet neutron diffusion code (3DFD-CPU) was written in C++ on the CPU architecture, and later ported to GPUs under NVIDIA's CUDA platform (3DFD-GPU). The IAEA 3D PWR benchmark problem was calculated in the numerical test, where three different codes, including the original CPU-based sequential code, the HYPRE (High Performance Pre-conditioners)-based diffusion code and CITATION, were used as counterpoints to test the efficiency and accuracy of the GPU-based program. The results demonstrate both high efficiency and adequate accuracy of the GPU implementation for neutron diffusion equation. A speedup factor of about 46 times was obtained, using NVIDIA's Geforce GTX470 GPU card against a 2.50 GHz Intel Quad Q9300 CPU processor. Compared with the HYPRE-based code performing in parallel on an 8-core tower server, the speedup of about 2 still could be observed. More encouragingly, without any mathematical acceleration technology, the GPU implementation ran about 5 times faster than CITATION which was speeded up by using the SOR method and Chebyshev extrapolation technique. (authors)
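
    A minimal sketch (one energy group, 2-D, hypothetical constants, plain NumPy rather than CUDA) of the finite-difference diffusion iteration that such codes accelerate on GPUs, solving -D∇²φ + Σa·φ = S with Jacobi sweeps on a uniform mesh:

```python
import numpy as np

nx = ny = 64
h = 1.0                   # mesh spacing [cm]
D = 1.2                   # diffusion coefficient [cm]
sigma_a = 0.05            # absorption cross section [1/cm]
S = np.zeros((ny, nx))
S[24:40, 24:40] = 1.0     # hypothetical fixed source region

phi = np.zeros((ny, nx))
for _ in range(2000):                       # Jacobi iterations
    nbr = np.zeros_like(phi)
    nbr[1:, :] += phi[:-1, :]               # north neighbour
    nbr[:-1, :] += phi[1:, :]               # south neighbour
    nbr[:, 1:] += phi[:, :-1]               # west neighbour
    nbr[:, :-1] += phi[:, 1:]               # east neighbour
    # 5-point stencil update; missing neighbours outside the grid contribute zero flux
    phi = (S + D / h**2 * nbr) / (sigma_a + 4.0 * D / h**2)

print("peak flux:", phi.max())
```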

  7. GPU-accelerated 3D neutron diffusion code based on finite difference method

    International Nuclear Information System (INIS)

    Xu, Q.; Yu, G.; Wang, K.

    2012-01-01

    Finite difference method, as a traditional numerical solution to neutron diffusion equation, although considered simpler and more precise than the coarse mesh nodal methods, has a bottle neck to be widely applied caused by the huge memory and unendurable computation time it requires. In recent years, the concept of General-Purpose computation on GPUs has provided us with a powerful computational engine for scientific research. In this study, a GPU-Accelerated multi-group 3D neutron diffusion code based on finite difference method was developed. First, a clean-sheet neutron diffusion code (3DFD-CPU) was written in C++ on the CPU architecture, and later ported to GPUs under NVIDIA's CUDA platform (3DFD-GPU). The IAEA 3D PWR benchmark problem was calculated in the numerical test, where three different codes, including the original CPU-based sequential code, the HYPRE (High Performance Pre-conditioners)-based diffusion code and CITATION, were used as counterpoints to test the efficiency and accuracy of the GPU-based program. The results demonstrate both high efficiency and adequate accuracy of the GPU implementation for neutron diffusion equation. A speedup factor of about 46 times was obtained, using NVIDIA's Geforce GTX470 GPU card against a 2.50 GHz Intel Quad Q9300 CPU processor. Compared with the HYPRE-based code performing in parallel on an 8-core tower server, the speedup of about 2 still could be observed. More encouragingly, without any mathematical acceleration technology, the GPU implementation ran about 5 times faster than CITATION which was speeded up by using the SOR method and Chebyshev extrapolation technique. (authors)

  8. Oil Price Forecasting Using Crack Spread Futures and Oil Exchange Traded Funds

    Directory of Open Access Journals (Sweden)

    Hankyeung Choi

    2015-04-01

    Full Text Available Given the emerging consensus from previous studies that crude oil and refined product (as well as crack spread) prices are cointegrated, this study examines the link between the crude oil spot and crack spread derivatives markets. Specifically, the usefulness of the two crack spread derivatives products (namely, crack spread futures and the ETF crack spread) for modeling and forecasting daily OPEC crude oil spot prices is evaluated. Based on the results of a structural break test, the sample is divided into pre-crisis, crisis, and post-crisis periods. We find a unidirectional relationship from the two crack spread derivatives markets to the crude oil spot market during the post-crisis period. In terms of forecasting performance, the forecasting models based on crack spread futures and the ETF crack spread outperform the Random Walk Model (RWM), both in-sample and out-of-sample. In addition, on average, the results suggest that information from the ETF crack spread market contributes more to the forecasting models than information from the crack spread futures market.
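
    The sketch below illustrates the kind of out-of-sample comparison the abstract describes: a one-step-ahead forecast from a simple regression that uses a crack-spread series is compared against a random-walk benchmark. It uses synthetic data and an ordinary least-squares model for illustration only, not the paper's dataset or its econometric specification.

```python
# Illustrative out-of-sample forecast comparison on synthetic data (assumed),
# not the paper's models: random walk vs. regression on price and crack spread.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
spread = np.cumsum(rng.normal(0, 0.2, T))            # stand-in crack-spread series
oil = 80 + np.cumsum(rng.normal(0, 0.5, T)) + 0.3 * spread

split = 800
rw_err, reg_err = [], []
for t in range(split, T - 1):
    # Random walk: predict no change from today's price.
    rw_err.append(oil[t + 1] - oil[t])
    # OLS of next-day price on today's price and spread, expanding window.
    X = np.column_stack([np.ones(t), oil[:t], spread[:t]])
    y = oil[1:t + 1]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    pred = beta[0] + beta[1] * oil[t] + beta[2] * spread[t]
    reg_err.append(oil[t + 1] - pred)

print("RMSE random walk :", np.sqrt(np.mean(np.square(rw_err))))
print("RMSE spread model:", np.sqrt(np.mean(np.square(reg_err))))
```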

  9. Beam-width spreading of vortex beams in free space

    Science.gov (United States)

    Wang, Weiwei; Li, Jinhong; Duan, Meiling

    2018-01-01

    Based on the extended Huygens-Fresnel principle and the definition of the second-order moments of the Wigner distribution function, the analytical expression for the beam-width spreading of Gaussian Schell-model (GSM) vortex beams in free space is derived and used to study the influence of beam parameters on the beam-width spreading of GSM vortex beams. The beam-width spreading of GSM vortex beams increases with propagation distance; the larger the topological charge, spatial correlation length, wavelength and waist width are, the smaller the beam-width spreading is.
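
    As a purely numerical illustration of second-moment beam spreading: for a beam with its waist at z = 0, the second-moment width grows as w(z)^2 = w0^2 + (theta*z)^2, where the far-field divergence theta is what the paper's analytical GSM-vortex expression determines from topological charge, coherence length, wavelength and waist. The theta values in the sketch below are made-up placeholders, not results from the paper.

```python
# Generic second-moment spreading law w(z)^2 = w0^2 + theta^2 z^2 (waist at z=0).
# The divergence values below are assumed placeholders for illustration only.
import numpy as np

w0 = 1e-3                                              # waist width [m] (assumed)
thetas = {"low divergence": 0.5e-3, "high divergence": 2e-3}   # [rad] (assumed)
z = np.linspace(0, 100, 5)                             # propagation distance [m]

for label, theta in thetas.items():
    w = np.sqrt(w0**2 + (theta * z)**2)
    print(label, np.round(w * 1e3, 3), "mm")
```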

  10. Mechanical conditions and modes of paraglacial deep-seated gravitational spreading in Valles Marineris, Mars

    Science.gov (United States)

    Makowska, Magdalena; Mège, Daniel; Gueydan, Frédéric; Chéry, Jean

    2016-09-01

    Deep-seated gravitational spreading (DSGS) affects the slopes of formerly glaciated mountain ridges. On Mars, DSGS has played a key role in shaping the landforms of the giant Valles Marineris troughs. Though less spectacular, DSGS is common in terrestrial orogens, where understanding its mechanics is critical in the light of ongoing climate change because it is a potential source of catastrophic landslides in deglaciated valleys. We conducted parametric numerical studies in order to identify the important factors responsible for DSGS initiation. DSGS models are computed using an elastoviscoplastic finite element code, ADELI, with which we reproduce topographic ridge spreading under the effect of valley unloading. Two types of spreading topographic ridges are investigated: homogeneous, or with horizontal rheological layering. We find that gravitational instabilities are enhanced by high slopes, which increase gravitational stress, and by low friction and cohesion, which decrease yield stress. In the unlayered ridge, instability is triggered by glacial unloading, with plastic strain concentrating inside the ridge and at the base of the high slopes. Vertical fractures develop in the upper part of the slope, potentially leading to fault scarps. Ridge homogeneity promotes a deformation mode controlled by uphill-facing normal faulting and basal bulging. In the second case, the ridge encompasses horizontal geological discontinuities that induce rock mass anisotropy. A discontinuity located at the base of the slope accumulates plastic strain, leading to the formation of a sliding plane evolving into a landslide. The presence of a weak layer at the ridge base therefore promotes another slope deformation mode, ending in catastrophic failure. Mechanical conditions and slope height being equal, these conclusions can probably be extrapolated to Earth. Compared with Mars, DSGS on Earth is inhibited because terrestrial topographic gradients are lower than in Valles Marineris, an
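
    The role the abstract assigns to slope angle, friction and cohesion can be illustrated with a textbook infinite-slope factor-of-safety estimate, shown below. This is only a back-of-envelope stability check under assumed densities, depths and strengths, not the authors' ADELI finite-element model.

```python
# Dry infinite-slope factor of safety: FS = (c + sigma_n tan(phi)) / tau_driving.
# Steeper slopes raise the driving stress; lower friction/cohesion lower resistance.
# Density, depth and strength values are assumed; g = 3.71 m/s^2 for Mars.
import numpy as np

def factor_of_safety(beta_deg, phi_deg, cohesion, depth, rho=2700.0, g=3.71):
    """Resisting over driving shear stress on a plane parallel to the slope."""
    beta, phi = np.radians(beta_deg), np.radians(phi_deg)
    sigma_n = rho * g * depth * np.cos(beta) ** 2            # normal stress [Pa]
    tau_drive = rho * g * depth * np.sin(beta) * np.cos(beta)
    return (cohesion + sigma_n * np.tan(phi)) / tau_drive

for beta in (15, 25, 35):
    for phi, c in ((35, 1e6), (20, 1e5)):
        fs = factor_of_safety(beta, phi, c, depth=500.0)
        print(f"slope {beta} deg, phi {phi} deg, c {c:.0e} Pa -> FS = {fs:.2f}")
```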

  11. Variable weight Khazani-Syed code using hybrid fixed-dynamic technique for optical code division multiple access system

    Science.gov (United States)

    Anas, Siti Barirah Ahmad; Seyedzadeh, Saleh; Mokhtar, Makhfudzah; Sahbudin, Ratna Kalos Zakiah

    2016-10-01

    Future Internet consists of a wide spectrum of applications with different bit rates and quality of service (QoS) requirements. Prioritizing the services is essential to ensure that the delivery of information is at its best. Existing technologies have demonstrated how service differentiation techniques can be implemented in optical networks using data link and network layer operations. However, a physical layer approach can further improve system performance at a prescribed received signal quality by applying control at the bit level. This paper proposes a coding algorithm to support optical domain service differentiation using spectral amplitude coding techniques within an optical code division multiple access (OCDMA) scenario. A particular user or service has a varying weight applied to obtain the desired signal quality. The properties of the new code are compared with other OCDMA codes proposed for service differentiation. In addition, a mathematical model is developed for performance evaluation of the proposed code using two different detection techniques, namely direct decoding and complementary subtraction.
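
    A toy sketch of the spectral-amplitude-coding idea with per-user code weight is given below. The two non-overlapping codes are hypothetical and are not the Khazani-Syed construction, and no noise or multiple-access interference model is included; the point is simply that a higher-weight code collects more optical power under direct decoding, which is the lever used for service differentiation.

```python
# Toy SAC-OCDMA example with variable code weight (hypothetical codes, no noise).
import numpy as np

n_chips = 12
codes = {
    "user_A (weight 4)": np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]),
    "user_B (weight 2)": np.array([0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0]),
}
data_bits = {"user_A (weight 4)": 1, "user_B (weight 2)": 1}   # both transmit '1'

# The received spectrum is the superposition of the active users' spectral codes.
received = np.zeros(n_chips)
for user, code in codes.items():
    received += data_bits[user] * code

# Direct decoding: correlate the received spectrum with each user's own code;
# the higher-weight user recovers a larger decoded level (better signal quality).
for user, code in codes.items():
    print(user, "-> decoded level:", int(received @ code))
```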

  12. Implementation of the SAMPO computer code in the Cyber 170-750

    International Nuclear Information System (INIS)

    Chagas, E.F.; Liguori Neto, R.; Gomes, P.R.S.

    1985-01-01

    The SAMPO code, in the version available here, incorporates algorithms that determine energy, efficiency and peak shape. The code also includes processing subroutines that provide automatic peak searches, yielding all peak characteristics. The handling of the code has been improved and its capacity to analyse each region of the spectrum has been extended. Practical information regarding the use of the code is included. Tests performed confirm the good performance of the SAMPO code on the IEAv Cyber system. (Author) [pt
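
    The sketch below shows a greatly simplified version of the peak-analysis step: fitting a single Gaussian plus a linear background to one region of a synthetic spectrum. SAMPO's actual peak-shape model (Gaussian with exponential tails and shape calibration) is more elaborate; the spectrum here is invented.

```python
# Simplified peak fit on synthetic data (not SAMPO's full shape model).
import numpy as np
from scipy.optimize import curve_fit

def peak(x, amp, centroid, sigma, b0, b1):
    """Gaussian peak plus linear background."""
    return amp * np.exp(-0.5 * ((x - centroid) / sigma) ** 2) + b0 + b1 * x

channels = np.arange(480, 560)
rng = np.random.default_rng(1)
counts = peak(channels, 800.0, 520.0, 3.0, 50.0, -0.02) + rng.normal(0, 8, channels.size)

p0 = [counts.max(), channels[np.argmax(counts)], 2.0, counts.min(), 0.0]
popt, _ = curve_fit(peak, channels, counts, p0=p0)
print("centroid:", round(popt[1], 2), "FWHM:", round(2.355 * popt[2], 2))
```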

  13. Verification of structural analysis computer codes in nuclear engineering

    International Nuclear Information System (INIS)

    Zebeljan, Dj.; Cizelj, L.

    1990-01-01

    Sources of potential errors that can arise during the use of finite-element-based computer programs are described in the paper. The magnitude of these errors was defined as the acceptance criterion for the programs. The error sources are described as they are treated by the National Agency for Finite Element Methods and Standards (NAFEMS). Specific verification examples are taken from the literature of the Nuclear Regulatory Commission (NRC). An example verification is performed on the PAFEC-FE computer code for seismic response analysis of piping systems by the response spectrum method. (author)
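
    For readers unfamiliar with the response spectrum method mentioned above, the sketch below shows its core arithmetic: each mode's peak response is read from a design spectrum at that mode's period and the modal maxima are combined by the SRSS rule. The spectrum and modal data are assumed for illustration; this is not a PAFEC-FE verification case.

```python
# SRSS combination of modal peak responses from a toy design spectrum (assumed data).
import numpy as np

# (natural period [s], modal participation factor x mode shape at point of interest)
modes = [(0.30, 1.8), (0.12, 0.9), (0.05, 0.3)]

def spectral_accel(period):
    """Toy design spectrum [g]; a real analysis would use the site-specific spectrum."""
    return 2.5 if period < 0.4 else 1.0 / period

modal_peaks = [gamma * spectral_accel(T) for T, gamma in modes]
srss = np.sqrt(sum(r ** 2 for r in modal_peaks))
print("modal peaks [g]:", np.round(modal_peaks, 2), "| SRSS combined:", round(srss, 2))
```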

  14. ESELEM 4: a code for calculating fine neutron spectrum and multi-group cross sections in plate lattice

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Katsuragi, Satoru; Narita, Hideo.

    1976-07-01

    The multi-group treatment has been used in design studies of fast reactors and in the analysis of experiments at fast critical assemblies. The accuracy of the multi-group cross sections therefore strongly affects the results of these analyses. The ESELEM 4 code has been developed to produce multi-group cross sections with an advanced method from the nuclear data libraries used in the JAERI Fast set. ESELEM 4 solves the integral transport equation by the collision probability method in plate lattice geometry to obtain the fine neutron spectrum. A typical fine-group mesh width is 0.008 in lethargy units. The multi-group cross sections are calculated by weighting the point data with the fine-structure neutron flux. Several devices are applied to reduce the computation time and computer core storage required for the calculation. The slowing-down sources are calculated with a recurrence formula derived for elastic and inelastic scattering. A broad-group treatment is adopted above 2 MeV for dealing with both light and heavy elements. The resonance cross sections of heavy elements are also represented in a broad-group structure, for which the values of the JAERI Fast set are used. The library data are prepared by the PRESM code from ENDF/A-type nuclear data files. The cross section data can be stored compactly in the fast computer core memory, saving core storage and data processing time. The program uses variable dimensions to increase its flexibility. The users' guide for ESELEM 4 and PRESM is also presented in this report. (auth.)
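
    The flux-weighting step mentioned above is the standard group-condensation formula: a broad-group cross section is the fine-group cross sections averaged with the fine-structure flux as weight. The sketch below illustrates it with invented fine-group numbers; it is not ESELEM 4 itself.

```python
# Flux-weighted condensation of fine-group cross sections into broad groups
# (illustrative numbers, not ESELEM 4 data).
import numpy as np

sigma_fine = np.array([1.2, 1.5, 2.0, 4.0, 8.0, 20.0])   # fine-group sigma [barn]
flux_fine  = np.array([0.9, 1.0, 0.8, 0.4, 0.2, 0.05])   # fine-structure flux weights
broad_groups = [(0, 3), (3, 6)]                           # fine-group index ranges

for lo, hi in broad_groups:
    sigma_broad = np.sum(sigma_fine[lo:hi] * flux_fine[lo:hi]) / np.sum(flux_fine[lo:hi])
    print(f"broad group {lo}-{hi - 1}: sigma = {sigma_broad:.3f} barn")
```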

  15. Optimizing hybrid spreading in metapopulations.

    Science.gov (United States)

    Zhang, Changwang; Zhou, Shi; Miller, Joel C; Cox, Ingemar J; Chain, Benjamin M

    2015-04-29

    Epidemic spreading phenomena are ubiquitous in nature and society. Examples include the spreading of diseases, information, and computer viruses. Epidemics can spread by local spreading, where infected nodes can only infect a limited set of direct target nodes, and by global spreading, where an infected node can infect every other node. In reality, many epidemics spread using a hybrid mixture of both types of spreading. In this study we develop a theoretical framework for studying hybrid epidemics, and examine the optimum balance between spreading mechanisms in terms of achieving the maximum outbreak size. We show the existence of critically hybrid epidemics where neither spreading mechanism alone can cause a noticeable spread but a combination of the two spreading mechanisms would produce an enormous outbreak. Our results provide new strategies for maximising beneficial epidemics and estimating the worst outcome of damaging hybrid epidemics.
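
    A toy simulation of the local/global mix is sketched below: infected nodes try to infect their two ring neighbours (local) and one uniformly random node (global), with a parameter alpha tuning the balance. This is only an illustration of the hybrid idea on assumed parameters, not the authors' metapopulation framework or its analysis.

```python
# Toy hybrid (local + global) SIR-style spreading on a ring of n nodes.
# alpha = 1 is purely local, alpha = 0 purely global; parameters are assumed.
import numpy as np

def outbreak_fraction(n=2000, p_local=0.9, p_global=0.9, alpha=0.5, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    state = np.zeros(n, dtype=int)          # 0 = susceptible, 1 = infected, 2 = recovered
    state[0] = 1
    for _ in range(steps):
        infected = np.flatnonzero(state == 1)
        if infected.size == 0:
            break
        for i in infected:
            # local spreading: try to infect the two ring neighbours
            for j in ((i - 1) % n, (i + 1) % n):
                if state[j] == 0 and rng.random() < alpha * p_local:
                    state[j] = 1
            # global spreading: try to infect one uniformly chosen node
            j = rng.integers(n)
            if state[j] == 0 and rng.random() < (1 - alpha) * p_global:
                state[j] = 1
            state[i] = 2                    # infected nodes recover after one step
    return np.count_nonzero(state == 2) / n

for alpha in (0.0, 0.5, 1.0):
    print(f"alpha = {alpha}: final outbreak fraction = {outbreak_fraction(alpha=alpha):.3f}")
```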

  16. Neutron fluence rate and energy spectrum in SPRR-300 reactor thermal column

    International Nuclear Information System (INIS)

    Dou Haifeng; Dai Junlong

    2006-01-01

    To validate the simple one-dimensional model, the neutron fluence rate distribution calculated with the ANISN code was checked against that calculated with the MCNP code. To correct the error caused by ignoring transverse neutron leakage, the reflector that cannot be modelled in a simple one-dimensional model was treated by extending the transverse dimensions. On this basis, the neutron fluence rate distribution and the energy spectrum in the thermal column of the SPRR-300 reactor were calculated with the one-dimensional code ANISN, and the calculated Cd ratios agree well with the experimental results. The deviation between them is less than 5%, and does not exceed 10% even at one or two special positions. This indicates that the neutron fluence rate distribution and energy spectrum in the thermal column can be well calculated with the one-dimensional code ANISN. (authors)
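
    The cadmium ratio used as the comparison quantity above is, loosely, the ratio of bare-foil activation (thermal plus epithermal) to Cd-covered activation (epithermal only). The sketch below shows this back-of-envelope estimate with textbook gold-foil constants and assumed fluxes; the values are illustrative and are not the SPRR-300 results.

```python
# Back-of-envelope Cd-ratio estimate for a gold foil (assumed fluxes, textbook constants).
phi_thermal    = 1.0e9    # thermal flux [n/cm^2/s] (assumed)
phi_epithermal = 2.0e7    # epithermal flux per unit lethargy [n/cm^2/s] (assumed)
sigma_0 = 98.65e-24       # 2200 m/s capture cross section of Au-197 [cm^2]
I_res   = 1550e-24        # resonance integral of Au-197 [cm^2]

act_bare = sigma_0 * phi_thermal + I_res * phi_epithermal   # bare foil reaction rate
act_cd   = I_res * phi_epithermal                            # Cd-covered reaction rate
print("Cd ratio ~", round(act_bare / act_cd, 2))
```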

  17. Finger Vein Recognition Based on Local Directional Code

    Science.gov (United States)

    Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang

    2012-01-01

    Finger vein patterns are considered one of the most promising biometric authentication methods because of their security and convenience. Most currently available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade recognition accuracy, binary-pattern-based methods have been proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called the Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP. PMID:23202194

  18. Finger Vein Recognition Based on Local Directional Code

    Directory of Open Access Journals (Sweden)

    Rongyang Xiao

    2012-11-01

    Full Text Available Finger vein patterns are considered one of the most promising biometric authentication methods because of their security and convenience. Most currently available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade recognition accuracy, binary-pattern-based methods have been proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called the Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP.
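
    The sketch below illustrates the general idea of coding local gradient orientation into eight ("octonary") levels and summarising a region by a histogram of those codes. It is a plain reading of the abstracts above, not the exact LDC definition from the paper, and it runs on a random stand-in image.

```python
# Illustrative 8-level gradient-orientation coding (not the paper's exact LDC).
import numpy as np

rng = np.random.default_rng(2)
image = rng.random((64, 64))                 # stand-in for a finger-vein image

gy, gx = np.gradient(image.astype(float))    # gradients along rows and columns
angle = np.arctan2(gy, gx)                   # orientation in (-pi, pi]
codes = ((angle + np.pi) / (2 * np.pi) * 8).astype(int) % 8   # direction codes 0..7

# A simple region descriptor: histogram of the direction codes.
hist = np.bincount(codes.ravel(), minlength=8)
print("direction-code histogram:", hist)
```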

  19. Scintillator Based Coded-Aperture Imaging for Neutron Detection

    International Nuclear Information System (INIS)

    Hayes, Sean-C.; Gamage, Kelum-A-A.

    2013-06-01

    In this paper we assess variations in neutron images using a series of Monte Carlo simulations. We study neutron images of the same neutron source at different source locations, using a scintillator-based coded-aperture system. The Monte Carlo simulations make use of the EJ-426 neutron scintillator detector. This type of detector has a low sensitivity to gamma rays and is therefore of particular use in a system whose source emits a mixed radiation field. Using the different source locations, several neutron images have been produced and compared both qualitatively and quantitatively for each case. This allows conclusions to be drawn on how well suited the scintillator-based coded-aperture neutron imaging system is to detecting various neutron source locations. This type of neutron imaging system can be easily used to identify and locate nuclear materials precisely. (authors)
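
    For background on the coded-aperture principle itself, the sketch below encodes a point source through a random binary mask (circular convolution) and recovers the source position by correlating the detector pattern with a balanced decoding array. It is a generic toy model, not the paper's EJ-426 Monte Carlo simulation.

```python
# Toy coded-aperture encode/decode with a random binary mask (assumed setup).
import numpy as np

rng = np.random.default_rng(3)
N = 16
mask = (rng.random((N, N)) < 0.5).astype(float)        # random binary aperture
decode = 2 * mask - 1                                  # balanced decoding array

source = np.zeros((N, N))
source[4, 11] = 1.0                                    # point-like neutron source

# Encode: circular convolution of source with mask; decode: circular correlation.
detector = np.fft.ifft2(np.fft.fft2(source) * np.fft.fft2(mask)).real
recon = np.fft.ifft2(np.fft.fft2(detector) * np.conj(np.fft.fft2(decode))).real

print("true source pixel :", (4, 11))
print("reconstructed peak:", np.unravel_index(recon.argmax(), recon.shape))
```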

  20. Stability and dynamic rheological characterization of spread developed based on pistachio oil.

    Science.gov (United States)

    Mousazadeh, Morad; Mousavi, Seyed Mohammad; Emam-Djomeh, Zahra; HadiNezhad, Mehri; Rahmati, Naghmeh

    2013-05-01

    This study investigated the influence of formulation variables (pistachio oil (PO, 7.5 and 15%, w/w), cocoa butter (CB, 7.5 and 15%, w/w), xanthan gum (XG, 0 and 0.3%, w/w), and distilled monoglyceride (DMG, 0.5 and 1%, w/w)) on the rheological properties and emulsion stability of spreads. The power law and Herschel-Bulkley models were used to model the shear-thinning behavior of the samples. The power law model was found to describe the flow behavior of the spreads better than the Herschel-Bulkley model. All the rheological properties were increased by adding XG to the spreads, whereas increasing the PO content decreased them. DMG had a positive effect on apparent viscosity and elastic behavior but a negative effect on viscous behavior. Apparent viscosity was increased by adding CB, while the rheological moduli were not significantly (p ...) affected. DMG improved the stability of the emulsion. The best spread formulation with optimum rheological properties was 15% PO, 7.5% CB, 0.3% XG and 1% DMG. Copyright © 2013 Elsevier B.V. All rights reserved.
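
    The two flow models named above are tau = K*gamma_dot^n (power law) and tau = tau0 + K*gamma_dot^n (Herschel-Bulkley). The sketch below fits both to an invented flow curve to show how such a model comparison is done; the data are not the paper's spread measurements.

```python
# Fit power-law and Herschel-Bulkley models to a synthetic flow curve (assumed data).
import numpy as np
from scipy.optimize import curve_fit

shear_rate = np.linspace(1, 100, 25)                     # [1/s]
rng = np.random.default_rng(4)
stress = 5.0 + 8.0 * shear_rate**0.4 + rng.normal(0, 1.0, shear_rate.size)   # [Pa]

def power_law(g, K, n):
    return K * g**n

def herschel_bulkley(g, tau0, K, n):
    return tau0 + K * g**n

p_pl, _ = curve_fit(power_law, shear_rate, stress, p0=[1.0, 0.5])
p_hb, _ = curve_fit(herschel_bulkley, shear_rate, stress, p0=[1.0, 1.0, 0.5])

for name, model, p in [("power law", power_law, p_pl),
                       ("Herschel-Bulkley", herschel_bulkley, p_hb)]:
    resid = stress - model(shear_rate, *p)
    print(name, "params:", np.round(p, 3), "RMSE:", round(float(np.sqrt(np.mean(resid**2))), 3))
```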