WorldWideScience

Sample records for source code distribution

  1. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
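
    A minimal Python sketch of the syndrome-plus-doping encoding step described in this abstract; the parity-check matrix, block sizes and doping pattern are illustrative assumptions, not the paper's configuration:

        import numpy as np

        rng = np.random.default_rng(0)

        n, m = 16, 8                        # source block length, syndrome length
        H = rng.integers(0, 2, (m, n))      # toy binary parity-check matrix
        x = rng.integers(0, 2, n)           # binary source block

        syndrome = (H @ x) % 2              # syndrome bits sent to the decoder
        doping_idx = np.arange(0, n, 4)     # every 4th symbol sent uncoded
        doping_bits = x[doping_idx]         # "doping" bits help the decoder converge

        # A sum-product decoder would combine (syndrome, doping_bits) with the
        # side information; here we only verify syndrome consistency.
        assert np.array_equal((H @ x) % 2, syndrome)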

  2. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side, offering shifting processing steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  3. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...
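
    The feedback-driven rate adaptation can be pictured with the following Python sketch; try_decode stands in for a real BCH syndrome decoder and is an assumption of this illustration, not code from the paper:

        def rate_adaptive_decode(parity_chunks, side_info, try_decode):
            """Reveal parity increments over the feedback channel until the
            decoder succeeds; the rate used is the number of chunks sent."""
            received = []
            for chunk in parity_chunks:         # each request = one feedback round
                received.append(chunk)
                ok, estimate = try_decode(received, side_info)
                if ok:
                    return estimate, len(received)
            return None, len(received)          # rate exhausted: request the source uncoded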

  4. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  5. Image authentication using distributed source coding.

    Science.gov (United States)

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.
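
    The generation of the authentication data can be sketched as below; the projection count, quantizer step and seed are illustrative assumptions, and the Slepian-Wolf encoding of the quantized projection is not shown:

        import numpy as np

        def projection_hash(image, n_proj=64, step=4.0, seed=42):
            """Quantized pseudo-random projection of an image (toy version)."""
            rng = np.random.default_rng(seed)           # seed shared with the verifier
            flat = image.astype(float).ravel() / 255.0
            P = rng.standard_normal((n_proj, flat.size))
            return np.round(P @ flat / step).astype(int)

        img = np.zeros((16, 16)); img[4:12, 4:12] = 200.0
        print(projection_hash(img)[:8])                 # authentication data (first bytes)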

  6. Distributed coding of multiview sparse sources with joint recovery

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren

    2016-01-01

    In support of applications involving multiview sources in distributed object recognition using lightweight cameras, we propose a new method for the distributed coding of sparse sources as visual descriptor histograms extracted from multiview images. The problem is challenging due to the computati...... transform (SIFT) descriptors extracted from multiview images shows that our method leads to bit-rate saving of up to 43% compared to the state-of-the-art distributed compressed sensing method with independent encoding of the sources....

  7. Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence...

  8. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing the traditional microphone arrays. This allows for flexibility of placement and high spatial diversity. However, it comes with the price of several challenges, including the limited power and bandwidth resources for wireless transmission of audio recordings. In such a setup, we study the problem of source coding for the compression of the audio recordings before the transmission in order to reduce the power consumption and/or transmission bandwidth by reduction in the transmission rates. Source coding for wireless microphones in reverberant environments has several special characteristics which make it more challenging in comparison with regular audio coding. The signals which are acquired by the microphones...

  9. Coded aperture imaging of alpha source spatial distribution

    International Nuclear Information System (INIS)

    Talebitaher, Alireza; Shutler, Paul M.E.; Springham, Stuart V.; Rawat, Rajdeep S.; Lee, Paul

    2012-01-01

    The Coded Aperture Imaging (CAI) technique has been applied with CR-39 nuclear track detectors to image alpha particle source spatial distributions. The experimental setup comprised: a 226Ra source of alpha particles, a laser-machined CAI mask, and CR-39 detectors, arranged inside a vacuum enclosure. Three different alpha particle source shapes were synthesized by using a linear translator to move the 226Ra source within the vacuum enclosure. The coded mask pattern used is based on a Singer Cyclic Difference Set, with 400 pixels and 57 open square holes (representing ρ = 1/7 = 14.3% open fraction). After etching of the CR-39 detectors, the area, circularity, mean optical density and positions of all candidate tracks were measured by an automated scanning system. Appropriate criteria were used to select alpha particle tracks, and a decoding algorithm applied to the (x, y) data produced the decoded image of the source. Signal to Noise Ratio (SNR) values obtained for alpha particle CAI images were found to be substantially better than those for corresponding pinhole images, although the CAI-SNR values were below the predictions of theoretical formulae. Monte Carlo simulations of CAI and pinhole imaging were performed in order to validate the theoretical SNR formulae and also our CAI decoding algorithm. There was found to be good agreement between the theoretical formulae and SNR values obtained from simulations. Possible reasons for the lower SNR obtained for the experimental CAI study are discussed.
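
    The decoding step can be illustrated in Python with a toy random mask standing in for the Singer difference-set mask; the sizes and the noise-free recording model are assumptions of this sketch:

        import numpy as np

        rng = np.random.default_rng(1)
        mask = (rng.random((20, 20)) < 1/7).astype(float)   # ~14.3% open fraction

        source = np.zeros((20, 20)); source[9:11, 9:11] = 1.0
        # Recorded track density: circular convolution of source with mask (noise-free)
        record = np.real(np.fft.ifft2(np.fft.fft2(source) * np.fft.fft2(mask)))

        # Balanced correlation decoding: correlate the record with (mask - open fraction)
        dec = mask - mask.mean()
        image = np.real(np.fft.ifft2(np.fft.fft2(record) * np.conj(np.fft.fft2(dec))))
        print(np.unravel_index(image.argmax(), image.shape))  # peak inside the source region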

  10. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.
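
    The multilevel coset idea can be made concrete with scalar cosets of the integers, a simplified stand-in for the paper's coset codes; the bit depth and data below are illustrative:

        import numpy as np

        def coset_encode(band, k=3):
            return band % (1 << k)              # transmit only the coset index

        def coset_decode(coset, side_info, k=3):
            m = 1 << k
            down = side_info - (side_info - coset) % m    # coset member at or below side info
            up = down + m                                 # next coset member above
            return np.where(np.abs(up - side_info) < np.abs(down - side_info), up, down)

        band = np.array([120, 121, 130, 90])    # current band (to compress)
        side = np.array([118, 123, 128, 93])    # co-located pixels of a correlated band
        print(coset_decode(coset_encode(band), side))     # exact when |band - side| < 2**k / 2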

  11. Distributed Remote Vector Gaussian Source Coding for Wireless Acoustic Sensor Networks

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider the problem of remote vector Gaussian source coding for a wireless acoustic sensor network. Each node receives messages from multiple nodes in the network and decodes these messages using its own measurement of the sound field as side information. The node’s measurement and the estimates of the source resulting from decoding the received messages are then jointly encoded and transmitted to a neighboring node in the network. We show that for this distributed source coding scenario, one can encode a so-called conditional sufficient statistic of the sources instead of jointly...

  12. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges between 20% and 70%.

  13. Time-dependent anisotropic distributed source capability in transient 3-d transport code tort-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    The transient 3-D discrete ordinates transport code TORT-TD has been extended to account for time-dependent anisotropic distributed external sources. The extension aims at the simulation of the pulsed neutron source in the YALINA-Thermal subcritical assembly. Since feedback effects are not relevant in this zero-power configuration, this offers a unique opportunity to validate the time-dependent neutron kinetics of TORT-TD with experimental data. The extensions made in TORT-TD to incorporate a time-dependent anisotropic external source are described. The steady state of the YALINA-Thermal assembly and its response to an artificial square-wave source pulse sequence have been analysed with TORT-TD using pin-wise homogenised cross sections in 18 prompt energy groups with P1 scattering order and 8 delayed neutron groups. The results demonstrate the applicability of TORT-TD to subcritical problems with a time-dependent external source. (authors)

  14. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    Science.gov (United States)

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose the Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during the event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications with a massive number of sensors, towards the realization of the Internet of Sensing Things (IoST).
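
    The sink-side rate update can be sketched as follows: estimate the crossover probability between a member's bits and the clusterhead's side information, and feed back a rate near the Slepian-Wolf bound H(X|Y). The bit-level symmetric-channel model is an assumption of this sketch, not the paper's exact estimator:

        import math

        def binary_entropy(p):
            return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

        def update_rate(member_bits, head_bits):
            """Return a compression rate (bits/sample) from observed correlation."""
            p = sum(a != b for a, b in zip(member_bits, head_bits)) / len(member_bits)
            return binary_entropy(p)            # Slepian-Wolf bound H(X|Y) under a BSC model

        print(update_rate([0, 1, 1, 0, 1, 0, 1, 0], [0, 1, 1, 0, 0, 0, 1, 0]))   # ~0.544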

  15. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by applying a scalar quantization strategy to the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which makes the proposed algorithm achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced for the proposed algorithm to achieve low bit rates. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.
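
    The side-information step can be sketched with ordinary least squares: predict the current band from co-located pixels of previously decoded bands. The band count, sizes and coefficients here are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(3)
        prev = rng.random((2, 64))              # two previous bands, 64 pixels each
        band = 0.6 * prev[0] + 0.3 * prev[1] + 0.05 * rng.random(64)   # current band

        A = np.vstack([prev, np.ones(64)]).T    # regressors plus intercept
        coef, *_ = np.linalg.lstsq(A, band, rcond=None)
        side_info = A @ coef                    # predicted band used as side information
        print(np.abs(side_info - band).max())   # small residual -> good side information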

  16. Neutron Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    Science.gov (United States)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by Neutron Activation Analysis (NAA) of 197Au foils. Also, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. The combined theoretical and experimental study is a first for Iran and serves to establish confidence in the code for further research. In the theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The fast and thermal neutron fluence rates obtained by the NAA method and by the MCNP code are compared.

  17. Four energy group neutron flux distribution in the Syrian miniature neutron source reactor using the WIMSD4 and CITATION code

    International Nuclear Information System (INIS)

    Khattab, K.; Omar, H.; Ghazi, N.

    2009-01-01

    A 3-D (R, θ, Z) neutronic model for the Miniature Neutron Source Reactor (MNSR) was developed earlier to conduct the reactor neutronic analysis. The group constants for all the reactor components were generated using the WIMSD4 code. The reactor excess reactivity and the four-group neutron flux distributions were calculated using the CITATION code. This model is used in this paper to calculate the point-wise four-energy-group neutron flux distributions in the MNSR versus the radius, angle and reactor axial directions. Good agreement is noticed between the measured and calculated thermal neutron flux in the inner and outer irradiation sites, with relative differences less than 7% and 5%, respectively. (author)

  18. Investigation of Anisotropy Caused by Cylinder Applicator on Dose Distribution around Cs-137 Brachytherapy Source using MCNP4C Code

    Directory of Open Access Journals (Sweden)

    Sedigheh Sina

    2011-06-01

    Introduction: Brachytherapy is a type of radiotherapy in which radioactive sources are used in proximity to tumors, normally for treatment of malignancies in the head, prostate and cervix. Materials and Methods: The Cs-137 Selectron source is a low-dose-rate (LDR) brachytherapy source used in a remote afterloading system for treatment of different cancers. This system uses active and inactive spherical sources of 2.5 mm diameter, which can be used in different configurations inside the applicator to obtain different dose distributions. In this study, the dose distribution at different distances from the source was first obtained around a single pellet inside the applicator in a water phantom using the MCNP4C Monte Carlo code. The simulations were then repeated for six active pellets in the applicator and for six point sources. Results: The anisotropy of dose distribution due to the presence of the applicator was obtained by dividing the dose at each distance and angle by the dose at the same distance and an angle of 90 degrees. According to the results, the doses decreased towards the applicator tips. For example, for points at distances of 5 and 7 cm from the source and an angle of 165 degrees, the discrepancies reached 5.8% and 5.1%, respectively. By increasing the number of pellets to six, these values reached 30% for the angle of 5 degrees. Discussion and Conclusion: The results indicate that the presence of the applicator causes a significant dose decrease at the tip of the applicator compared with the dose in the transverse plane. However, treatment planning systems assume an isotropic dose distribution around the source, which causes significant errors in treatment planning that are not negligible, especially for a large number of sources inside the applicator.

  19. Distributed Video Coding: Iterative Improvements

    DEFF Research Database (Denmark)

    Luong, Huynh Van

    Nowadays, emerging applications such as wireless visual sensor networks and wireless video surveillance are requiring lightweight video encoding with high coding efficiency and error-resilience. Distributed Video Coding (DVC) is a new coding paradigm which exploits the source statistics...... and noise modeling and also learn from the previously decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate the weaknesses of the block-based SI generation and also utilizes clustering of DCT blocks to capture cross-band correlation and increase local adaptivity in noise modeling. During decoding, the updated information is used to iteratively reestimate the motion and reconstruction in the proposed motion and reconstruction reestimation (MORE) scheme. The MORE scheme not only reestimates the motion vectors...

  20. Balanced distributed coding of omnidirectional images

    Science.gov (United States)

    Thirumalai, Vijayaraghavan; Tosic, Ivana; Frossard, Pascal

    2008-01-01

    This paper presents a distributed coding scheme for the representation of 3D scenes captured by stereo omni-directional cameras. We consider a scenario where images captured from two different viewpoints are encoded independently, with a balanced rate distribution among the different cameras. The distributed coding is built on multiresolution representation and partitioning of the visual information in each camera. The encoder transmits one partition after entropy coding, as well as the syndrome bits resulting from the channel encoding of the other partition. The decoder exploits the intra-view correlation and attempts to reconstruct the source image by combination of the entropy-coded partition and the syndrome information. At the same time, it exploits the inter-view correlation using motion estimation between images from different cameras. Experiments demonstrate that the distributed coding solution performs better than a scheme where images are handled independently, and that the coding rate stays balanced between encoders.

  1. Distributed space-time coding

    CERN Document Server

    Jing, Yindi

    2014-01-01

    Distributed Space-Time Coding (DSTC) is a cooperative relaying scheme that enables high reliability in wireless networks. This brief presents the basic concept of DSTC, its achievable performance, generalizations, code design, and differential use. Recent results on training design and channel estimation for DSTC and the performance of training-based DSTC are also discussed.

  2. UNIX code management and distribution

    International Nuclear Information System (INIS)

    Hung, T.; Kunz, P.F.

    1992-09-01

    We describe a code management and distribution system based on tools freely available for UNIX systems. At the master site, version control is managed with CVS, which is a layer on top of RCS, and distribution is done via NFS-mounted file systems. At remote sites, small modifications to CVS provide for interactive transactions with the CVS system at the master site, such that remote developers are true peers in the code development process.

  3. Transmission imaging with a coded source

    International Nuclear Information System (INIS)

    Stoner, W.W.; Sage, J.P.; Braun, M.; Wilson, D.T.; Barrett, H.H.

    1976-01-01

    The conventional approach to transmission imaging is to use a rotating anode x-ray tube, which provides the small, brilliant x-ray source needed to cast sharp images of acceptable intensity. Stationary anode sources, although inherently less brilliant, are more compatible with the use of large area anodes, and so they can be made more powerful than rotating anode sources. Spatial modulation of the source distribution provides a way to introduce detailed structure in the transmission images cast by large area sources, and this permits the recovery of high resolution images, in spite of the source diameter. The spatial modulation is deliberately chosen to optimize recovery of image structure; the modulation pattern is therefore called a "code." A variety of codes may be used; the essential mathematical property is that the code possess a sharply peaked autocorrelation function, because this property permits the decoding of the raw image cast by the coded source. Random point arrays, non-redundant point arrays, and the Fresnel zone pattern are examples of suitable codes. This paper is restricted to the case of the Fresnel zone pattern code, which has the unique additional property of generating raw images analogous to Fresnel holograms. Because the spatial frequencies of these raw images are extremely coarse compared with actual holograms, a photoreduction step onto a holographic plate is necessary before the decoded image may be displayed with the aid of coherent illumination.

  4. Distributed video coding with multiple side information

    DEFF Research Database (Denmark)

    Huang, Xin; Brites, C.; Ascenso, J.

    2009-01-01

    Distributed Video Coding (DVC) is a new video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of some decoder side information. The quality of the side information has a major impact on the DVC rate-distortion (RD) performance, in the same way the quality of the predictions had a major impact in predictive video coding. In this paper, a DVC solution exploiting multiple side information is proposed; the multiple side information is generated by frame interpolation and frame extrapolation, targeting to improve the side information of a single...

  5. Research on Primary Shielding Calculation Source Generation Codes

    Science.gov (United States)

    Zheng, Zheng; Mei, Qiliang; Li, Hui; Shangguan, Danhua; Zhang, Guangchun

    2017-09-01

    Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDFs) for the source particle sampling code of the J Monte Carlo Transport (JMCT) code, and a source particle sampling code is developed to sample source particle directions, types, coordinates, energies and weights from the CDFs. A source generation code is developed to transform three-dimensional (3D) power distributions in x-y-z geometry to source distributions in r-θ-z geometry for the J Discrete Ordinate Transport (JSNT) code. Validation is performed on PSC models of the Qinshan No. 1 nuclear power plant (NPP) and the CAP1400 and CAP1700 reactors. Numerical results show that the theoretical model and the codes are both correct.
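
    The CDF-then-sample flow can be pictured with a one-dimensional toy power distribution; the real codes work on 3D meshes, and everything below is an illustrative assumption:

        import numpy as np

        rng = np.random.default_rng(7)
        power = np.array([1.0, 3.0, 4.0, 2.0])      # relative power per mesh cell (toy)
        cdf = np.cumsum(power) / power.sum()        # cumulative distribution function

        u = rng.random(100_000)
        cells = np.searchsorted(cdf, u)             # inverse-transform sampling of source cells
        print(np.bincount(cells) / 100_000)         # ~[0.1, 0.3, 0.4, 0.2]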

  6. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes) for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
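
    Systematic LDGM encoding itself is simple enough to sketch in a few lines of Python; the sparsity level and sizes below are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(5)
        k, m = 12, 6                                     # message bits, parity bits
        P = (rng.random((k, m)) < 0.25).astype(int)      # low-density generator part

        msg = rng.integers(0, 2, k)
        codeword = np.concatenate([msg, (msg @ P) % 2])  # systematic codeword: G = [I | P]
        print(codeword)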

  7. A contribution to the analysis of the activity distribution of a radioactive source trapped inside a cylindrical volume, using the MCNPX code

    International Nuclear Information System (INIS)

    Portugal, L.; Oliveira, C.; Trindade, R.; Paiva, I.

    2006-01-01

    Orphan sources, activated materials or materials contaminated with natural or artificial radionuclides have been detected in scrap metal products destined for recycling. The melting of a source during the process could have economic, environmental and social impacts. From the point of view of radioactive waste management, a scenario of 100 tons of contaminated steel in one piece is a major problem. It is therefore of great importance to develop a methodology that allows the activity distribution inside a volume of steel to be predicted. In previous work we were able to distinguish between the cases where the source is disseminated over the entire cylinder and the cases where it is concentrated in different volumes. Now the main goal is to distinguish between different radii of spherical source geometries trapped inside the cylinder. For this, a methodology was proposed based on the ratio of the counts of two regions of the gamma spectrum, obtained with a sodium iodide detector, using the MCNPX Monte Carlo simulation code. These calculated ratios allow us to determine a function r = aR² + bR + c, where R is the ratio between the counts of the two regions of the gamma spectrum and r is the radius of the source. For simulation purposes, six 60Co sources were used (a point source, four spheres of 5 cm, 10 cm, 15 cm and 20 cm radius, and the overall contaminated cylinder) trapped inside two types of matrix, concrete and stainless steel. The methodology applied has been shown to predict and distinguish accurately the distribution of a source inside a material, roughly independently of the matrix and density considered. (authors)

  9. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between source symbols and the side information...

  10. Joint source-channel coding using variable length codes

    NARCIS (Netherlands)

    Balakirsky, V.B.

    2001-01-01

    We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (m_k, t_k), where m_k is a message generated by the source and t_k is a time instant...

  11. Noise Residual Learning for Noise Modeling in Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The noise model is one of the inherently difficult challenges in DVC. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes...

  12. Improved side information generation for distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2008-01-01

    As a new coding paradigm, distributed video coding (DVC) deals with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. The performance of DVC highly depends on the quality of side information. With a better side information generation method, fewer bits will be requested from the encoder and more reliable decoded frames will be obtained. In this paper, a side information generation method is introduced to further improve the rate-distortion (RD) performance of transform domain distributed video coding. This algorithm...

  13. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features such as low-complexity encoding, by mainly exploiting the source statistics at the decoder based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented based on extending a lossy Tran...... codec provides frame-by-frame encoding. Comparing the lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec can save up to 5%-13% in bits compared to JPEG-LS and H.264 Intra-frame lossless coding, and do so as a scalable-to-lossless coding.

  14. Present state of the SOURCES computer code

    International Nuclear Information System (INIS)

    Shores, Erik F.

    2002-01-01

    In various stages of development for over two decades, the SOURCES computer code continues to calculate neutron production rates and spectra from four types of problems: homogeneous media, two-region interfaces, three-region interfaces and that of a monoenergetic alpha particle beam incident on a slab of target material. Graduate work at the University of Missouri - Rolla, in addition to user feedback from a tutorial course, provided the impetus for a variety of code improvements. Recently upgraded to version 4B, initial modifications to SOURCES focused on updates to the 'tape5' decay data library. Shortly thereafter, efforts focused on development of a graphical user interface for the code. This paper documents the Los Alamos SOURCES Tape1 Creator and Library Link (LASTCALL) and describes additional library modifications in more detail. Minor improvements and planned enhancements are discussed.

  15. On Network Coded Distributed Storage

    DEFF Research Database (Denmark)

    Cabrera Guerrero, Juan Alberto; Roetter, Daniel Enrique Lucani; Fitzek, Frank Hanns Paul

    2016-01-01

    This paper focuses on distributed fog storage solutions, where a number of unreliable devices organize themselves in Peer-to-Peer (P2P) networks with the purpose to store reliably their data and that of other devices and/or local users and provide lower delay and higher throughput. Cloud storage systems typically rely on expensive infrastructure with centralized control to store, repair and access the data. This approach introduces a large delay for accessing and storing the data, driven in part by a high RTT between users and the cloud. These characteristics are at odds with the massive increase of devices and generated data in coming years as well as the requirements of low latency in many applications. We focus on characterizing optimal solutions for maintaining data availability when nodes in the fog continuously leave the network. In contrast with state-of-the-art data repair formulations, which...

  16. Development of authentication code for multi-access optical code division multiplexing based quantum key distribution

    Science.gov (United States)

    Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.

    2018-05-01

    A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an Optical Code Division Multiplexing (OCDMA) based QKD network. A unique address assigned to each individual user, coupled with a low probability of predicting the source of a qubit transmitted in the channel, offers an excellent security mechanism against any form of channel attack on an OCDMA-based QKD network. Flexibility in design, as well as ease of modifying the number of users, are further qualities of the code, in contrast to the Optical Orthogonal Codes (OOC) implemented earlier for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.

  17. Measuring Modularity in Open Source Code Bases

    Directory of Open Access Journals (Sweden)

    Roberto Milev

    2009-03-01

    Modularity of an open source software code base has been associated with growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different size, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.

  18. Distributed Cloud Storage Using Network Coding

    OpenAIRE

    Sipos, Marton A.; Fitzek, Frank; Roetter, Daniel Enrique Lucani; Pedersen, Morten Videbæk

    2014-01-01

    Distributed storage is usually considered within a cloud provider to ensure availability and reliability of the data. However, the user is still directly dependent on the quality of a single system. It is also entrusting the service provider with large amounts of private data, which may be accessed by a successful attack to that cloud system or even be inspected by government agencies in some countries. This paper advocates a general framework for network coding enabled distributed storage over multi...

  20. Applications guide to the RSIC-distributed version of the MCNP code (coupled Monte Carlo neutron-photon Code)

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1985-09-01

    An overview of the RSIC-distributed version of the MCNP code (a coupled Monte Carlo neutron-photon code) is presented. All general features of the code, from machine hardware requirements to theoretical details, are discussed. The current nuclide cross-section and other libraries available in the standard code package are specified, and a realistic example of the flexible geometry input is given. Standard and nonstandard source, estimator, and variance-reduction procedures are outlined. Examples of correct usage and possible misuse of certain code features are presented graphically and in standard output listings. Finally, itemized summaries of sample problems, various MCNP code documentation, and future work are given.

  1. A robust fusion method for multiview distributed video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina

    2014-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the redundancy of the source (video) at the decoder side, as opposed to predictive coding, where the encoder leverages the redundancy. To exploit the correlation between views, multiview predictive video codecs require the encoder...... with a robust fusion system able to improve the quality of the fused SI along the decoding process through a learning process using already decoded data. We shall here take the approach to fuse the estimated distributions of the SIs as opposed to a conventional fusion algorithm based on the fusion of pixel values. The proposed solution is able to achieve gains up to 0.9 dB in Bjøntegaard difference when compared with the best-performing (in a RD sense) single SI DVC decoder, chosen as the best of an inter-view and a temporal SI-based one...

  2. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
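
    A worked toy example of syndrome-source-coding in Python, using the (7,4) Hamming code; the paper's codes and sources are more general, and this choice is purely illustrative. A sparse 7-bit block is compressed to its 3-bit syndrome and recovered as the minimum-weight pattern in that coset:

        import numpy as np
        from itertools import product

        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])   # Hamming(7,4) parity-check matrix

        def syndrome(x):
            return tuple((H @ x) % 2)

        # Decoder table: syndrome -> minimum-weight "error pattern" (the source block)
        table = {}
        for bits in product([0, 1], repeat=7):
            x = np.array(bits)
            s = syndrome(x)
            if s not in table or x.sum() < table[s].sum():
                table[s] = x

        src = np.array([0, 0, 0, 0, 1, 0, 0])    # sparse source block, weight 1
        print(np.array_equal(table[syndrome(src)], src))   # True: 7 bits -> 3 bits, lossless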

  3. Recent advances in multiview distributed video coding

    Science.gov (United States)

    Dufaux, Frederic; Ouaret, Mourad; Ebrahimi, Touradj

    2007-04-01

    We consider dense networks of surveillance cameras capturing overlapped images of the same scene from different viewing directions, such a scenario being referred to as multi-view. Data compression is paramount in such a system due to the large amount of captured data. In this paper, we propose a Multi-view Distributed Video Coding approach. It allows for low complexity / low power consumption at the encoder side, and the exploitation of inter-view correlation without communications among the cameras. We introduce a combination of temporal intra-view side information and homography inter-view side information. Simulation results show both the improvement of the side information, as well as a significant gain in terms of coding efficiency.

  4. On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk

    2013-01-01

    quality is developed. A linear coding structure designed to gracefully encapsulate layered source coding provides both low complexity of the utilised linear coding while enabling robust erasure correction in the form of fountain coding capabilities. The proposed linear coding structure advocates efficient...

  5. Source Code Vulnerabilities in IoT Software Systems

    Directory of Open Access Journals (Sweden)

    Saleh Mohamed Alnaeli

    2017-08-01

    An empirical study that examines the usage of known vulnerable statements in software systems developed in C/C++ and used for IoT is presented. The study is conducted on 18 open source systems comprising millions of lines of code and containing thousands of files. Static analysis methods are applied to each system to determine the number of unsafe commands (e.g., strcpy, strcmp, and strlen) that are well known among research communities to cause potential risks and security concerns, thereby decreasing a system's robustness and quality. These unsafe statements are banned by many companies (e.g., Microsoft). The use of these commands should be avoided from the start when writing code and should be removed from legacy code over time, as recommended by new C/C++ language standards. Each system is analyzed and the distribution of the known unsafe commands is presented. Historical trends in the usage of the unsafe commands of 7 of the systems are presented to show how the studied systems evolved over time with respect to the vulnerable code. The results show that the most prevalent unsafe command used in most systems is memcpy, followed by strlen. These results can be used to help train software developers on secure coding practices so that they can write higher quality software systems.
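
    A rough Python approximation of such a scan is shown below; a word-boundary regex over C/C++ sources is a crude stand-in for real static analysis, and the file patterns are assumptions:

        import re
        from collections import Counter
        from pathlib import Path

        UNSAFE = ("strcpy", "strcmp", "strlen", "memcpy", "sprintf")
        CALL = re.compile(r"\b(" + "|".join(UNSAFE) + r")\s*\(")

        def scan(root):
            """Count occurrences of flagged calls under a source tree."""
            counts = Counter()
            for path in Path(root).rglob("*.[ch]"):
                text = path.read_text(errors="ignore")
                counts.update(m.group(1) for m in CALL.finditer(text))
            return counts

        # print(scan("path/to/project").most_common())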

  6. Time coded distribution via broadcasting stations

    Science.gov (United States)

    Leschiutta, S.; Pettiti, V.; Detoma, E.

    1979-01-01

    The distribution of standard time signals via AM and FM broadcasting stations offers the distinct advantages of wide-area coverage and inexpensive receivers, but the signals are radiated a limited number of times per day, are not usually available during the night, and no full and automatic synchronization of a remote clock is possible. As an attempt to overcome some of these problems, a time-coded signal with complete date information is diffused by the IEN via the national broadcasting networks in Italy. These signals are radiated by some 120 AM and about 3000 FM and TV transmitters around the country. In this way, a time-ordered system with an accuracy of a couple of milliseconds is easily achieved.

  7. The Visual Code Navigator : An Interactive Toolset for Source Code Investigation

    NARCIS (Netherlands)

    Lommerse, Gerard; Nossin, Freek; Voinea, Lucian; Telea, Alexandru

    2005-01-01

    We present the Visual Code Navigator, a set of three interrelated visual tools that we developed for exploring large source code software projects from three different perspectives, or views: The syntactic view shows the syntactic constructs in the source code. The symbol view shows the objects a

  8. Source Code Stylometry Improvements in Python

    Science.gov (United States)

    2017-12-14

    Just as a person can be identified via their handwriting, or an author identified by their style of prose, programmers can be identified by their code. Provided a labelled training set of code samples (example in Fig. 1), the techniques used in stylometry can identify the author of a piece of code or even...

  9. Bit rates in audio source coding

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.

    1992-01-01

    The goal is to introduce and solve the audio coding optimization problem. Psychoacoustic results such as masking and excitation pattern models are combined with results from rate distortion theory to formulate the audio coding optimization problem. The solution of the audio optimization problem is a

  10. Using National Drug Codes and drug knowledge bases to organize prescription records from multiple sources.

    Science.gov (United States)

    Simonaitis, Linas; McDonald, Clement J

    2009-10-01

    The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the in-patient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.
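
    The organization step amounts to a two-stage lookup, sketched below in Python with toy tables; the codes and names are illustrative, not from the study's data:

        formulary = {"LOC001": "00002-3227-30"}       # local code -> NDC (per pharmacy source)
        dkb = {"00002-3227-30": "insulin lispro"}     # NDC -> drug concept (DKB mapping table)

        def organize(record):
            """Map a prescription record to a DKB concept via its NDC."""
            ndc = record.get("ndc") or formulary.get(record.get("local_code"))
            concept = dkb.get(ndc)
            return {"ndc": ndc, "concept": concept, "covered": concept is not None}

        print(organize({"local_code": "LOC001"}))     # covered via formulary translation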

  11. Data processing with microcode designed with source coding

    Science.gov (United States)

    McCoy, James A; Morrison, Steven E

    2013-05-07

    Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.

  12. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  13. Weight Distribution for Non-binary Cluster LDPC Code Ensemble

    Science.gov (United States)

    Nozaki, Takayuki; Maehara, Masaki; Kasai, Kenta; Sakaniwa, Kohichi

    In this paper, we derive the average weight distributions for the irregular non-binary cluster low-density parity-check (LDPC) code ensembles. Moreover, we give the exponential growth rate of the average weight distribution in the limit of large code length. We show that there exist (2, d_c)-regular non-binary cluster LDPC code ensembles whose normalized typical minimum distances are strictly positive.
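
    In standard notation (an assumption about this paper's exact definitions), the growth rate referred to here is

        r(\omega) = \lim_{n \to \infty} \frac{1}{n} \log A_n(\lfloor \omega n \rfloor)

    where A_n(w) is the ensemble-average number of codewords of Hamming weight w and ω is the normalized weight; a strictly positive normalized typical minimum distance means r(ω) is negative for all sufficiently small ω > 0, so low-weight codewords are exponentially rare in the ensemble.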

  14. Joint disparity and motion estimation using optical flow for multiview Distributed Video Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Raket, Lars Lau; Brites, Catarina

    2014-01-01

    Distributed Video Coding (DVC) is a video coding paradigm where the source statistics are exploited at the decoder based on the availability of Side Information (SI). In a monoview video codec, the SI is generated by exploiting the temporal redundancy of the video, through motion estimation and c...

  15. Iterative List Decoding of Concatenated Source-Channel Codes

    Directory of Open Access Journals (Sweden)

    Hedayat Ahmadreza

    2005-01-01

    Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect will get magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable-length codes for joint source-channel decoding. In this paper, we improve the performance of residual redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLCs is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.
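
    The CRC-assisted list idea reduces to: rank candidate decodings by likelihood and keep the first that passes the outer check. A minimal Python sketch, using CRC-32 truncated to 16 bits as a stand-in for the paper's nonbinary CRC:

        import zlib

        def list_decode(candidates, crc_expected):
            """Return the most likely candidate decoding that passes the CRC."""
            for bits in candidates:               # assumed ordered, most likely first
                if zlib.crc32(bytes(bits)) & 0xFFFF == crc_expected:
                    return bits
            return None                           # no candidate on the list checks out

        truth = [1, 0, 3, 2]                      # transmitted symbol sequence
        crc = zlib.crc32(bytes(truth)) & 0xFFFF   # outer CRC sent with the data
        print(list_decode([[1, 0, 3, 3], [1, 0, 3, 2], [1, 1, 3, 2]], crc))   # [1, 0, 3, 2]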

  16. The Astrophysics Source Code Library by the numbers

    Science.gov (United States)

    Allen, Alice; Teuben, Peter; Berriman, G. Bruce; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert; Ryan, PW; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Wallin, John; Warmels, Rein

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) was founded in 1999 by Robert Nemiroff and John Wallin. ASCL editors seek both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and add entries for the found codes to the library. Software authors can submit their codes to the ASCL as well. This ensures a comprehensive listing covering a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL is indexed by both NASA’s Astrophysics Data System (ADS) and Web of Science, making software used in research more discoverable. This presentation covers the growth in the ASCL’s number of entries, the number of citations to its entries, and in which journals those citations appear. It also discusses what changes have been made to the ASCL recently, and what its plans are for the future.

  17. Code Forking, Governance, and Sustainability in Open Source Software

    Directory of Open Access Journals (Sweden)

    Juho Lindman

    2013-01-01

    The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is, to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility of forking code, affects the governance and sustainability of open source initiatives on three distinct levels: software, community, and ecosystem. On the software level, the right to fork makes planned obsolescence, versioning, vendor lock-in, end-of-support issues, and similar initiatives all but impossible to implement. On the community level, forking impacts both sustainability and governance through the power it grants the community to safeguard against unfavourable actions by corporations or project leaders. On the business-ecosystem level, forking can serve as a catalyst for innovation while simultaneously promoting better quality software through natural selection. Thus, forking helps keep open source initiatives relevant and presents opportunities for the development and commercialization of current and abandoned programs.

  18. Visualizing Debugging Activity in Source Code Repositories

    OpenAIRE

    Voinea, Lucian; Telea, Alexandru

    2007-01-01

    We present the use of the CVSgrab visualization tool for understanding the debugging activity in the Mozilla project. We show how to display the distribution of different bug types over the project structure, locate project components which undergo heavy debugging activity, and get insight in the bug evolution in time.

  19. Visualizing Debugging Activity in Source Code Repositories

    NARCIS (Netherlands)

    Voinea, Lucian; Telea, Alexandru

    2007-01-01

    We present the use of the CVSgrab visualization tool for understanding the debugging activity in the Mozilla project. We show how to display the distribution of different bug types over the project structure, locate project components which undergo heavy debugging activity, and get insight in the bug evolution in time.

  20. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks.

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-07-09

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication in an environment where the packet size changes at each hop, with smaller energy consumption.

  1. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks †

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-01-01

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication in an environment where the packet size changes at each hop, with smaller energy consumption. PMID:27409616
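
    For orientation, here is a generic hash-chain construction in the spirit of secure code dissemination. It is not one of the paper's three schemes, and it assumes packet boundaries stay fixed end-to-end, which is precisely the assumption the paper drops; only the first hash needs a signature or other authenticated channel.

        import hashlib

        def tag_packets(packets):
            # Walk backwards: append to each packet the hash of the (already
            # tagged) next packet; the last packet gets an all-zero trailer.
            trailer = b'\x00' * 32
            tagged = []
            for p in reversed(packets):
                t = p + trailer
                tagged.append(t)
                trailer = hashlib.sha256(t).digest()
            tagged.reverse()
            return trailer, tagged   # 'trailer' is now H(first packet): sign it

        def verify(root_hash, tagged):
            expected = root_hash
            for t in tagged:
                if hashlib.sha256(t).digest() != expected:
                    return False     # tampered or out-of-order packet
                expected = t[-32:]   # hash of the next packet, carried in this one
            return True

        pkts = [b'code-image-chunk-%d' % i for i in range(4)]
        root, stream = tag_packets(pkts)
        assert verify(root, stream)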

  2. Locally Minimum Storage Regenerating Codes in Distributed Cloud Storage Systems

    Institute of Scientific and Technical Information of China (English)

    Jing Wang; Wei Luo; Wei Liang; Xiangyang Liu; Xiaodai Dong

    2017-01-01

    In distributed cloud storage systems, inevitably there exist multiple node failures at the same time. The existing methods of regenerating codes, including minimum storage regenerating (MSR) codes and minimum bandwidth regenerating (MBR) codes, are mainly to repair one single or several failed nodes, unable to meet the repair need of distributed cloud storage systems. In this paper, we present locally minimum storage regenerating (LMSR) codes to recover multiple failed nodes at the same time. Specifically, the nodes in distributed cloud storage systems are divided into multiple local groups, and in each local group (4, 2) or (5, 3) MSR codes are constructed. Moreover, the grouping method of storage nodes and the repairing process of failed nodes in local groups are studied. Theoretical analysis shows that LMSR codes can achieve the same storage overhead as MSR codes. Furthermore, we verify by means of simulation that, compared with MSR codes, LMSR codes can reduce the repair bandwidth and disk I/O overhead effectively.

  3. Multimedia distribution using network coding on the iphone platform

    DEFF Research Database (Denmark)

    Vingelmann, Peter; Pedersen, Morten Videbæk; Fitzek, Frank

    2010-01-01

    This paper looks into the implementation details of random linear network coding on the Apple iPhone and iPod Touch mobile platforms for multimedia distribution. Previous implementations of network coding on this platform failed to achieve a throughput which is sufficient to saturate the WLAN...
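
    The core mechanism is easy to sketch. Below is a minimal random linear network coding example over GF(2) in Python/NumPy; the cited implementations target GF(2^8) on iOS, so the field choice, packet sizes, and the rank-driven receive loop are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def rlnc_encode(packets, n_coded):
            # Each coded packet is a random XOR (GF(2)) combination of the sources.
            P = np.asarray(packets, dtype=np.uint8)
            C = rng.integers(0, 2, (n_coded, P.shape[0]), dtype=np.uint8)
            return C, C @ P % 2

        def rlnc_decode(C, Y):
            # Gaussian elimination mod 2 on [C | Y]; fails until rank reaches k.
            A = np.concatenate([C, Y], axis=1).astype(np.uint8)
            k, row = C.shape[1], 0
            for col in range(k):
                piv = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
                if piv is None:
                    raise ValueError('not enough innovative packets yet')
                A[[row, piv]] = A[[piv, row]]
                for r in range(A.shape[0]):
                    if r != row and A[r, col]:
                        A[r] ^= A[row]
                row += 1
            return A[:k, k:]

        src = rng.integers(0, 2, (4, 16), dtype=np.uint8)   # k = 4 packets, 16 bits
        C = np.empty((0, 4), dtype=np.uint8)
        Y = np.empty((0, 16), dtype=np.uint8)
        while True:                                  # receive until decodable
            c, y = rlnc_encode(src, 1)
            C, Y = np.vstack([C, c]), np.vstack([Y, y])
            try:
                recovered = rlnc_decode(C, Y)
                break
            except ValueError:
                pass
        assert (recovered == src).all()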

  4. Conflict free network coding for distributed storage networks

    KAUST Repository

    Al-Habob, Ahmed A.; Sorour, Sameh; Aboutorab, Neda; Sadeghi, Parastoo

    2015-01-01

    © 2015 IEEE. In this paper, we design a conflict free instantly decodable network coding (IDNC) solution for file download from distributed storage servers. Considering previously downloaded files at the clients from these servers as side

  5. Blahut-Arimoto algorithm and code design for action-dependent source coding problems

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar

    2013-01-01

    The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function...... for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding...... strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function....
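
    For reference, here is a minimal sketch of the classical Blahut-Arimoto iteration for the rate-distortion function of a discrete source; the paper's algorithm extends this to action-dependent side information, which the sketch does not model. The Lagrange parameter beta and the binary Hamming example are assumptions.

        import numpy as np

        def blahut_arimoto(p_x, d, beta, n_iter=200):
            # p_x: source pmf; d[i, j]: distortion d(x_i, xhat_j); beta: Lagrange
            # multiplier that sweeps out the rate-distortion curve.
            A = np.exp(-beta * d)
            q = np.full(d.shape[1], 1.0 / d.shape[1])   # output marginal q(xhat)
            for _ in range(n_iter):
                P = A * q                               # unnormalized p(xhat|x)
                P /= P.sum(axis=1, keepdims=True)
                q = p_x @ P                             # re-estimated marginal
            P = A * q
            P /= P.sum(axis=1, keepdims=True)           # make P consistent with q
            D = float(p_x @ (P * d).sum(axis=1))
            R = float(p_x @ (P * np.log2(P / q)).sum(axis=1))
            return R, D

        # Binary source, Hamming distortion: R(D) should approach 1 - H_b(D).
        p_x = np.array([0.5, 0.5])
        d = np.array([[0.0, 1.0], [1.0, 0.0]])
        print(blahut_arimoto(p_x, d, beta=4.0))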

  6. Codon Distribution in Error-Detecting Circular Codes

    Directory of Open Access Journals (Sweden)

    Elena Fimmel

    2016-03-01

    Full Text Available In 1957, Francis Crick et al. suggested an ingenious explanation for the process of frame maintenance. The idea was based on the notion of comma-free codes. Although Crick’s hypothesis proved to be wrong, in 1996, Arquès and Michel discovered the existence of a weaker version of such codes in eukaryote and prokaryote genomes, namely the so-called circular codes. Since then, circular code theory has invariably evoked great interest and made significant progress. In this article, the codon distributions in maximal comma-free, maximal self-complementary C³ and maximal self-complementary circular codes are discussed, i.e., we investigate in how many of such codes a given codon participates. As the main (and surprising) result, it is shown that the codons can be separated into very few classes (three, or five, or six) with respect to their frequency. Moreover, the distribution classes can be hierarchically ordered as refinements from maximal comma-free codes via maximal self-complementary C³ codes to maximal self-complementary circular codes.

  7. Codon Distribution in Error-Detecting Circular Codes.

    Science.gov (United States)

    Fimmel, Elena; Strüngmann, Lutz

    2016-03-15

    In 1957, Francis Crick et al. suggested an ingenious explanation for the process of frame maintenance. The idea was based on the notion of comma-free codes. Although Crick's hypothesis proved to be wrong, in 1996, Arquès and Michel discovered the existence of a weaker version of such codes in eukaryote and prokaryote genomes, namely the so-called circular codes. Since then, circular code theory has invariably evoked great interest and made significant progress. In this article, the codon distributions in maximal comma-free, maximal self-complementary C³ and maximal self-complementary circular codes are discussed, i.e., we investigate in how many of such codes a given codon participates. As the main (and surprising) result, it is shown that the codons can be separated into very few classes (three, or five, or six) with respect to their frequency. Moreover, the distribution classes can be hierarchically ordered as refinements from maximal comma-free codes via maximal self-complementary C³ codes to maximal self-complementary circular codes.
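
    As a small illustration of the combinatorics involved, the sketch below tests the stronger comma-free property of a set of codons: no codon of the set may be read in a shifted frame of any concatenation of two codons from the set. Deciding full circularity requires a graph- or automaton-based test beyond this sketch, and the example codon sets are arbitrary.

        def is_comma_free(codons):
            # X is comma-free if no codon of X appears in a shifted reading
            # frame of any concatenation uv with u, v in X.
            X = set(codons)
            for u in X:
                for v in X:
                    w = u + v
                    if w[1:4] in X or w[2:5] in X:
                        return False
            return True

        print(is_comma_free({'AAT', 'ACT'}))   # True
        print(is_comma_free({'AAC', 'ACA'}))   # False: 'ACA' appears frame-shifted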

  8. Automatic code generation for distributed robotic systems

    International Nuclear Information System (INIS)

    Jones, J.P.

    1993-01-01

    Hetero Helix is a software environment which supports relatively large robotic system development projects. The environment supports a heterogeneous set of message-passing LAN-connected common-bus multiprocessors, but the programming model seen by software developers is a simple shared memory. The conceptual simplicity of shared memory makes it an extremely attractive programming model, especially in large projects where coordinating a large number of people can itself become a significant source of complexity. We present results from three system development efforts conducted at Oak Ridge National Laboratory over the past several years. Each of these efforts used automatic software generation to create 10 to 20 percent of the system

  9. Development of in-vessel source term analysis code, tracer

    International Nuclear Information System (INIS)

    Miyagi, K.; Miyahara, S.

    1996-01-01

    Analyses of radionuclide transport in fuel failure accidents (generally referred to as source terms) are considered to be important, especially in severe accident evaluation. The TRACER code has been developed to realistically predict the time-dependent behavior of FPs and aerosols within the primary cooling system for a wide range of fuel failure events. This paper presents the model description, results of a validation study, the recent model advancement status of the code, and results of check-out calculations under reactor conditions. (author)

  10. Distributed Video Coding for Multiview and Video-plus-depth Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo

    The interest in Distributed Video Coding (DVC) systems has grown considerably in the academic world in recent years. With DVC the correlation between frames is exploited at the decoder (joint decoding). The encoder codes the frame independently, performing relatively simple operations. Therefore......, with DVC the complexity is shifted from encoder to decoder, making the coding architecture a viable solution for encoders with limited resources. DVC may empower new applications which can benefit from this reversed coding architecture. Multiview Distributed Video Coding (M-DVC) is the application...... of the to-be-decoded frame. Another key element is the Residual estimation, indicating the reliability of the SI, which is used to calculate the parameters of the correlation noise model between SI and original frame. In this thesis new methods for Inter-camera SI generation are analyzed in the Stereo...

  11. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded-mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded-mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 µm. To overcome this challenge, the coded-mask and object are magnified by making the distance from the coded-mask to the object much smaller than the distance from object to detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.

  12. An efficient chaotic source coding scheme with variable-length blocks

    International Nuclear Information System (INIS)

    Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong

    2011-01-01

    An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding. (general)

  13. Validation uncertainty of MATRA code for subchannel void distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae-Hyun; Kim, S. J.; Kwon, H.; Seo, K. W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    To extend code capability to whole-core subchannel analysis, pre-conditioned Krylov matrix solvers such as BiCGSTAB and GMRES are implemented in the MATRA code, as well as parallel computing algorithms using MPI and OpenMP. It is coded in Fortran 90 and has some user-friendly features such as a graphical user interface. The MATRA code was approved by the Korean regulatory body for design calculations of the integral-type PWR named SMART. The major role of a subchannel code is to evaluate the core thermal margin through hot channel analysis and uncertainty evaluation for CHF predictions. In addition, it is potentially used for the best estimation of the core thermal hydraulic field by incorporation into multiphysics and/or multi-scale code systems. In this study we examined a validation process for the subchannel code MATRA, specifically in the prediction of subchannel void distributions. The primary objective of validation is to estimate a range within which the simulation modeling error lies. The experimental data for subchannel void distributions at steady-state and transient conditions were provided in the framework of the OECD/NEA UAM benchmark program. The validation uncertainty of the MATRA code was evaluated for a specific experimental condition by comparing the simulation result and experimental data. A validation process should be preceded by code and solution verification; however, quantification of verification uncertainty was not addressed in this study. The validation uncertainty of the MATRA code for predicting subchannel void distribution was evaluated for a single data point of void fraction measurement in a 5x5 PWR test bundle in the framework of the OECD UAM benchmark program. The validation standard uncertainties were evaluated as 4.2%, 3.9%, and 2.8% with the Monte-Carlo approach at the axial levels of 2216 mm, 2669 mm, and 3177 mm, respectively. The sensitivity coefficient approach revealed similar uncertainties but did not account for the nonlinear effects on the

  14. Panchromatic spectral energy distributions of Herschel sources

    Science.gov (United States)

    Berta, S.; Lutz, D.; Santini, P.; Wuyts, S.; Rosario, D.; Brisbin, D.; Cooray, A.; Franceschini, A.; Gruppioni, C.; Hatziminaoglou, E.; Hwang, H. S.; Le Floc'h, E.; Magnelli, B.; Nordon, R.; Oliver, S.; Page, M. J.; Popesso, P.; Pozzetti, L.; Pozzi, F.; Riguccini, L.; Rodighiero, G.; Roseboom, I.; Scott, D.; Symeonidis, M.; Valtchanov, I.; Viero, M.; Wang, L.

    2013-03-01

    Combining far-infrared Herschel photometry from the PACS Evolutionary Probe (PEP) and Herschel Multi-tiered Extragalactic Survey (HerMES) guaranteed time programs with ancillary datasets in the GOODS-N, GOODS-S, and COSMOS fields, it is possible to sample the 8-500 μm spectral energy distributions (SEDs) of galaxies with at least 7-10 bands. Extending to the UV, optical, and near-infrared, the number of bands increases up to 43. We reproduce the distribution of galaxies in a carefully selected restframe ten-color space, based on this rich data set, using a superposition of multivariate Gaussian modes. We use this model to classify galaxies and build median SEDs of each class, which are then fitted with a modified version of the magphys code that combines stellar light, emission from dust heated by stars and a possible warm dust contribution heated by an active galactic nucleus (AGN). The color distribution of galaxies in each of the considered fields can be well described with the combination of 6-9 classes, spanning a large range of far- to near-infrared luminosity ratios, as well as different strengths of the AGN contribution to bolometric luminosities. The defined Gaussian grouping is used to identify rare or odd sources. The zoology of outliers includes Herschel-detected ellipticals, very blue z ~ 1 Ly-break galaxies, quiescent spirals, and torus-dominated AGN with star formation. Out of these groups and outliers, a new template library is assembled, consisting of 32 SEDs describing the intrinsic scatter in the restframe UV-to-submm colors of infrared galaxies. This library is tested against L(IR) estimates with and without Herschel data included, and compared to eight other popular methods often adopted in the literature. When implementing Herschel photometry, these approaches produce L(IR) values consistent with each other within a median absolute deviation of 10-20%, the scatter being dominated more by fine tuning of the codes, rather than by the choice of

  15. STADIC: a computer code for combining probability distributions

    International Nuclear Information System (INIS)

    Cairns, J.J.; Fleming, K.N.

    1977-03-01

    The STADIC computer code uses a Monte Carlo simulation technique for combining probability distributions. The specific function for combining the input distributions is defined by the user by introducing the appropriate FORTRAN statements into the appropriate subroutine. The code generates a Monte Carlo sampling from each of the input distributions and combines these according to the user-supplied function to provide, in essence, a random sampling of the combined distribution. When the desired number of samples is obtained, the output routine calculates the mean, standard deviation, and confidence limits of the resultant distribution. This method of combining probability distributions is particularly useful in cases where analytical approaches are either too difficult or undefined
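
    A minimal modern analogue of this sampling scheme is sketched below (STADIC itself is FORTRAN; the combination function, the lognormal inputs, and the 90% interval here are assumptions chosen only for illustration).

        import numpy as np

        rng = np.random.default_rng(0)

        def combine(dists, func, n=100_000, cl=0.90):
            # Sample each input distribution, push the samples through the
            # user-supplied combination function, then summarize the result.
            samples = func(*[draw(n) for draw in dists])
            lo, hi = np.percentile(samples, [(1 - cl) / 2 * 100, (1 + cl) / 2 * 100])
            return samples.mean(), samples.std(ddof=1), (lo, hi)

        # Hypothetical example: frequency = initiator rate * failure probability.
        inputs = [lambda n: rng.lognormal(np.log(1e-3), 0.5, n),
                  lambda n: rng.lognormal(np.log(1e-2), 0.8, n)]
        mean, std, (lo, hi) = combine(inputs, lambda lam, p: lam * p)
        print(f'mean={mean:.3e}  std={std:.3e}  90% limits=({lo:.3e}, {hi:.3e})')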

  16. An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem

    Directory of Open Access Journals (Sweden)

    Tu Zhenyu

    2005-01-01

    Full Text Available A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty in the proposed scheme is the introduction of a syndrome former (SF) in the source encoder and an inverse syndrome former (ISF) in the source decoder to efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallel and serial concatenated codes, and particularly parallel and serial turbo codes, where this appears less obvious, an efficient way of constructing linear-complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provably optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.
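
    The binning idea is easy to demonstrate with an off-the-shelf linear code. Below, the parity-check matrix of a (7,4) Hamming code acts as the syndrome former: the encoder transmits only the 3-bit syndrome of a 7-bit block, and the decoder recovers the block from correlated side information. The code choice and the at-most-one-bit correlation noise model are illustrative; the paper's construction targets turbo codes.

        import numpy as np

        # Parity-check matrix of the (7,4) Hamming code; column j encodes j+1.
        H = np.array([[0, 0, 0, 1, 1, 1, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

        def syndrome(x):                  # the syndrome former: s = H x (mod 2)
            return H @ x % 2

        def sw_decode(s_x, y):
            # Recover x from its 3-bit syndrome plus side information y,
            # assuming the correlation noise x XOR y flips at most one bit.
            z = (s_x + syndrome(y)) % 2   # z = H (x XOR y)
            e = np.zeros(7, dtype=np.uint8)
            if z.any():
                e[4 * z[0] + 2 * z[1] + z[2] - 1] = 1   # z reads off the index
            return y ^ e

        rng = np.random.default_rng(3)
        x = rng.integers(0, 2, 7, dtype=np.uint8)
        y = x.copy()
        y[5] ^= 1                         # correlated side information at decoder
        assert (sw_decode(syndrome(x), y) == x).all()   # 7 bits sent as only 3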

  17. Java Source Code Analysis for API Migration to Embedded Systems

    Energy Technology Data Exchange (ETDEWEB)

    Winter, Victor [Univ. of Nebraska, Omaha, NE (United States); McCoy, James A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guerrero, Jonathan [Univ. of Nebraska, Omaha, NE (United States); Reinke, Carl Werner [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perry, James Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java’s APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.

  18. A code for obtaining temperature distribution by finite element method

    International Nuclear Information System (INIS)

    Bloch, M.

    1984-01-01

    The ELEFIB computer code, written in Fortran, which uses the finite element method to calculate the temperature distribution of one- and two-dimensional problems, in the steady state or in the transient phase of heat transfer, is presented. The formulation of the equations uses the Galerkin method. Some examples are shown and the results are compared with other papers. The comparative evaluation shows that the elaborated code gives good values. (M.C.K.) [pt
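
    For flavor, here is a minimal Galerkin finite-element solution of steady one-dimensional heat conduction in Python/NumPy (ELEFIB itself is Fortran; the mesh size, conductivity, source term, and boundary temperatures below are arbitrary assumptions).

        import numpy as np

        def fem_1d_temperature(n_el=10, L=1.0, k=1.0, q=0.0, T0=100.0, TL=0.0):
            # Galerkin finite elements with linear shape functions for
            # -k T'' = q on [0, L] with fixed end temperatures T0 and TL.
            n = n_el + 1
            h = L / n_el
            K = np.zeros((n, n))
            f = np.zeros(n)
            ke = k / h * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness
            fe = q * h / 2 * np.array([1.0, 1.0])               # consistent load
            for e in range(n_el):
                K[e:e + 2, e:e + 2] += ke
                f[e:e + 2] += fe
            # Dirichlet boundary conditions at both ends
            for node, T in ((0, T0), (n - 1, TL)):
                K[node, :] = 0.0
                K[node, node] = 1.0
                f[node] = T
            return np.linalg.solve(K, f)

        print(fem_1d_temperature())   # linear profile from 100 to 0 when q = 0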

  19. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    Science.gov (United States)

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework. It is not easy for teachers to judge whether source code has been plagiarized or not. Traditional detection algorithms cannot fit this…

  20. Automating RPM Creation from a Source Code Repository

    Science.gov (United States)

    2012-02-01

        apps/usr --with-libpq=/apps/postgres
        make
        rm -rf $RPM_BUILD_ROOT
        umask 0077
        mkdir -p $RPM_BUILD_ROOT/usr/local/bin
        mkdir -p $RPM_BUILD_ROOT...
    ...from a source code repository.
        %pre
        %prep
        %setup
        %build
        ./autogen.sh ; ./configure --with-db=/apps/db --with-libpq=/apps/postgres
        make

  1. Source Coding in Networks with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2016-01-01

    results to a joint source coding and denoising problem. We consider a network with a centralized topology and a given weighted sum-rate constraint, where the received signals at the center are to be fused to maximize the output SNR while enforcing no linear distortion. We show that one can design...

  2. Re-estimation of Motion and Reconstruction for Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Raket, Lars Lau; Forchhammer, Søren

    2014-01-01

    Transform domain Wyner-Ziv (TDWZ) video coding is an efficient approach to distributed video coding (DVC), which provides low complexity encoding by exploiting the source statistics at the decoder side. The DVC coding efficiency depends mainly on side information and noise modeling. This paper...... proposes a motion re-estimation technique based on optical flow to improve side information and noise residual frames by taking partially decoded information into account. To improve noise modeling, a noise residual motion re-estimation technique is proposed. Residual motion compensation with motion...

  3. Protect Heterogeneous Environment Distributed Computing from Malicious Code Assignment

    Directory of Open Access Journals (Sweden)

    V. S. Gorbatov

    2011-09-01

    Full Text Available The paper describes a practical implementation of a system that protects distributed computing in a heterogeneous environment from malicious code submitted as part of a computing assignment. The choice of technologies, the development of data structures, and a performance evaluation of the implemented security system are presented.

  4. Joint source/channel coding of scalable video over noisy channels

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, G.; Zakhor, A. [Department of Electrical Engineering and Computer Sciences University of California Berkeley, California94720 (United States)

    1997-01-01

    We propose an optimal bit allocation strategy for a joint source/channel video codec over a noisy channel when the channel state is assumed to be known. Our approach is to partition source and channel coding bits in such a way that the expected distortion is minimized. The particular source coding algorithm we use is rate scalable and is based on 3D subband coding with multi-rate quantization. We show that using this strategy, transmission of video over very noisy channels still renders acceptable visual quality, and outperforms schemes that use equal error protection only. The flexibility of the algorithm also permits the bit allocation to be selected optimally when the channel state is in the form of a probability distribution instead of a deterministic state. © 1997 American Institute of Physics.
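
    The partitioning idea can be illustrated with a toy model (all of the distortion and failure-probability expressions below are assumptions for illustration, not the paper's models): given a total bit budget, exhaustively pick the source/channel split that minimizes expected distortion.

        import numpy as np

        def expected_distortion(r_s, r_c, p0=0.1, a=0.5):
            # Toy models: source distortion 2^(-2 r_s) when the channel decoder
            # succeeds, full signal variance (1.0) when it fails; the failure
            # probability decays exponentially with the number of parity bits.
            p_fail = p0 * np.exp(-a * r_c)
            return (1.0 - p_fail) * 2.0 ** (-2 * r_s) + p_fail * 1.0

        def best_split(total_bits):
            splits = [(r_s, total_bits - r_s) for r_s in range(total_bits + 1)]
            return min(splits, key=lambda s: expected_distortion(*s))

        for budget in (4, 8, 16):
            r_s, r_c = best_split(budget)
            print(budget, (r_s, r_c), round(expected_distortion(r_s, r_c), 6))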

  5. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Full Text Available Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  6. The Astrophysics Source Code Library: Supporting software publication and citation

    Science.gov (United States)

    Allen, Alice; Teuben, Peter

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net), established in 1999, is a free online registry for source codes used in research that has appeared in, or been submitted to, peer-reviewed publications. The ASCL is indexed by the SAO/NASA Astrophysics Data System (ADS) and Web of Science and is citable by using the unique ascl ID assigned to each code. In addition to registering codes, the ASCL can house archive files for download and assign them DOIs. The ASCL advocates for software citation on par with article citation, participates in multidisciplinary events such as Force11, OpenCon, and the annual Workshop on Sustainable Software for Science, works with journal publishers, and organizes Special Sessions and Birds of a Feather meetings at national and international conferences such as Astronomical Data Analysis Software and Systems (ADASS), European Week of Astronomy and Space Science, and AAS meetings. In this presentation, I will discuss some of the challenges of gathering credit for publishing software and ideas and efforts from other disciplines that may be useful to astronomy.

  7. Distributed power sources for Mars colonization

    International Nuclear Information System (INIS)

    Miley, George H.; Shaban, Yasser

    2003-01-01

    One of the fundamental needs for Mars colonization is an abundant source of energy. The total energy system will probably use a mixture of sources based on solar energy, fuel cells, and nuclear energy. Here we concentrate on the possibility of developing a distributed system employing several unique new types of nuclear energy sources, specifically small fusion devices using inertial electrostatic confinement and portable 'battery type' proton reaction cells

  8. Verification test calculations for the Source Term Code Package

    International Nuclear Information System (INIS)

    Denning, R.S.; Wooton, R.O.; Alexander, C.A.; Curtis, L.A.; Cybulskis, P.; Gieseke, J.A.; Jordan, H.; Lee, K.W.; Nicolosi, S.L.

    1986-07-01

    The purpose of this report is to demonstrate the reasonableness of the Source Term Code Package (STCP) results. Hand calculations have been performed spanning a wide variety of phenomena within the context of a single accident sequence, a loss of all ac power with late containment failure, in the Peach Bottom (BWR) plant, and compared with STCP results. The report identifies some of the limitations of the hand calculation effort. The processes involved in a core meltdown accident are complex and coupled. Hand calculations by their nature must deal with gross simplifications of these processes. Their greatest strength is as an indicator that a computer code contains an error, for example that it doesn't satisfy basic conservation laws, rather than in showing the analysis accurately represents reality. Hand calculations are an important element of verification but they do not satisfy the need for code validation. The code validation program for the STCP is a separate effort. In general the hand calculation results show that models used in the STCP codes (e.g., MARCH, TRAP-MELT, VANESA) obey basic conservation laws and produce reasonable results. The degree of agreement and significance of the comparisons differ among the models evaluated. 20 figs., 26 tabs

  9. Tangent: Automatic Differentiation Using Source Code Transformation in Python

    OpenAIRE

    van Merriënboer, Bart; Wiltschko, Alexander B.; Moldovan, Dan

    2017-01-01

    Automatic differentiation (AD) is an essential primitive for machine learning programming systems. Tangent is a new library that performs AD using source code transformation (SCT) in Python. It takes numeric functions written in a syntactic subset of Python and NumPy as input, and generates new Python functions which calculate a derivative. This approach to automatic differentiation is different from existing packages popular in machine learning, such as TensorFlow and Autograd. Advantages ar...
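
    A toy source-code-transformation differentiator can be written with Python's own ast module (this is a hedged illustration of the SCT idea, not Tangent's implementation; it supports only +, *, constants, and the variable x, and ast.unparse requires Python 3.9+).

        import ast

        def d(node):
            # Symbolic derivative with respect to 'x' of a tiny expression AST.
            if isinstance(node, ast.Constant):
                return ast.Constant(0.0)
            if isinstance(node, ast.Name):
                return ast.Constant(1.0 if node.id == 'x' else 0.0)
            if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
                return ast.BinOp(d(node.left), ast.Add(), d(node.right))
            if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Mult):
                # product rule: (uv)' = u'v + uv'
                return ast.BinOp(ast.BinOp(d(node.left), ast.Mult(), node.right),
                                 ast.Add(),
                                 ast.BinOp(node.left, ast.Mult(), d(node.right)))
            raise NotImplementedError(ast.dump(node))

        def grad_source(expr):
            tree = ast.parse(expr, mode='eval')
            dtree = ast.Expression(d(tree.body))
            ast.fix_missing_locations(dtree)
            return ast.unparse(dtree)    # new source code for the derivative

        src = 'x * x + 3.0 * x'
        print(grad_source(src))          # e.g. 1.0 * x + x * 1.0 + (0.0 * x + 3.0 * 1.0)
        x = 2.0
        assert eval(grad_source(src)) == 7.0   # d/dx at x = 2 is 2*2 + 3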

  10. Source distribution dependent scatter correction for PVI

    International Nuclear Information System (INIS)

    Barney, J.S.; Harrop, R.; Dykstra, C.J.

    1993-01-01

    Source distribution dependent scatter correction methods which incorporate different amounts of information about the source position and material distribution have been developed and tested. The techniques use image to projection integral transformation incorporating varying degrees of information on the distribution of scattering material, or convolution subtraction methods, with some information about the scattering material included in one of the convolution methods. To test the techniques, the authors apply them to data generated by Monte Carlo simulations which use geometric shapes or a voxelized density map to model the scattering material. Source position and material distribution have been found to have some effect on scatter correction. An image to projection method which incorporates a density map produces accurate scatter correction but is computationally expensive. Simpler methods, both image to projection and convolution, can also provide effective scatter correction
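
    As a point of reference for the convolution family mentioned above, here is a generic convolution-subtraction sketch (the Gaussian kernel, scatter fraction, iteration count, and use of SciPy are illustrative assumptions and do not reflect the paper's source-distribution-dependent methods).

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def convolution_subtraction(projection, scatter_fraction=0.3,
                                    sigma=8.0, n_iter=3):
            # Estimate the scatter component as a blurred, scaled copy of the
            # current primary estimate and subtract it; iterate to refine.
            primary = projection.copy()
            for _ in range(n_iter):
                scatter = scatter_fraction * gaussian_filter(primary, sigma)
                primary = np.clip(projection - scatter, 0.0, None)
            return primary

        proj = np.zeros((64, 64))
        proj[24:40, 24:40] = 100.0          # toy projection of a hot square
        corrected = convolution_subtraction(proj)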

  11. The effect of energy distribution of external source on source multiplication in fast assemblies

    International Nuclear Information System (INIS)

    Karam, R.A.; Vakilian, M.

    1976-02-01

    The essence of this study is the effect of the energy distribution of a source on the detection rate as a function of K effective in fast assemblies. This effectiveness, as a function of K, was studied in a fission chamber using the ABN cross-section set and the Mach 1 code. It was found that with a source which has a fission spectrum, the reciprocal count rate versus mass relationship is linear down to K effective 0.59. For a thermal source, the linearity was never achieved. (author)
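
    The linearity in question follows from the idealized point-model multiplication relation, illustrated below (a minimal sketch under a point-kinetics assumption with a spectrum-matched source; a thermal source driving a fast assembly breaks exactly this proportionality).

        S = 1.0e4                        # external source emission rate (arbitrary)
        for k in (0.50, 0.59, 0.70, 0.80, 0.90, 0.95):
            rate = S / (1.0 - k)         # multiplied detector response, point model
            print(f'K effective {k:.2f}:  1/M = {S / rate:.2f}')   # equals 1 - K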

  12. Over-Distribution in Source Memory

    Science.gov (United States)

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

    Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data, which predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  13. Asymmetric Joint Source-Channel Coding for Correlated Sources with Blind HMM Estimation at the Receiver

    Directory of Open Access Journals (Sweden)

    Ser Javier Del

    2005-01-01

    Full Text Available We consider the case of two correlated sources whose correlation has memory, modelled by a hidden Markov chain. The paper studies the problem of reliable communication of the information sent by one source over an additive white Gaussian noise (AWGN) channel when the output of the other source is available as side information at the receiver. We assume that the receiver has no a priori knowledge of the correlation statistics between the sources. In particular, we propose the use of a turbo code for joint source-channel coding of the first source. The joint decoder uses an iterative scheme where the unknown parameters of the correlation model are estimated jointly within the decoding process. It is shown that reliable communication is possible at signal-to-noise ratios close to the theoretical limits set by the combination of the Shannon and Slepian-Wolf theorems.

  14. Towards Holography via Quantum Source-Channel Codes

    Science.gov (United States)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  15. Health physics source document for codes of practice

    International Nuclear Information System (INIS)

    Pearson, G.W.; Meggitt, G.C.

    1989-05-01

    Personnel preparing codes of practice often require basic Health Physics information or advice relating to radiological protection problems, and this document is written primarily to supply such information. Certain technical terms used in the text are explained in the extensive glossary. Due to the pace of change in the field of radiological protection it is difficult to produce an up-to-date document. This document was compiled during 1988, however, and therefore contains the principal changes brought about by the introduction of the Ionising Radiations Regulations (1985). The paper covers the nature of ionising radiation, its biological effects and the principles of control. It is hoped that the document will provide a useful source of information for both codes of practice and wider areas and stimulate readers to study radiological protection issues in greater depth. (author)

  16. Running the source term code package in Elebra MX-850

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-01-01

    The Source Term Code Package (STCP) is one of the main tools applied in calculations of the behavior of fission products from nuclear power plants. It is a set of computer codes to assist the calculation of the radioactive materials released from the metallic containment of power reactors to the environment during a severe reactor accident. The original version of STCP runs on SDC computer systems, but as it has been written in FORTRAN 77, it is possible to run it on other systems such as IBM, Burroughs, Elebra, etc. The Elebra MX-850 version of STCP contains 5 codes: March 3, Trapmelt, Tcca, Vanessa and Nava. The example presented in this report takes into consideration a small LOCA accident in a PWR-type reactor. (M.I.)

  17. Microdosimetry computation code of internal sources - MICRODOSE 1

    International Nuclear Information System (INIS)

    Li Weibo; Zheng Wenzhong; Ye Changqing

    1995-01-01

    This paper describes a microdosimetry computation code, MICRODOSE 1, on the basis of the following methods: (1) the method of calculating f1(z) for charged particles in unit-density tissues; (2) the method of calculating f(z) for a point source; (3) the method of applying Fourier transform theory to the calculation of the compound Poisson process; (4) the method of using the fast Fourier transform technique to determine f(z); and gives some computed examples based on the code, MICRODOSE 1, including alpha particles emitted from 239Pu in the alveolar lung tissues and from the radon progeny RaA and RaC in the human respiratory tract. (author). 13 refs., 6 figs.

  18. Microseism Source Distribution Observed from Ireland

    Science.gov (United States)

    Craig, David; Bean, Chris; Donne, Sarah; Le Pape, Florian; Möllhoff, Martin

    2017-04-01

    Ocean generated microseisms (OGM) are recorded globally with similar spectral features observed everywhere. The generation mechanism for OGM and their subsequent propagation to continental regions has led to their use as a proxy for sea-state characteristics. Also many modern seismological methods make use of OGM signals. For example, the Earth's crust and upper mantle can be imaged using "ambient noise tomography". For many of these methods an understanding of the source distribution is necessary to properly interpret the results. OGM recorded on near coastal seismometers are known to be related to the local ocean wavefield. However, contributions from more distant sources may also be present. This is significant for studies attempting to use OGM as a proxy for sea-state characteristics such as significant wave height. Ireland has a highly energetic ocean wave climate and is close to one of the major source regions for OGM. This provides an ideal location to study an OGM source region in detail. Here we present the source distribution observed from seismic arrays in Ireland. The region is shown to consist of several individual source areas. These source areas show some frequency dependence and generally occur at or near the continental shelf edge. We also show some preliminary results from an off-shore OBS network to the North-West of Ireland. The OBS network includes instruments on either side of the shelf and should help interpret the array observations.

  19. COMPASS: A source term code for investigating capillary barrier performance

    International Nuclear Information System (INIS)

    Zhou, Wei; Apted, J.J.

    1996-01-01

    A computer code, COMPASS, based on a compartment model approach, has been developed to calculate the near-field source term of a high-level waste repository under unsaturated conditions. COMPASS is applied to evaluate the expected performance of Richard's (capillary) barriers as backfills to divert infiltrating groundwater at Yucca Mountain. Comparing the release rates of four typical nuclides with and without the Richard's barrier, it is shown that the Richard's barrier significantly decreases the peak release rates from the Engineered Barrier System (EBS) into the host rock

  20. Time-dependent anisotropic external sources in transient 3-D transport code TORT-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    This paper describes the implementation of a time-dependent distributed external source in TORT-TD by explicitly considering the external source in the "fixed-source" term of the implicitly time-discretised 3-D discrete ordinates transport equation. Anisotropy of the external source is represented by a spherical harmonics series expansion similar to the angular fluxes. The YALINA-Thermal subcritical assembly serves as a test case. The configuration with 280 fuel rods has been analysed with TORT-TD using cross sections in 18 energy groups and P1 scattering order generated by the KAPROS code system. Good agreement is achieved concerning the multiplication factor. The response of the system to an artificial time-dependent source consisting of two square-wave pulses demonstrates the time-dependent external source capability of TORT-TD. The result is physically plausible as judged from validation calculations. (orig.)

  1. Side Information and Noise Learning for Distributed Video Coding using Optical Flow and Clustering

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Rakêt, Lars Lau; Huang, Xin

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The coding efficiency of DVC critically depends on the quality of side information generation and accuracy of noise modeling. This paper considers...... Transform Domain Wyner-Ziv (TDWZ) coding and proposes using optical flow to improve side information generation and clustering to improve noise modeling. The optical flow technique is exploited at the decoder side to compensate weaknesses of block based methods, when using motion-compensation to generate...... side information frames. Clustering is introduced to capture cross band correlation and increase local adaptivity in the noise modeling. This paper also proposes techniques to learn from previously decoded (WZ) frames. Different techniques are combined by calculating a number of candidate soft side...
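
    A heavily simplified flavor of flow-based side information generation is sketched below using OpenCV's Farneback flow as a stand-in for the paper's flow method; the half-way warping convention, interpolation choice, and parameters are all assumptions.

        import cv2
        import numpy as np

        def side_information(prev_frame, next_frame):
            # prev_frame/next_frame: grayscale uint8 key frames of equal size.
            # Estimate dense flow between them, then pull pixels halfway along
            # the motion trajectories to interpolate the intermediate WZ frame.
            flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            h, w = prev_frame.shape
            gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
            map_x = gx + 0.5 * flow[..., 0]
            map_y = gy + 0.5 * flow[..., 1]
            return cv2.remap(next_frame, map_x, map_y, cv2.INTER_LINEAR)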

  2. Analysis of radiation field distribution in Yonggwang unit 3 with MCNP code

    International Nuclear Information System (INIS)

    Lee, Cheol Woo; Ha, Wi Ho; Shin, Chang Ho; Kim, Soon Young; Kim, Jong Kyung

    2004-01-01

    Radiation field analysis is performed for the inside of the containment building of a nuclear power plant (NPP) using the well-known MCNP code. The target NPP in this study is Yonggwang Unit 3 Cycle 8. In this work, whole transport calculations were done using MCNPX 2.4.0 due to the functional benefits, such as the Mesh Tally, that the code provides. The neutron spectra released from the operating reactor core were first evaluated as a radiation source term, and then dose distributions in the work areas of the NPP were calculated

  3. Distributed magnetic field positioning system using code division multiple access

    Science.gov (United States)

    Prigge, Eric A. (Inventor)

    2003-01-01

    An apparatus and methods for a magnetic field positioning system use a fundamentally different, and advantageous, signal structure and multiple access method, known as Code Division Multiple Access (CDMA). This signal architecture, when combined with processing methods, leads to advantages over the existing technologies, especially when applied to a system with a large number of magnetic field generators (beacons). Beacons at known positions generate coded magnetic fields, and a magnetic sensor measures a sum field and decomposes it into component fields to determine the sensor position and orientation. The apparatus and methods can have a large "building-sized" coverage area. The system allows for numerous beacons to be distributed throughout an area at a number of different locations. A method to estimate position and attitude, with no prior knowledge, uses dipole fields produced by these beacons in different locations.
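
    A toy version of the CDMA decomposition is shown below: orthogonal Walsh spreading codes let the sensor separate the summed field back into per-beacon amplitudes by correlation. The code length, amplitude values, and noiseless synchronous model are illustrative assumptions.

        import numpy as np

        def walsh(n):                    # n must be a power of two
            W = np.array([[1]])
            while W.shape[0] < n:
                W = np.block([[W, W], [W, -W]])
            return W

        codes = walsh(4)                 # one orthogonal code (row) per beacon
        amplitudes = np.array([0.8, -0.3, 1.5, 0.05])   # field strength at sensor

        chips = amplitudes @ codes                       # measured sum field
        recovered = chips @ codes.T / codes.shape[1]     # correlate per beacon
        print(recovered)                                 # -> [0.8, -0.3, 1.5, 0.05]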

  4. The dose distribution surrounding 192Ir and 137Cs seed sources

    Energy Technology Data Exchange (ETDEWEB)

    Thomason, C [Wisconsin Univ., Madison, WI (USA). Dept. of Medical Physics; Mackie, T R [Wisconsin Univ., Madison, WI (USA). Dept. of Medical Physics Wisconsin Univ., Madison, WI (USA). Dept. of Human Oncology; Lindstrom, M J [Wisconsin Univ., Madison, WI (USA). Biostatistics Center; Higgins, P D [Cleveland Clinic Foundation, OH (USA). Dept. of Radiation Oncology

    1991-04-01

    Dose distributions in water were measured using LiF thermoluminescent dosemeters for 192Ir seed sources with stainless steel and with platinum encapsulation to determine the effect of differing encapsulation. Dose distribution was measured for a 137Cs seed source. In addition, dose distributions surrounding these sources were calculated using the EGS4 Monte Carlo code and were compared to measured data. The two methods are in good agreement for all three sources. Tables are given describing dose distribution surrounding each source as a function of distance and angle. Specific dose constants were also determined from results of Monte Carlo simulation. This work confirms the use of the EGS4 Monte Carlo code in modelling 192Ir and 137Cs seed sources to obtain brachytherapy dose distributions. (author).

  5. The dose distribution surrounding 192Ir and 137Cs seed sources

    International Nuclear Information System (INIS)

    Thomason, C.; Mackie, T.R.; Wisconsin Univ., Madison, WI; Lindstrom, M.J.; Higgins, P.D.

    1991-01-01

    Dose distributions in water were measured using LiF thermoluminescent dosemeters for 192Ir seed sources with stainless steel and with platinum encapsulation to determine the effect of differing encapsulation. Dose distribution was measured for a 137Cs seed source. In addition, dose distributions surrounding these sources were calculated using the EGS4 Monte Carlo code and were compared to measured data. The two methods are in good agreement for all three sources. Tables are given describing dose distribution surrounding each source as a function of distance and angle. Specific dose constants were also determined from results of Monte Carlo simulation. This work confirms the use of the EGS4 Monte Carlo code in modelling 192Ir and 137Cs seed sources to obtain brachytherapy dose distributions. (author)

  6. Superior Coherent Receivers for AF Relaying with Distributed Alamouti Code

    KAUST Repository

    Khan, Fahd Ahmed

    2012-01-01

    Coherent receivers are derived for a pilot-symbol aided distributed Alamouti-coded system with imperfect channel state information. The derived coherent receivers do not perform channel estimation but rather use the received pilot signals for decoding. The derived receiver metrics use the statistics of the channel to give improved performance. The performance is further improved by using the decision history. Simulation results show that a performance gain of up to 1.8 dB can be achieved for the new receivers with decision history as compared with the conventional mismatched coherent receiver. © 2011 IEEE.
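
    For orientation, the underlying Alamouti combining with perfect channel knowledge (unlike the mismatched pilot-based receivers studied in the paper) looks as follows; the QPSK symbols, Rayleigh channel model, and noise level are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)

        # Distributed Alamouti: two relays jointly emulate a 2-antenna transmitter.
        s = (rng.choice([-1, 1], 2) + 1j * rng.choice([-1, 1], 2)) / np.sqrt(2)
        h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
        noise = 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

        # slot 1: relays send (s1, s2); slot 2: relays send (-s2*, s1*)
        r1 = h[0] * s[0] + h[1] * s[1] + noise[0]
        r2 = -h[0] * np.conj(s[1]) + h[1] * np.conj(s[0]) + noise[1]

        # linear combining with (here, perfectly known) channel estimates
        g = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2
        s1_hat = (np.conj(h[0]) * r1 + h[1] * np.conj(r2)) / g
        s2_hat = (np.conj(h[1]) * r1 - h[0] * np.conj(r2)) / g
        print(np.round([s1_hat, s2_hat], 2), s)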

  7. Optimization of Coding of AR Sources for Transmission Across Channels with Loss

    DEFF Research Database (Denmark)

    Arildsen, Thomas

    Source coding concerns the representation of information in a source signal using as few bits as possible. In the case of lossy source coding, it is the encoding of a source signal using the fewest possible bits at a given distortion or, equivalently, at the lowest possible distortion for a given bit rate. Channel coding is usually applied in combination with source coding to ensure reliable transmission of the (source coded) information at the maximal rate across a channel, given the properties of this channel. In this thesis, we consider the coding of auto-regressive (AR) sources, which are sources that can... compared to the case where the encoder is unaware of channel loss. We finally provide an extensive overview of cross-layer communication issues which are important to consider due to the fact that the proposed algorithm interacts with the source coding and exploits channel-related information typically

  8. Distributed coding/decoding complexity in video sensor networks.

    Science.gov (United States)

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  9. [Seasonal distribution of clinical case codes (DOC study)].

    Science.gov (United States)

    von Dercks, N; Melz, R; Hepp, P; Theopold, J; Marquass, B; Josten, C

    2017-02-01

    The German diagnosis-related groups remuneration system (G-DRG) was implemented in 2004; patient-related diagnoses and procedures lead to allocation to specific DRGs. This system includes several codes, such as case mix (CM), case mix index (CMI) and number of cases. Seasonal distribution of these codes as well as distribution of diagnoses and DRGs may lead to logistical consequences for clinical management. From 2004 to 2013 all the main diagnoses and DRGs for inpatients were recorded. Monthly and seasonal distributions were analyzed using ANOVA. The average monthly number of cases was 265 ± 25 cases, the average CM was 388.50 ± 51.75 and the average CMI was 1.46 ± 0.15, with no significant seasonal differences (p > 0.1). Concussion was the most frequently occurring main diagnosis (3739 cases) followed by fractures of the humeral head (699). Significant distribution differences could be shown for humeral head fractures in monthly (p = 0.018) and seasonal comparisons (p = 0.006) with a maximum in winter. Radius (p = 0.01) and ankle fractures (p ≤ 0.001) also occurred most frequently in winter. Non-bony lesions of the shoulder were significantly less frequent in spring (p = 0.04). The DRGs showed no evidence of monthly or seasonal clustering (p > 0.1). The significant clustering of injuries in specific months and seasons should lead to logistical consequences (e.g. operating room slots, availability of nursing and anesthesia staff). For a needs assessment, the analysis of main diagnoses is more appropriate than DRGs.

  10. Behavioral correlates of the distributed coding of spatial context.

    Science.gov (United States)

    Anderson, Michael I; Killing, Sarah; Morris, Caitlin; O'Donoghue, Alan; Onyiagha, Dikennam; Stevenson, Rosemary; Verriotis, Madeleine; Jeffery, Kathryn J

    2006-01-01

    Hippocampal place cells respond heterogeneously to elemental changes of a compound spatial context, suggesting that they form a distributed code of context, whereby context information is shared across a population of neurons. The question arises as to what this distributed code might be useful for. The present study explored two possibilities: one, that it allows contexts with common elements to be disambiguated, and the other, that it allows a given context to be associated with more than one outcome. We used two naturalistic measures of context processing in rats, rearing and thigmotaxis (boundary-hugging), to explore how rats responded to contextual novelty and to relate this to the behavior of place cells. In experiment 1, rats showed dishabituation of rearing to a novel reconfiguration of familiar context elements, suggesting that they perceived the reconfiguration as novel, a behavior that parallels that of place cells in a similar situation. In experiment 2, rats were trained in a place preference task on an open-field arena. A change in the arena context triggered renewed thigmotaxis, and yet navigation continued unimpaired, indicating simultaneous representation of both the altered contextual and constant spatial cues. Place cells similarly exhibited a dual population of responses, consistent with the hypothesis that their activity underlies spatial behavior. Together, these experiments suggest that heterogeneous context encoding (or "partial remapping") by place cells may function to allow the flexible assignment of associations to contexts, a faculty that could be useful in episodic memory encoding. Copyright (c) 2006 Wiley-Liss, Inc.

  11. Error Resilience in Current Distributed Video Coding Architectures

    Directory of Open Access Journals (Sweden)

    Tonoli Claudia

    2009-01-01

    Full Text Available In distributed video coding the signal prediction is shifted to the decoder side, therefore placing most of the computational complexity at the receiver. Moreover, since no prediction loop exists before transmission, an intrinsic robustness to transmission errors has been claimed. This work evaluates and compares the error resilience performance of two distributed video coding architectures: a video codec based on the Stanford architecture (the DISCOVER codec) and a video codec based on the PRISM architecture. Specifically, an accurate temporal and rate/distortion based evaluation of the effects of transmission errors on both DVC architectures has been performed and discussed. These approaches have also been compared with H.264/AVC, both without error protection and with simple FEC error protection. Our evaluations have highlighted in all cases a strong dependence of the behavior of the various codecs on the content of the considered video sequence. In particular, PRISM seems to be particularly well suited for low-motion sequences, whereas DISCOVER provides better performance in the other cases.

  12. A Comparison of Source Code Plagiarism Detection Engines

    Science.gov (United States)

    Lancaster, Thomas; Culwin, Fintan

    2004-06-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
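    The core technique named above is easy to sketch. Below is a minimal illustration (not taken from any of the surveyed engines, and all names are invented): submissions are tokenised, with identifiers collapsed to a single token so that renamed variables still match, and the pair is scored by its longest common token substring, a simple paired structural metric.

        import re

        TOKEN_RE = re.compile(r"[A-Za-z_]\w*|\d+|\S")
        KEYWORDS = {"if", "else", "for", "while", "return", "def", "int", "float"}

        def tokenise(source: str) -> list[str]:
            """Map source text to a token stream; all identifiers collapse to 'ID'."""
            tokens = []
            for tok in TOKEN_RE.findall(source):
                if tok in KEYWORDS:
                    tokens.append(tok)
                elif re.match(r"[A-Za-z_]", tok):
                    tokens.append("ID")      # identifier: its name is irrelevant
                elif tok.isdigit():
                    tokens.append("NUM")
                else:
                    tokens.append(tok)       # punctuation/operators kept verbatim
            return tokens

        def longest_common_substring(a: list[str], b: list[str]) -> int:
            """Classic O(len(a)*len(b)) DP for the longest common token run."""
            best = 0
            prev = [0] * (len(b) + 1)
            for i in range(1, len(a) + 1):
                cur = [0] * (len(b) + 1)
                for j in range(1, len(b) + 1):
                    if a[i - 1] == b[j - 1]:
                        cur[j] = prev[j - 1] + 1
                        best = max(best, cur[j])
                prev = cur
            return best

        s1 = "int total = 0; for (i = 0; i < n; i++) total += x[i];"
        s2 = "int acc = 0; for (k = 0; k < m; k++) acc += v[k];"
        t1, t2 = tokenise(s1), tokenise(s2)
        print(longest_common_substring(t1, t2), "of", min(len(t1), len(t2)), "tokens match")

    The two snippets differ in every identifier, yet the tokenised streams match almost completely, which is exactly why this family of metrics resists simple renaming attacks.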

  13. Source Code Verification for Embedded Systems using Prolog

    Directory of Open Access Journals (Sweden)

    Frank Flederer

    2017-01-01

    Full Text Available System relevant embedded software needs to be reliable and, therefore, well tested, especially for aerospace systems. A common technique to verify programs is the analysis of their abstract syntax tree (AST. Tree structures can be elegantly analyzed with the logic programming language Prolog. Moreover, Prolog offers further advantages for a thorough analysis: On the one hand, it natively provides versatile options to efficiently process tree or graph data structures. On the other hand, Prolog's non-determinism and backtracking eases tests of different variations of the program flow without big effort. A rule-based approach with Prolog allows to characterize the verification goals in a concise and declarative way. In this paper, we describe our approach to verify the source code of a flash file system with the help of Prolog. The flash file system is written in C++ and has been developed particularly for the use in satellites. We transform a given abstract syntax tree of C++ source code into Prolog facts and derive the call graph and the execution sequence (tree, which then are further tested against verification goals. The different program flow branching due to control structures is derived by backtracking as subtrees of the full execution sequence. Finally, these subtrees are verified in Prolog. We illustrate our approach with a case study, where we search for incorrect applications of semaphores in embedded software using the real-time operating system RODOS. We rely on computation tree logic (CTL and have designed an embedded domain specific language (DSL in Prolog to express the verification goals.
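    The translation step is the heart of the approach. The sketch below is only an analogy: it uses Python's own ast module (rather than a C++ front end) to show how function definitions and call sites can be emitted as Prolog-style facts, from which call-graph rules could then be derived by backtracking; the tiny semaphore-like program is invented for illustration.

        import ast

        SOURCE = """
        def acquire(sem): use(sem)
        def use(sem): release(sem)
        def release(sem): pass
        """

        tree = ast.parse(SOURCE)
        facts = []
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                facts.append(f"function({node.name}).")
                for inner in ast.walk(node):
                    if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                        facts.append(f"calls({node.name}, {inner.func.id}).")

        print("\n".join(facts))
        # The emitted facts could then be consulted by Prolog rules such as:
        #   reachable(A, B) :- calls(A, B).
        #   reachable(A, C) :- calls(A, B), reachable(B, C).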

  14. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    Science.gov (United States)

    Yahampath, Pradeepa

    2017-12-01

    Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission however is optimal at all CSNRs, if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. Analog part uses linear encoding to transmit the quantization error which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.

  15. Effect of source angular distribution on the evaluation of gamma-ray skyshine

    Energy Technology Data Exchange (ETDEWEB)

    Sheu, R.D.; Jiang, S.H. [Dept. of Engineering and System Science, National Tsing Hua Univ., Taiwan (China); Chang, B.J.; Chen, I.J. [Division of Health Physics, Inst. of Nuclear Energy Research, Taiwan (China)

    2000-03-01

    The effect of the angular distribution of the equivalent point source on the analysis of skyshine dose rates was investigated in detail. The dedicated skyshine codes SKYDOSE and McSKY were revised to include the capability of dealing with an anisotropic source. It was found that replacing the cosine-distributed source with an isotropic source overestimates the skyshine dose rates for large roof-subtended angles and causes underestimation for small roof-subtended angles. For buildings with roof shielding, however, replacing the cosine-distributed source with an isotropic source always underestimates the skyshine dose rates. The skyshine dose rates from a volume source calculated by the dedicated skyshine code agree very well with those of the MCNP Monte Carlo calculation. (author)
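    The direction-sampling difference behind this effect can be illustrated with a few lines of Monte Carlo (illustrative only, not from SKYDOSE or McSKY): for an isotropic hemisphere source, mu = cos(theta) is uniform on [0, 1], while a cosine-law source is sampled as mu = sqrt(u).

        import math
        import random

        def cone_fraction(theta0_deg: float, cosine_law: bool, n: int = 200_000) -> float:
            """Fraction of emissions within polar angle theta0 of the vertical."""
            mu0 = math.cos(math.radians(theta0_deg))
            hits = 0
            for _ in range(n):
                u = random.random()
                mu = math.sqrt(u) if cosine_law else u   # mu = cos(theta)
                if mu >= mu0:
                    hits += 1
            return hits / n

        for theta0 in (20, 45, 80):
            iso = cone_fraction(theta0, cosine_law=False)
            cos = cone_fraction(theta0, cosine_law=True)
            print(f"theta0={theta0:2d} deg: isotropic {iso:.3f}  cosine {cos:.3f}")

    The cosine-law source concentrates emission toward the vertical: within a narrow cone it emits up to twice the isotropic fraction, while near-horizontal directions receive correspondingly less, which is why substituting an isotropic source shifts skyshine estimates in opposite directions depending on the roof-subtended angle.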

  16. Quantum key distribution with entangled photon sources

    International Nuclear Information System (INIS)

    Ma Xiongfeng; Fung, Chi-Hang Fred; Lo, H.-K.

    2007-01-01

    A parametric down-conversion (PDC) source can be used as either a triggered single-photon source or an entangled-photon source in quantum key distribution (QKD). The triggering PDC QKD has already been studied in the literature. On the other hand, a model and a post-processing protocol for the entanglement PDC QKD are still missing. We fill in this important gap by proposing such a model and a post-processing protocol for the entanglement PDC QKD. Although the PDC model is proposed to study the entanglement-based QKD, we emphasize that our generic model may also be useful for other non-QKD experiments involving a PDC source. Since an entangled PDC source is a basis-independent source, we apply Koashi and Preskill's security analysis to the entanglement PDC QKD. We also investigate the entanglement PDC QKD with two-way classical communications. We find that the recurrence scheme increases the key rate and the Gottesman-Lo protocol helps tolerate higher channel losses. By simulating a recent 144-km open-air PDC experiment, we compare three implementations: entanglement PDC QKD, triggering PDC QKD, and coherent-state QKD. The simulation result suggests that the entanglement PDC QKD can tolerate higher channel losses than the coherent-state QKD. The coherent-state QKD with decoy states achieves the highest key rate in the low- and medium-loss regions. By applying the Gottesman-Lo two-way post-processing protocol, the entanglement PDC QKD can tolerate up to 70 dB combined channel losses (35 dB for each channel) provided that the PDC source is placed in between Alice and Bob. After considering statistical fluctuations, the PDC setup can tolerate up to 53 dB channel losses.

  17. Investigating The Neutron Flux Distribution Of The Miniature Neutron Source Reactor MNSR Type

    International Nuclear Information System (INIS)

    Nguyen Hoang Hai; Do Quang Binh

    2011-01-01

    Neutron flux distribution is an important characteristic of a nuclear reactor. In this article, the four-energy-group neutron flux distributions of the miniature neutron source reactor (MNSR) type along the radial and axial directions are investigated for the case where the control rod is fully withdrawn. In addition, the effect of the control rod position on the thermal neutron flux distribution is also studied. The group constants for all reactor components are generated with the WIMSD code, and the neutron flux distributions are calculated with the CITATION code. The results show that the control rod position affects the flux distribution only in the region around the control rod. (author)

  18. Perceived loudness of spatially distributed sound sources

    DEFF Research Database (Denmark)

    Song, Woo-keun; Ellermeier, Wolfgang; Minnaar, Pauli

    2005-01-01

    psychoacoustic attributes into account. Therefore, a method for deriving loudness maps was developed in an earlier study [Song, Internoise2004, paper 271]. The present experiment investigates to which extent perceived loudness depends on the distribution of individual sound sources. Three loudspeakers were positioned 1.5 m from the centre of the listener's head, one straight ahead, and two 10 degrees to the right and left, respectively. Six participants matched the loudness of either one, or two simultaneous sounds (narrow-band noises with 1-kHz and 3.15-kHz centre frequencies) to a 2-kHz, 60-dB SPL narrow-band noise placed in the frontal loudspeaker. The two sounds were either originating from the central speaker, or from the two offset loudspeakers. It turned out that the subjects perceived the noises to be softer when they were distributed in space. In addition, loudness was calculated from the recordings...

  19. Distributed quantum computing with single photon sources

    International Nuclear Information System (INIS)

    Beige, A.; Kwek, L.C.

    2005-01-01

    Full text: Distributed quantum computing requires the ability to perform nonlocal gate operations between the distant nodes (stationary qubits) of a large network. To achieve this, it has been proposed to interconvert stationary qubits with flying qubits. In contrast to this, we show that distributed quantum computing only requires the ability to encode stationary qubits into flying qubits but not the conversion of flying qubits into stationary qubits. We describe a scheme for the realization of an eventually deterministic controlled phase gate by performing measurements on pairs of flying qubits. Our scheme could be implemented with a linear optics quantum computing setup including sources for the generation of single photons on demand, linear optics elements and photon detectors. In the presence of photon loss and finite detector efficiencies, the scheme could be used to build large cluster states for one way quantum computing with a high fidelity. (author)

  20. Conflict free network coding for distributed storage networks

    KAUST Repository

    Al-Habob, Ahmed A.

    2015-06-01

    © 2015 IEEE. In this paper, we design a conflict free instantly decodable network coding (IDNC) solution for file download from distributed storage servers. Considering previously downloaded files at the clients from these servers as side information, IDNC can speed up the current download process. However, transmission conflicts can occur since multiple servers can simultaneously send IDNC combinations of files to the same client, which can tune to only one of them at a time. To avoid such conflicts and design more efficient coded download patterns, we propose a dual conflict IDNC graph model, which extends the conventional IDNC graph model in order to guarantee conflict free server transmissions to each of the clients. We then formulate the download time minimization problem as a stochastic shortest path problem whose action space is defined by the independent sets of this new graph. Given the intractability of the solution, we design a channel-aware heuristic algorithm and show that it achieves a considerable reduction in the file download time, compared to applying the conventional IDNC approach separately at each of the servers.

  1. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWT's, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  2. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWT's, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  3. Modelling RF sources using 2-D PIC codes

    International Nuclear Information System (INIS)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWT's, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation

  4. Uncertainty analysis methods for quantification of source terms using a large computer code

    International Nuclear Information System (INIS)

    Han, Seok Jung

    1997-02-01

    Quantification of uncertainties in source term estimations by large computer codes, such as MELCOR and MAAP, is an essential part of current probabilistic safety assessments (PSAs). The main objectives of the present study are (1) to investigate the applicability of a combined procedure of the response surface method (RSM), based on input determined from a statistical design, and the Latin hypercube sampling (LHS) technique for the uncertainty analysis of CsI release fractions under a hypothetical severe accident sequence of a station blackout at the Young-Gwang nuclear power plant, using the MAAP3.0B code as a benchmark problem; and (2) to propose a new measure of uncertainty importance based on distributional sensitivity analysis. On the basis of the results obtained in the present work, the RSM is recommended as the principal tool for an overall uncertainty analysis in source term quantifications, while the LHS is used in the calculations of standardized regression coefficients (SRC) and standardized rank regression coefficients (SRRC) to determine the subset of the most important input parameters in the final screening step and to check the cumulative distribution functions (cdfs) obtained by the RSM. Verification of the response surface model for sufficient accuracy is a prerequisite for the reliability of the final results obtained by the combined procedure proposed in the present work. In the present study a new measure has been developed, utilizing the metric distance between cumulative distribution functions (cdfs). The measure has been evaluated for three different cases of distributions in order to assess its characteristics: in the first two cases the distributions are known analytically, while in the third the distribution is unknown. The first case uses symmetric analytical distributions; the second consists of two asymmetric distributions with non-zero skewness.
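    As a reference for the sampling step, here is a minimal Latin hypercube sampler with uniform marginals (the study's actual inputs, such as MAAP parameters, are not reproduced here): each dimension is split into n equal-probability strata, one point is drawn per stratum, and the strata are permuted independently across dimensions. Non-uniform marginals would be obtained by pushing the columns through the inverse CDFs of the input distributions.

        import numpy as np

        def latin_hypercube(n: int, d: int, seed=None) -> np.ndarray:
            """n samples in d dimensions, one point per equal-probability stratum."""
            rng = np.random.default_rng(seed)
            samples = np.empty((n, d))
            for k in range(d):
                strata = (np.arange(n) + rng.random(n)) / n   # one point per stratum
                samples[:, k] = rng.permutation(strata)        # decouple dimensions
            return samples

        X = latin_hypercube(n=10, d=3, seed=42)
        print(X.round(3))   # every column has exactly one point in each decile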

  5. Power Allocation Strategies for Distributed Space-Time Codes in Amplify-and-Forward Mode

    Directory of Open Access Journals (Sweden)

    Are Hjørungnes

    2009-01-01

    Full Text Available We consider a wireless relay network with Rayleigh fading channels and apply distributed space-time coding (DSTC in amplify-and-forward (AF mode. It is assumed that the relays have statistical channel state information (CSI of the local source-relay channels, while the destination has full instantaneous CSI of the channels. It turns out that, combined with the minimum SNR based power allocation in the relays, AF DSTC results in a new opportunistic relaying scheme, in which the best relay is selected to retransmit the source's signal. Furthermore, we have derived the optimum power allocation between two cooperative transmission phases by maximizing the average received SNR at the destination. Next, assuming M-PSK and M-QAM modulations, we analyze the performance of cooperative diversity wireless networks using AF opportunistic relaying. We also derive an approximate formula for the symbol error rate (SER of AF DSTC. Assuming the use of full-diversity space-time codes, we derive two power allocation strategies minimizing the approximate SER expressions, for constrained transmit power. Our analytical results have been confirmed by simulation results, using full-rate, full-diversity distributed space-time codes.
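    The flavor of the two-phase power optimization can be reproduced numerically. The sketch below is a hedged toy model, not the paper's derivation: it uses the standard two-hop amplify-and-forward SNR approximation for a single relay and grid-searches the broadcast-phase power fraction that maximizes the Monte Carlo average received SNR; all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        P_total, n = 10.0, 100_000                   # total SNR budget, fading draws
        h1 = rng.exponential(size=n)                 # |h_sr|^2: Rayleigh fading power
        h2 = rng.exponential(size=n)                 # |h_rd|^2

        def avg_snr(rho: float) -> float:
            g1 = rho * P_total * h1                  # source -> relay SNR
            g2 = (1 - rho) * P_total * h2            # relay -> destination SNR
            return float(np.mean(g1 * g2 / (g1 + g2 + 1)))  # AF end-to-end SNR

        grid = np.linspace(0.05, 0.95, 19)
        best = max(grid, key=avg_snr)
        print(f"best broadcast-phase fraction ~ {best:.2f}, avg SNR {avg_snr(best):.2f}")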

  6. Schroedinger’s Code: A Preliminary Study on Research Source Code Availability and Link Persistence in Astrophysics

    Science.gov (United States)

    Allen, Alice; Teuben, Peter J.; Ryan, P. Wesley

    2018-05-01

    We examined software usage in a sample set of astrophysics research articles published in 2015 and searched for the source codes for the software mentioned in these research papers. We categorized the software to indicate whether the source code is available for download and whether there are restrictions to accessing it, and if the source code is not available, whether some other form of the software, such as a binary, is. We also extracted hyperlinks from one journal's 2015 research articles, as links in articles can serve as an acknowledgment of software use and lead to the data used in the research, and tested them to determine which of these URLs are still accessible. For our sample of 715 software instances in the 166 articles we examined, we were able to categorize 418 records according to whether source code was available and found that 285 unique codes were used, 58% of which offered the source code for download. Of the 2558 hyperlinks extracted from 1669 research articles, at best 90% of them were available over our testing period.

  7. SOURCES-3A: A code for calculating (α, n), spontaneous fission, and delayed neutron sources and spectra

    International Nuclear Information System (INIS)

    Perry, R.T.; Wilson, W.B.; Charlton, W.S.

    1998-04-01

    In many systems, it is imperative to have accurate knowledge of all significant sources of neutrons due to the decay of radionuclides. These sources can include neutrons resulting from the spontaneous fission of actinides, the interaction of actinide decay α-particles in (α,n) reactions with low- or medium-Z nuclides, and/or delayed neutrons from the fission products of actinides. Numerous systems exist in which these neutron sources could be important. These include, but are not limited to, clean and spent nuclear fuel (UO2, ThO2, MOX, etc.), enrichment plant operations (UF6, PuF4, etc.), waste tank studies, waste products in borosilicate glass or glass-ceramic mixtures, and weapons-grade plutonium in storage containers. SOURCES-3A is a computer code that determines neutron production rates and spectra from (α,n) reactions, spontaneous fission, and delayed neutron emission due to the decay of radionuclides in homogeneous media (i.e., a mixture of α-emitting source material and low-Z target material) and in interface problems (i.e., a slab of α-emitting source material in contact with a slab of low-Z target material). The code is also capable of calculating the neutron production rates due to (α,n) reactions induced by a monoenergetic beam of α-particles incident on a slab of target material. Spontaneous fission spectra are calculated with evaluated half-life, spontaneous fission branching, and Watt spectrum parameters for 43 actinides. The (α,n) spectra are calculated using an assumed isotropic angular distribution in the center-of-mass system with a library of 89 nuclide decay α-particle spectra, 24 sets of measured and/or evaluated (α,n) cross sections and product nuclide level branching fractions, and functional α-particle stopping cross sections for Z < 106. The delayed neutron spectra are taken from an evaluated library of 105 precursors. The code outputs the magnitude and spectra of the resultant neutron source. It also provides an
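    One ingredient above, sampling neutron energies from a Watt spontaneous-fission spectrum, can be shown compactly. The sketch uses the well-known Maxwellian-shift sampling identity; the a and b values are placeholders of a plausible magnitude, not parameters from the code's evaluated library.

        import math
        import random

        def sample_maxwellian(T: float) -> float:
            """Maxwellian energy spectrum p(E) ~ sqrt(E) * exp(-E/T)."""
            r1, r2, r3 = random.random(), random.random(), random.random()
            return -T * (math.log(r1) + math.log(r2) * math.cos(math.pi * r3 / 2) ** 2)

        def sample_watt(a: float, b: float) -> float:
            """Watt spectrum p(E) ~ exp(-E/a) * sinh(sqrt(b*E)): a Maxwellian
            energy plus an isotropic fragment-motion shift of a*a*b/4."""
            w = sample_maxwellian(a)
            return w + a * a * b / 4 + (2 * random.random() - 1) * math.sqrt(a * a * b * w)

        a, b = 1.0, 2.0    # MeV, 1/MeV: placeholder values, NOT library parameters
        energies = [sample_watt(a, b) for _ in range(50_000)]
        print(f"sampled mean energy: {sum(energies) / len(energies):.2f} MeV")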

  8. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    Science.gov (United States)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting the complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with the decoding of the factor graph-based DVC code. Among the various approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity than sampling methods.

  9. Adaptive distributed video coding with correlation estimation using expectation propagation

    Science.gov (United States)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting the complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with the decoding of the factor graph-based DVC code. Among the various approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity than sampling methods.

  10. OSSMETER D3.4 – Language-Specific Source Code Quality Analysis

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim); H.J.S. Basten (Bas)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and prototypes of the tools that are needed for source code quality analysis in open source software projects. It builds upon the results of: • Deliverable 3.1 where infra-structure and

  11. A method for scientific code coupling in a distributed environment

    International Nuclear Information System (INIS)

    Caremoli, C.; Beaucourt, D.; Chen, O.; Nicolas, G.; Peniguel, C.; Rascle, P.; Richard, N.; Thai Van, D.; Yessayan, A.

    1994-12-01

    This guide book deals with the coupling of large scientific codes. First, the context is introduced: large scientific codes devoted to a specific discipline are coming to maturity, while needs in terms of multi-discipline studies keep growing. Different kinds of code coupling are then described, together with an example: the coupling of the 3D thermal-hydraulic code THYC with the 3D neutronics code COCCINELLE. This example serves to identify the problems that must be solved to realize a coupling. The numerical methods usable for the resolution of the coupling terms are presented, which leads to two kinds of coupling: weak coupling, for which explicit methods can be used, and strong coupling, which requires implicit methods. In both cases, the link with the way the codes are parallelized is analyzed. For the translation of data from one code to another, the notion of a Standard Coupling Interface is defined, based on a general data structure. This general structure constitutes an intermediary between the codes, allowing a relative independence of the codes from any specific coupling. The proposed method for implementing a coupling leads to a simultaneous run of the different codes, during which they exchange data. Two kinds of data communication with message exchange are proposed: direct communication between codes using the PVM product (Parallel Virtual Machine), and indirect communication through a coupling tool. The second way, based on a general code coupling tool, is strongly recommended. The method rests on two principles: re-usability, meaning few modifications to existing codes, and the definition of a code usable for coupling, which separates the design of a code usable for coupling from the realization of a specific coupling. This coupling tool, available from the beginning of 1994, is described in general terms. (authors). figs., tabs
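    A toy rendering of the weak (explicit) coupling scheme may help: two stand-in solvers advance in lockstep and exchange interface data through a minimal mailbox object playing the role of the Standard Coupling Interface. All names and the physics below are invented for illustration; a real coupling would exchange full fields via PVM or the coupling tool described above.

        class CouplingInterface:
            """Stands in for the 'Standard Coupling Interface': a typed mailbox."""
            def __init__(self):
                self.data = {}
            def put(self, name, value): self.data[name] = value
            def get(self, name): return self.data[name]

        def thermal_step(T, power):            # crude stand-in for a T-H code
            return T + 0.1 * (power - 0.5 * T)

        def neutronics_step(T):                # crude stand-in for a neutronics code
            return 1.0 / (1.0 + 0.01 * T)      # power falls as temperature rises

        iface = CouplingInterface()
        T, power = 300.0, 1.0
        for step in range(200):                # weak coupling: exchange, then advance
            iface.put("power", power)
            T = thermal_step(T, iface.get("power") * 100)
            iface.put("temperature", T)
            power = neutronics_step(iface.get("temperature"))
        print(f"coupled steady state: T={T:.1f}, power={power:.3f}")

    Strong coupling would instead iterate the exchange within each time step until the interface values agree, the implicit analogue of this explicit loop.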

  12. Isodose distributions and dose uniformity in the Portuguese gamma irradiation facility calculated using the MCNP code

    CERN Document Server

    Oliveira, C

    2001-01-01

    A systematic study of isodose distributions and dose uniformity in sample carriers of the Portuguese Gamma Irradiation Facility was carried out using the MCNP code. The absorbed dose rate, the gamma flux per energy interval and the average gamma energy were calculated. For comparison purposes, boxes filled with air and 'dummy' boxes loaded with layers of folded and crumpled newspapers, arranged to achieve a given density, were used. The magnitudes of the various contributions to the total photon spectra, including source-dependent factors, irradiator structures, the sample material and other origins, were also calculated.

  13. Neutron spallation source and the Dubna cascade code

    CERN Document Server

    Kumar, V; Goel, U; Barashenkov, V S

    2003-01-01

    Neutron multiplicity per incident proton, n/p, in collisions of a high energy proton beam with voluminous Pb and W targets has been estimated with the Dubna cascade code and compared with the available experimental data for the purpose of benchmarking the code. Contributions of various atomic and nuclear processes to heat production and the isotopic yield of secondary nuclei are also estimated to assess the heat and radioactivity conditions of the targets. Results obtained from the code show excellent agreement with the experimental data at beam energies E < 1.2 GeV and differ by at most 25% at higher energies. (author)

  14. Stars with shell energy sources. Part 1. Special evolutionary code

    International Nuclear Information System (INIS)

    Rozyczka, M.

    1977-01-01

    A new version of the Henyey-type stellar evolution code is described and tested. It is shown, as a by-product of the tests, that the thermal time scale of the core of a red giant approaching the helium flash is of the order of the evolutionary time scale. The code itself appears to be a very efficient tool for investigations of the helium flash, carbon flash and the evolution of a white dwarf accreting mass. (author)

  15. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
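    The difference between the release options can be made concrete with a short numeric sketch; the rate constants and inventory below are invented for illustration and are not RESRAD-OFFSITE defaults.

        import numpy as np

        t = np.linspace(0, 100, 101)          # years
        A0, decay = 1.0, 0.01                 # initial inventory, radiological decay /yr

        # (1) first-order release: release rate proportional to remaining inventory
        leach = 0.05                          # user-specified leach rate /yr
        first_order = A0 * leach * np.exp(-(leach + decay) * t)

        # (3) uniform release: constant fraction of the initial material per year
        T_release = 40.0                      # duration of release, years
        uniform = np.where(t <= T_release, A0 / T_release * np.exp(-decay * t), 0.0)

        print("year 10 release rates: first-order %.4f, uniform %.4f"
              % (first_order[10], uniform[10]))
        # Option (2), equilibrium desorption, is not a rate law but a partitioning
        # condition: C_solid = Kd * C_water, with Kd supplied by the user.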

  16. Expected Range of Cooperation Between Transmission System Operators and Distribution System Operators After Implementation of ENTSO-E Grid Codes

    Directory of Open Access Journals (Sweden)

    Tomasz Pakulski

    2015-06-01

    Full Text Available The authors present the prospects of cooperation between transmission system operators (TSO) and distribution system operators (DSO) after the entry into force of the ENTSO-E (European Network of Transmission System Operators for Electricity) grid codes. New areas of DSO activity are presented, associated with offering the TSO aggregated services for national power system regulation based on regulation resources connected to the distribution grid, and with services at the distribution system level as part of the creation of local balancing areas (LBA). The paper also presents the possibilities of providing ancillary services by different types of distributed generation sources in the distribution network. The LBA concept, which involves integrated management of local regulation resources including generation, demand, and energy storage, is described. The options for using renewable energy sources (RES), in particular wind farms (WF) connected to the distribution system, for voltage and reactive power control in the distribution network are characterized.

  17. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Science.gov (United States)

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…

  18. OSSMETER D3.2 – Report on Source Code Activity Metrics

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and initial prototypes of the tools that are needed for source code activity analysis. It builds upon the Deliverable 3.1 where infra-structure and a domain analysis have been

  19. Distributed multi-hypothesis coding of depth maps using texture motion information and optical flow

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Zamarin, Marco; Rakêt, Lars Lau

    2013-01-01

    Distributed Video Coding (DVC) is a video coding paradigm allowing a shift of complexity from the encoder to the decoder. Depth maps are images enabling the calculation of the distance of an object from the camera, which can be used in multiview coding in order to generate virtual views, but also...

  20. Recent progress on weight distributions of cyclic codes over finite fields

    Directory of Open Access Journals (Sweden)

    Hai Q. Dinh

    2015-01-01

    Full Text Available Cyclic codes are an interesting type of linear codes and have wide applications in communication and storage systems due to their efficient encoding and decoding algorithms. In coding theory it is often desirable to know the weight distribution of a cyclic code in order to estimate its error-correcting capability and error probability. In this paper, we present recent progress on the weight distributions of cyclic codes over finite fields, which have been determined by means of exponential sums. The cyclic codes with few weights, which are very useful, are discussed and their existence conditions are listed. Furthermore, we discuss the more general case of constacyclic codes and give some equivalences to characterize their weight distributions.
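    For intuition about the object being computed, the weight distribution of a small cyclic code can be found by brute force (exponential sums are needed precisely when enumeration is infeasible). The example below uses the (7,4) binary cyclic code generated by g(x) = x^3 + x + 1, i.e. a Hamming code.

        from itertools import product

        def polymul_gf2(a, b):
            """Multiply binary polynomials given as coefficient tuples (low degree first)."""
            out = [0] * (len(a) + len(b) - 1)
            for i, ai in enumerate(a):
                if ai:
                    for j, bj in enumerate(b):
                        out[i + j] ^= bj
            return out

        g = (1, 1, 0, 1)                       # g(x) = 1 + x + x^3
        weights = {}
        for msg in product((0, 1), repeat=4):  # all message polynomials of degree < 4
            codeword = polymul_gf2(msg, g)     # m(x) g(x); degree < 7, no reduction needed
            w = sum(codeword)
            weights[w] = weights.get(w, 0) + 1
        print(sorted(weights.items()))         # expect [(0, 1), (3, 7), (4, 7), (7, 1)]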

  1. Open Genetic Code: on open source in the life sciences

    OpenAIRE

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach of genetic engineering. The first ...

  2. Computer program for source distribution process in radiation facility

    International Nuclear Information System (INIS)

    Al-Kassiri, H.; Abdul Ghani, B.

    2007-08-01

    A computer simulation of dose distribution, written in Visual Basic, has been carried out for given arrangements and activities of Co-60 sources. The program provides the dose distribution in treated products depending on the product density and the desired dose, and is useful for optimizing the source distribution during the loading process. There is good agreement between the data calculated by the program and experimental data. (Author)
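    The report's program is written in Visual Basic; the following Python fragment is only a schematic rendering of the same idea, summing an inverse-square point kernel with crude exponential attenuation over an assumed arrangement of Co-60 sources. All constants are placeholders, not the facility's calibration.

        import math

        GAMMA_CONST = 0.351     # uGy*m^2/(h*GBq): order-of-magnitude Co-60 value only
        MU_PRODUCT = 5.0        # effective attenuation per metre (invented density proxy)

        sources = [((0.0, y), 37.0) for y in (-0.5, 0.0, 0.5)]   # (position in m, GBq)

        def dose_rate(x: float, y: float) -> float:
            """Dose rate at (x, y) from all sources, bare point-kernel model."""
            total = 0.0
            for (sx, sy), activity in sources:
                r2 = (x - sx) ** 2 + (y - sy) ** 2 + 1e-6
                r = math.sqrt(r2)
                total += GAMMA_CONST * activity / r2 * math.exp(-MU_PRODUCT * r)
            return total

        for x in (0.2, 0.4, 0.6, 0.8):
            print(f"x={x:.1f} m: {dose_rate(x, 0.0):8.2f} uGy/h")
        # Summing the map over process time gives the delivered dose; scanning the
        # source arrangement then lets one optimise dose uniformity in the product.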

  3. Source Code Analysis Laboratory (SCALe) for Energy Delivery Systems

    Science.gov (United States)

    2010-12-01

    Testing and calibration laboratories that comply with ISO/IEC 17025 have demonstrated technical competence for the type of tests and calibrations SCALe undertakes [ISO/IEC 2005]. Successful conformance testing of a software system indicates that the SCALe analysis did not detect violations of rules defined by a CERT secure coding standard; conforming systems can be expected to be more secure than non-conforming systems. However, no study has yet been performed to prove this. Conformity assessment is performed in accordance with ISO/IEC 17000: "a demonstration that specified requirements relating to a product, process, system, person or body are fulfilled".

  4. Open Genetic Code : On open source in the life sciences

    NARCIS (Netherlands)

    Deibel, E.

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life

  5. Open Genetic Code: on open source in the life sciences.

    Science.gov (United States)

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach of genetic engineering. The first section discusses the greater flexibility with regard to patenting and its relationship to the introduction of open source in the life sciences. The main argument is that the ownership of knowledge in the life sciences should be reconsidered in the context of the centrality of DNA in informatic formats. This is illustrated by discussing a range of examples of open source models. The second part focuses on open source in synthetic biology as exemplary for the re-materialization of information into food, energy, medicine and so forth. The paper ends by raising the question whether another kind of alternative might be possible: one that looks at open source as a model for an alternative to the commodification of life that is understood as an attempt to comprehensively remove the restrictions from the usage of DNA in any of its formats.

  6. Sources And Compositional Distribution Of Polycyclic Aromatic ...

    African Journals Online (AJOL)

    For molecular mass 178, an anthracene to anthracene plus phenanthrene ratio ≤ 0.10 was taken as indication of petroleum related sources, while a ratio > 0.10 indicated dominance of combustion related sources. For molecular mass 202, a fluoranthene to fluoranthene plus pyrene ratio ≤ 0.50 was indication of petroleum ...
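    The two diagnostic rules transcribe directly into code; the thresholds below are exactly those quoted above, while the input concentrations are made up for the demonstration.

        def pah_source(anthracene, phenanthrene, fluoranthene, pyrene):
            """Classify a sample as petroleum- or combustion-related in origin."""
            an_ratio = anthracene / (anthracene + phenanthrene)    # molecular mass 178
            fl_ratio = fluoranthene / (fluoranthene + pyrene)      # molecular mass 202
            mass178 = "petroleum" if an_ratio <= 0.10 else "combustion"
            mass202 = "petroleum" if fl_ratio <= 0.50 else "combustion"
            return {"m178": mass178, "m202": mass202}

        print(pah_source(anthracene=2.0, phenanthrene=30.0,
                         fluoranthene=12.0, pyrene=8.0))
        # {'m178': 'petroleum', 'm202': 'combustion'} -> a mixed-source signature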

  7. Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks

    Science.gov (United States)

    Yu, Chao

    2013-01-01

    In this thesis, we consider estimation, coding, and sensor scheduling for energy efficient operation of wireless visual sensor networks (VSN), which consist of battery-powered wireless sensors with sensing (imaging), computation, and communication capabilities. The competing requirements for applications of these wireless sensor networks (WSN)…

  8. Prioritized Degree Distribution in Wireless Sensor Networks with a Network Coded Data Collection Method

    Science.gov (United States)

    Wan, Jan; Xiong, Naixue; Zhang, Wei; Zhang, Qinchao; Wan, Zheng

    2012-01-01

    The reliability of wireless sensor networks (WSNs) can be greatly affected by failures of sensor nodes due to energy exhaustion or the influence of brutal external environment conditions. Such failures seriously affect the data persistence and collection efficiency. Strategies based on network coding technology for WSNs such as LTCDS can improve the data persistence without mass redundancy. However, due to the bad intermediate performance of LTCDS, a serious 'cliff effect' may appear during the decoding period, and source data are hard to recover from sink nodes before sufficient encoded packets are collected. In this paper, the influence of the coding degree distribution strategy on the 'cliff effect' is observed, and the prioritized data storage and dissemination algorithm PLTD-ALPHA is presented to achieve better data persistence and recovery performance. With PLTD-ALPHA, the degree distribution of the data stored at sensor nodes increases with the predefined degree level, and persistent data packets can be submitted to the sink node in order of their degree. Finally, the performance of PLTD-ALPHA is evaluated, and experimental results show that PLTD-ALPHA can greatly improve the data collection performance and decoding efficiency, while data persistence is not notably affected. PMID:23235451

  9. Prioritized degree distribution in wireless sensor networks with a network coded data collection method.

    Science.gov (United States)

    Wan, Jan; Xiong, Naixue; Zhang, Wei; Zhang, Qinchao; Wan, Zheng

    2012-12-12

    The reliability of wireless sensor networks (WSNs) can be greatly affected by failures of sensor nodes due to energy exhaustion or the influence of brutal external environment conditions. Such failures seriously affect the data persistence and collection efficiency. Strategies based on network coding technology for WSNs such as LTCDS can improve the data persistence without mass redundancy. However, due to the bad intermediate performance of LTCDS, a serious 'cliff effect' may appear during the decoding period, and source data are hard to recover from sink nodes before sufficient encoded packets are collected. In this paper, the influence of the coding degree distribution strategy on the 'cliff effect' is observed, and the prioritized data storage and dissemination algorithm PLTD-ALPHA is presented to achieve better data persistence and recovery performance. With PLTD-ALPHA, the degree distribution of the data stored at sensor nodes increases with the predefined degree level, and persistent data packets can be submitted to the sink node in order of their degree. Finally, the performance of PLTD-ALPHA is evaluated, and experimental results show that PLTD-ALPHA can greatly improve the data collection performance and decoding efficiency, while data persistence is not notably affected.
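    For context, LT-type storage schemes such as LTCDS draw each encoded packet's degree from a soliton-style distribution, and the 'cliff effect' reflects how little is decodable before enough packets arrive. The sketch below samples a standard robust soliton distribution; it is background material, not the prioritized PLTD-ALPHA distribution itself.

        import math
        import random

        def robust_soliton(k: int, c: float = 0.1, delta: float = 0.5):
            """Return probabilities p[0..k] of choosing each encoding degree."""
            r = c * math.log(k / delta) * math.sqrt(k)
            pivot = int(round(k / r))
            rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
            tau = [0.0] * (k + 1)
            for d in range(1, pivot):
                tau[d] = r / (d * k)               # boost low degrees
            tau[pivot] = r * math.log(r / delta) / k  # spike at k/r
            z = sum(rho) + sum(tau)                # normalization constant
            return [(rho[d] + tau[d]) / z for d in range(k + 1)]

        k = 100
        dist = robust_soliton(k)
        degree = random.choices(range(k + 1), weights=dist)[0]
        print(f"sampled encoding degree: {degree}")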

  10. Texture side information generation for distributed coding of video-plus-depth

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Raket, Lars Lau; Zamarin, Marco

    2013-01-01

    We consider distributed video coding in a monoview video-plus-depth scenario, aiming at coding textures jointly with their corresponding depth stream. Distributed Video Coding (DVC) is a video coding paradigm in which the complexity is shifted from the encoder to the decoder, where the Side Information (SI) is generated. The video-plus-depth signal (texture and depth components) is strongly correlated, so the additional depth information may be used to generate more accurate SI for the texture stream, increasing the efficiency of the system. In this paper we propose various methods for accurate texture SI generation, comparing them with other state-of-the-art solutions...

  11. Building guide : how to build Xyce from source code.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric Richard; Russo, Thomas V.; Schiek, Richard Louis; Sholander, Peter E.; Thornquist, Heidi K.; Mei, Ting; Verley, Jason C.

    2013-08-01

    While Xyce uses the Autoconf and Automake system to configure builds, it is often necessary to perform more than the customary "./configure" builds many open source users have come to expect. This document describes the steps needed to get Xyce built on a number of common platforms.

  12. DIST: a computer code system for calculation of distribution ratios of solutes in the purex system

    Energy Technology Data Exchange (ETDEWEB)

    Tachimori, Shoichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1996-05-01

    Purex is a solvent extraction process for reprocessing spent nuclear fuel using tri-n-butyl phosphate (TBP). A computer code system DIST has been developed to calculate distribution ratios for the major solutes in the Purex process. The DIST system is composed of databases storing experimental distribution data of U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO3 and HNO2 (DISTEX) and of Zr(IV) and Tc(VII) (DISTEXFP), and of calculation programs for the distribution ratios of U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO3 and HNO2 (DIST1), and of Zr(IV) and Tc(VII) (DIST2). DIST1 and DIST2 determine, by best-fit procedures, the most appropriate values of the many parameters in the empirical equations, using the DISTEX data which fulfill the assigned conditions, and apply them to calculate distribution ratios of the respective solutes. Approximately 5,000 data records were stored in DISTEX and DISTEXFP. The present report describes: 1) specific features of the DIST1 and DIST2 codes and examples of calculations, 2) the databases DISTEX and DISTEXFP and the program DISTIN, which manages the data in DISTEX and DISTEXFP through input, search, correction and deletion functions, and, in the annex, 3) the programs DIST1 and DIST2 and the figure-drawing programs DIST1G and DIST2G, 4) a user manual for DISTIN, 5) the source programs of DIST1 and DIST2, and 6) the experimental data stored in DISTEX and DISTEXFP. (author). 122 refs.

  13. Peer-Assisted Content Distribution with Random Linear Network Coding

    DEFF Research Database (Denmark)

    Hundebøll, Martin; Ledet-Pedersen, Jeppe; Sluyterman, Georg

    2014-01-01

    Peer-to-peer networks constitute a widely used, cost-effective and scalable technology to distribute bandwidth-intensive content. The technology forms a great platform to build distributed cloud storage without the need of a central provider. However, the majority of today's peer-to-peer systems...
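    A minimal sketch of the underlying mechanism, assuming GF(2) coefficients for brevity (deployed systems typically use GF(2^8), but the rank argument is identical): each coded packet carries a random coefficient vector and the corresponding combination of source pieces, and a peer can decode once the collected coefficient matrix reaches full rank.

        import numpy as np

        rng = np.random.default_rng(7)
        k, piece_len = 4, 8
        pieces = rng.integers(0, 2, size=(k, piece_len))       # source data (bits)

        def encode():
            """One coded packet: random nonzero coefficients plus XOR combination."""
            coeffs = rng.integers(0, 2, size=k)
            while not coeffs.any():
                coeffs = rng.integers(0, 2, size=k)
            payload = coeffs @ pieces % 2
            return coeffs, payload

        received = [encode() for _ in range(6)]                # a little overhead
        M = np.array([c for c, _ in received]) % 2

        # Gaussian elimination mod 2 to check decodability (rank == k)
        rank = 0
        for col in range(k):
            pivots = [r for r in range(rank, len(M)) if M[r, col]]
            if not pivots:
                continue
            M[[rank, pivots[0]]] = M[[pivots[0], rank]]        # swap pivot row up
            for r in range(len(M)):
                if r != rank and M[r, col]:
                    M[r] = (M[r] + M[rank]) % 2                # eliminate column
            rank += 1
        print("decodable:", rank == k)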

  14. Low complexity source and channel coding for mm-wave hybrid fiber-wireless links

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Vegas Olmos, Juan José; Pang, Xiaodan

    2014-01-01

    We report on the performance of channel and source coding applied to an experimentally realized hybrid fiber-wireless W-band link. Error control coding performance is presented for a wireless propagation distance of 3 m and 20 km fiber transmission. We report on peak signal-to-noise ratio performance...

  15. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.

  16. Automated Source Code Analysis to Identify and Remove Software Security Vulnerabilities: Case Studies on Java Programs

    OpenAIRE

    Natarajan Meghanathan

    2013-01-01

    The high-level contribution of this paper is to illustrate the development of generic solution strategies to remove software security vulnerabilities that could be identified using automated tools for source code analysis on software programs (developed in Java). We use the Source Code Analyzer and Audit Workbench automated tools, developed by HP Fortify Inc., for our testing purposes. We present case studies involving a file writer program embedded with features for password validation, and ...

  17. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.

  18. A Distributed Flow Rate Control Algorithm for Networked Agent System with Multiple Coding Rates to Optimize Multimedia Data Transmission

    Directory of Open Access Journals (Sweden)

    Shuai Zeng

    2013-01-01

    Full Text Available With the development of wireless technologies, mobile communication is applied more and more extensively in various walks of life, and the social network of both fixed and mobile users can be seen as a networked agent system. At present, many kinds of devices and access network technologies are in use, so different users in this networked agent system may need multimedia data at different coding rates to match their heterogeneous demands. This paper proposes a distributed flow rate control algorithm to optimize multimedia data transmission in a networked agent system in which various coding rates coexist. In the proposed algorithm, the transmission paths and upload bandwidth for data of different coding rates between the source node and the fixed and mobile nodes are appropriately arranged and controlled. On the one hand, the algorithm provides user nodes with data at differentiated coding rates and the corresponding flow rates. On the other hand, it networks the different coding rate data and user nodes, which realizes the sharing of upload bandwidth among user nodes that require data at different coding rates. The study models the proposed algorithm mathematically and compares a system adopting the proposed algorithm with an existing system through simulation experiments and mathematical analysis. The results show that the system adopting the proposed algorithm achieves higher upload bandwidth utilization of user nodes and lower upload bandwidth consumption of the source node.

  19. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    Science.gov (United States)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third

  20. Test of Effective Solid Angle code for the efficiency calculation of volume source

    Energy Technology Data Exchange (ETDEWEB)

    Kang, M. Y.; Kim, J. H.; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of); Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    It is hard to determine a full-energy (FE) absorption peak efficiency curve for an arbitrary volume source by experiment, which is why simulation and semi-empirical methods have been preferred so far, and many works have progressed in various ways. Moens et al. introduced the concept of the effective solid angle by considering the attenuation of γ-rays in the source, media and detector; this concept is the basis of a semi-empirical method. An Effective Solid Angle code (ESA code) has been developed over several years by the Applied Nuclear Physics Group at Seoul National University. The ESA code converts an experimental FE efficiency curve determined using a standard point source into one for a volume source. To test the performance of the ESA code, we measured point standard sources and voluminous certified reference material (CRM) γ-ray sources and compared them with the efficiency curves obtained in this study. The 200-1500 keV energy region is fitted well. NIST X-ray mass attenuation coefficient data are currently used to check the effect of linear attenuation only. In the future we will use the interaction cross-section data obtained from the XCOM code to check each contributing factor, such as the photoelectric effect, incoherent scattering and coherent scattering. To minimize the calculation time and simplify the code, optimization of the algorithm is needed.

  1. RETRANS - A tool to verify the functional equivalence of automatically generated source code with its specification

    International Nuclear Information System (INIS)

    Miedl, H.

    1998-01-01

    Following the competent technical standards (e.g. IEC 880) it is necessary to verify each step in the development process of safety critical software. This holds also for the verification of automatically generated source code. To avoid human errors during this verification step and to limit the cost effort a tool should be used which is developed independently from the development of the code generator. For this purpose ISTec has developed the tool RETRANS which demonstrates the functional equivalence of automatically generated source code with its underlying specification. (author)

  2. Use of source term code package in the ELEBRA MX-850 system

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-12-01

    The implementation of the source term code package on the ELEBRA MX-850 system is presented. The source term is formed when radioactive materials generated in the nuclear fuel leak toward the containment and the environment external to the reactor containment. The version implemented on the ELEBRA system is composed of five codes: MARCH 3, TRAPMELT 3, THCCA, VANESA and NAVA. The original example case was used; it consists of a small-LOCA accident in a PWR-type reactor. A sensitivity study for the TRAPMELT 3 code was carried out, modifying the 'TIME STEP' to estimate the CPU processing time for executing the original example case. (M.C.K.) [pt

  3. Eu-NORSEWInD - Assessment of Viability of Open Source CFD Code for the Wind Industry

    DEFF Research Database (Denmark)

    Stickland, Matt; Scanlon, Tom; Fabre, Sylvie

    2009-01-01

    Part of the overall NORSEWInD project is the use of LiDAR remote sensing (RS) systems mounted on offshore platforms to measure wind velocity profiles at a number of locations offshore. The data acquired from the offshore RS measurements will be fed into a large and novel wind speed dataset suitab...... between the results of simulations created by the commercial code FLUENT and the open source code OpenFOAM. An assessment of the ease with which the open source code can be used is also included....

  4. Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.

    Science.gov (United States)

    Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile

    2016-01-01

    This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues by building full-text search queries from combinations of these words. The queries are then run against all the ICD-10 codes until one returns the code in question as the match with the highest relative score. This method identifies the minimum number of words that must be provided in order for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
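
    The search for a minimal disambiguating word set can be illustrated compactly. The sketch below uses a three-entry toy corpus and a naive word-overlap scorer in place of a real engine such as Lucene; both are assumptions, not the paper's actual setup.

```python
from itertools import combinations

# Toy ICD-10-style corpus; real evaluations run against the full code list.
codes = {
    "E10": "type 1 diabetes mellitus",
    "E11": "type 2 diabetes mellitus",
    "O24": "diabetes mellitus in pregnancy",
}

def score(query_words, text):
    # Naive relevance: number of query words present in the entry text.
    return sum(w in text.split() for w in query_words)

def min_words_for_match(target):
    # Smallest combination of the entry's own words that ranks it strictly first.
    words = codes[target].split()
    for k in range(1, len(words) + 1):
        for combo in combinations(words, k):
            if score(combo, codes[target]) > max(
                    score(combo, codes[c]) for c in codes if c != target):
                return k, combo
    return None  # never uniquely matched by its own words

for code in codes:
    print(code, "->", min_words_for_match(code))
```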

  5. PRELIMINARY STUDY ON APPLICATION OF MAX PLUS ALGEBRA IN DISTRIBUTED STORAGE SYSTEM THROUGH NETWORK CODING

    Directory of Open Access Journals (Sweden)

    Agus Maman Abadi

    2016-04-01

    Full Text Available The increasing need for techniques of storing big data presents a new challenge. One way to address this challenge is the use of distributed storage systems. One strategy implemented in distributed data storage systems is the use of erasure codes applied to network coding. The codes used in this technique are based on an algebraic structure called a vector space. Some studies have also been carried out to create codes based on other algebraic structures, such as modules. In this study, we attempt to set up a code based on a semimodule, an algebraic structure that generalizes the module, by utilizing the max and sum operations of max-plus algebra. The results of this study indicate that the max operation and the addition operation of max-plus algebra cannot be used to establish a semimodule code, but by modifying the operation "+" to "min" we obtain a code based on a semimodule. Keywords: code, distributed storage systems, network coding, semimodule, max plus algebra
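
    For reference, the operations involved can be stated compactly; the sketch below is our reading of the abstract, not notation taken from the paper itself.

```latex
% Max-plus algebra on the carrier set R_max = R U {-infinity}:
\[
  a \oplus b \;=\; \max(a,b), \qquad
  a \otimes b \;=\; a + b, \qquad
  a, b \in \mathbb{R}_{\max} = \mathbb{R} \cup \{-\infty\}.
\]
% The modification reported in the abstract replaces the role of "+" by "min",
% i.e. the semimodule code is built over the pair (max, min) rather than (max, +).
```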

  6. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories

  7. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories.

  8. Lysimeter data as input to performance assessment source term codes

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.; Rogers, R.D.; Sullivan, T.

    1992-01-01

    The Field Lysimeter Investigation: Low-Level Waste Data Base Development Program is obtaining information on the performance of radioactive waste in a disposal environment. Waste forms fabricated using ion-exchange resins from EPICOR-II prefilters employed in the cleanup of the Three Mile Island (TMI) Nuclear Power Station are being tested to develop a low-level waste data base and to obtain information on the survivability of waste forms in a disposal environment. In this paper, radionuclide releases from waste forms during the first seven years of sampling are presented and discussed. The application of lysimeter data in performance assessment source term models is presented. Initial results from the use of the data in two models are discussed

  9. SCATTER: Source and Transport of Emplaced Radionuclides: Code documentation

    International Nuclear Information System (INIS)

    Longsine, D.E.

    1987-03-01

    SCATTER simulates several processes leading to the release of radionuclides to the site subsystem and then simulates transport of the released radionuclides via groundwater to the biosphere. The processes accounted for in quantifying release rates to a groundwater migration path include radioactive decay and production, leaching, solubilities, and the mixing of particles with incoming uncontaminated fluid. Several decay chains of arbitrary length can be considered simultaneously. The release rates then serve as source rates to a numerical technique which solves convective-dispersive transport for each decay chain. The decay chains are allowed to have branches, and each member can have a different retardation factor
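
    The decay-and-production bookkeeping that feeds such source rates can be sketched in a few lines. The following is a minimal illustration for a three-member chain with a common leach-rate constant; the half-lives, leach rate and initial inventory are invented, and branching and retardation are omitted for brevity.

```python
import numpy as np
from scipy.linalg import expm

# Decay constants for a hypothetical 3-member chain (1/yr).
lam = np.log(2) / np.array([30.0, 5.0, 100.0])
leach = 1e-3                       # first-order leach-rate constant (1/yr), assumed

# dN/dt = A N : each member decays and leaches; parents feed daughters.
A = np.diag(-(lam + leach))
A[1, 0], A[2, 1] = lam[0], lam[1]

N0 = np.array([1.0e6, 0.0, 0.0])   # initial inventory, arbitrary units
for t in (1.0, 10.0, 100.0):
    N = expm(A * t) @ N0           # matrix-exponential solution of the chain
    print(f"t = {t:6.1f} yr  release rates:", leach * N)
```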

  10. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization.

    Science.gov (United States)

    Yang, Xinyu; Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

    Authorship attribution is the task of identifying the most likely author of a given sample among a set of known candidate authors. It can be applied not only to discover the original author of plain text, such as novels, blogs, emails and posts, but also to identify the programmer of source code. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to the settlement of authorship disputes and software plagiarism detection. This paper proposes a new method to identify the programmer of Java source code samples with higher accuracy. To this end, it first introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, 19 dimensions in total. These metrics are then input to the neural network for supervised learning, the weights of which are produced by the hybrid PSO-BP algorithm. The effectiveness of the proposed method is evaluated on a collected dataset with 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy. A comparison with previous work on authorship attribution of source code for Java illustrates that the proposed method outperforms the others overall, also with acceptable overhead.
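
    A flavor of the feature extraction step can be given in a short sketch. The metrics below are illustrative analogues of lexical and layout features; they are not the paper's 19-dimensional set, and the classifier (the PSO-trained BP network) is omitted.

```python
import re

def layout_features(src: str) -> dict:
    # A handful of lexical/layout style metrics computed on a source string.
    lines = src.splitlines() or [""]
    tokens = re.findall(r"\w+", src)
    return {
        "avg_line_len": sum(map(len, lines)) / len(lines),
        "blank_line_ratio": sum(not l.strip() for l in lines) / len(lines),
        "tab_indent_ratio": sum(l.startswith("\t") for l in lines) / len(lines),
        "token_diversity": len(set(tokens)) / max(len(tokens), 1),
        "brace_own_line_ratio": sum(l.strip() == "{" for l in lines) / len(lines),
    }

java_snippet = "public class A {\n    int x;\n\n    void f() { x++; }\n}\n"
print(layout_features(java_snippet))
```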

  11. Tetrodotoxin: Chemistry, Toxicity, Source, Distribution and Detection

    Directory of Open Access Journals (Sweden)

    Vaishali Bane

    2014-02-01

    Full Text Available Tetrodotoxin (TTX) is a naturally occurring toxin that has been responsible for human intoxications and fatalities. Its usual route of toxicity is via the ingestion of contaminated puffer fish, which are a culinary delicacy, especially in Japan. TTX was believed to be confined to regions of South East Asia, but recent studies have demonstrated that the toxin has spread to regions in the Pacific and the Mediterranean. There is no known antidote to TTX, which is a powerful sodium channel inhibitor. This review aims to collect pertinent information available to date on TTX and its analogues, with a special emphasis on the structure, aetiology, distribution, effects and the analytical methods employed for its detection.

  12. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Directory of Open Access Journals (Sweden)

    Pierre Siohan

    2005-05-01

    Full Text Available Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  13. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Science.gov (United States)

    Guillemot, Christine; Siohan, Pierre

    2005-12-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  14. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  15. Comparison of DT neutron production codes MCUNED, ENEA-JSI source subroutine and DDT

    Energy Technology Data Exchange (ETDEWEB)

    Čufar, Aljaž, E-mail: aljaz.cufar@ijs.si [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Lengar, Igor; Kodeli, Ivan [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Milocco, Alberto [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Sauvan, Patrick [Departamento de Ingeniería Energética, E.T.S. Ingenieros Industriales, UNED, C/Juan del Rosal 12, 28040 Madrid (Spain); Conroy, Sean [VR Association, Uppsala University, Department of Physics and Astronomy, PO Box 516, SE-75120 Uppsala (Sweden); Snoj, Luka [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)

    2016-11-01

    Highlights: • Results of three codes capable of simulating accelerator-based DT neutron generators were compared on a simple model in which only a thin target made of a mixture of titanium and tritium is present. Two typical deuteron beam energies, 100 keV and 250 keV, were used in the comparison. • Comparisons of the angular dependence of the total neutron flux and spectrum, as well as the spectrum of all neutrons emitted from the target, show general agreement of the results but also some noticeable differences. • A comparison of figures of merit of the calculations using different codes showed that the computational time necessary to achieve the same statistical uncertainty can vary by more than a factor of 30 depending on which code is used to simulate the DT neutron generator. - Abstract: As the DT fusion reaction produces neutrons with energies significantly higher than in fission reactors, special fusion-relevant benchmark experiments are often performed using DT neutron generators. However, commonly used Monte Carlo particle transport codes such as MCNP or TRIPOLI cannot be directly used to analyze these experiments since they do not have the capability to model the production of DT neutrons. Three of the available approaches to modelling the DT neutron generator source are the MCUNED code, the ENEA-JSI DT source subroutine and the DDT code. The MCUNED code is an extension of the well-established and validated MCNPX Monte Carlo code. The ENEA-JSI source subroutine was originally prepared for the modelling of the FNG experiments using different versions of the MCNP code (MCNP-4, -5, -X) and was later extended to allow the modelling of both DT and DD neutron sources. The DDT code prepares the DT source definition file (SDEF card in MCNP) which can then be used in different versions of the MCNP code. In the paper the methods for the simulation of DT neutron production used in the codes are briefly described and compared for the case of a

  16. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes

    International Nuclear Information System (INIS)

    Etienne, Zachariah B; Paschalidis, Vasileios; Haas, Roland; Mösta, Philipp; Shapiro, Stuart L

    2015-01-01

    In the extreme violence of merger and mass accretion, compact objects like black holes and neutron stars are thought to launch some of the most luminous outbursts of electromagnetic and gravitational wave energy in the Universe. Modeling these systems realistically is a central problem in theoretical astrophysics, but has proven extremely challenging, requiring the development of numerical relativity codes that solve Einstein's equations for the spacetime, coupled to the equations of general relativistic (ideal) magnetohydrodynamics (GRMHD) for the magnetized fluids. Over the past decade, the Illinois numerical relativity (ILNR) group's dynamical spacetime GRMHD code has proven itself as a robust and reliable tool for theoretical modeling of such GRMHD phenomena. However, the code was written ‘by experts and for experts’ of the code, with a steep learning curve that would severely hinder community adoption if it were open-sourced. Here we present IllinoisGRMHD, which is an open-source, highly extensible rewrite of the original closed-source GRMHD code of the ILNR group. Reducing the learning curve was the primary focus of this rewrite, with the goal of facilitating community involvement in the code's use and development, as well as the minimization of human effort in generating new science. IllinoisGRMHD also saves computer time, generating roundoff-precision identical output to the original code on adaptive-mesh grids, but nearly twice as fast at scales of hundreds to thousands of cores. (paper)

  17. Use of CITATION code for flux calculation in neutron activation analysis with voluminous sample using an Am-Be source

    International Nuclear Information System (INIS)

    Khelifi, R.; Idiri, Z.; Bode, P.

    2002-01-01

    The CITATION code, based on neutron diffusion theory, was used for flux calculations inside voluminous samples in prompt gamma activation analysis with an isotopic neutron source (Am-Be). The code uses specific parameters related to the source energy spectrum and the materials of the irradiation system (shielding, reflector). The flux distribution (thermal and fast) was calculated in three-dimensional geometry for the system: air, polyethylene and a cuboidal water sample (50x50x50 cm). The thermal flux was calculated at a series of points inside the sample. The results agreed reasonably well with observed values. The maximum thermal flux was observed at a depth of 3.2 cm, while CITATION gave 3.7 cm. Beyond a depth of 7.2 cm, the thermal-to-fast flux ratio increases by up to a factor of two, which allows us to optimise the position of the detection system for in-situ PGAA

  18. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Fossorier Marc

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  19. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Marc Fossorier

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.
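
    The uncoded baseline of such a framework is easy to reproduce numerically. Below is a minimal bit-error-rate simulation of Gray-mapped QPSK (M = 4) over AWGN; the SNR points and sample size are arbitrary, and none of the paper's source coding, TCM or capacity analysis is included.

```python
import numpy as np

rng = np.random.default_rng(1)

def qpsk_ber(ebn0_db: float, n_bits: int = 200_000) -> float:
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, (n_bits // 2, 2))
    # Gray-mapped QPSK: one bit on each quadrature rail, symbol energy Es = 1.
    sym = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)
    noise_std = np.sqrt(1 / (4 * ebn0))       # per-dimension sigma, Eb = Es / 2
    rx = sym + noise_std * (rng.standard_normal(sym.shape)
                            + 1j * rng.standard_normal(sym.shape))
    hard = np.column_stack([rx.real < 0, rx.imag < 0]).astype(int)
    return float(np.mean(hard != bits))

for snr_db in (0, 4, 8):
    print(f"Eb/N0 = {snr_db} dB -> BER ~ {qpsk_ber(snr_db):.2e}")
```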

  20. Nuclear model codes and related software distributed by the OECD/NEA Data Bank

    International Nuclear Information System (INIS)

    Sartori, E.

    1993-01-01

    Software and data for nuclear energy applications is acquired, tested and distributed by several information centres; in particular, relevant computer codes are distributed internationally by the OECD/NEA Data Bank (France) and by ESTSC and EPIC/RSIC (United States). This activity is coordinated among the centres and is extended outside the OECD area through an arrangement with the IAEA. This article covers more specifically the availability of nuclear model codes and also those codes which further process their results into data sets needed for specific nuclear application projects. (author). 2 figs

  1. Ionizing nightglow: sources, intensity, and spatial distribution

    International Nuclear Information System (INIS)

    Young, J.M.; Troy, B.E. Jr.; Johnson, C.Y.; Holmes, J.C.

    1975-01-01

    Photometers carried aboard an Aerobee rocket mapped the ultraviolet night sky at White Sands, New Mexico. Maps for five 300 Å passbands in the wavelength range 170 to 1400 Å reveal spatial radiation patterns unique to each spectral subregion. The major ultraviolet features seen in these maps are ascribed to a variety of sources: 1) solar Lyman α (1216 Å) and Lyman β (1026 Å), resonantly scattered by geocoronal hydrogen; 2) solar HeII (304 Å) resonantly scattered by ionized helium in the Earth's plasmasphere; 3) solar HeI (584 Å) resonantly scattered by neutral helium in the interstellar wind and Doppler shifted so that it penetrates the Earth's helium blanket; and 4) starlight in the 912 to 1400 Å band, primarily from early-type stars in the Orion region. Not explained is the presence of small, but measurable, albedo signals observed near the peak of flight. Intensities vary from several kilorayleighs for Lyman α to a few rayleighs for HeII. (auth)

  2. Brightness distribution data on 2918 radio sources at 365 MHz

    International Nuclear Information System (INIS)

    Cotton, W.D.; Owen, F.N.; Ghigo, F.D.

    1975-01-01

    This paper is the second in a series describing the results of a program attempting to fit models of the brightness distribution to radio sources observed at 365 MHz with the Bandwidth Synthesis Interferometer (BSI) operated by the University of Texas Radio Astronomy Observatory. Results for a further 2918 radio sources are given. An unresolved model and three symmetric extended models with angular sizes in the range 10--70 arcsec were attempted for each radio source. In addition, for 348 sources for which other observations of brightness distribution are published, the reference to the observations and a brief description are included

  3. Distributional sources for Newman's holomorphic Coulomb field

    International Nuclear Information System (INIS)

    Kaiser, Gerald

    2004-01-01

    Newman (1973 J. Math. Phys. 14 102-3) considered the holomorphic extension Ẽ(z) of the Coulomb field E(x) in R^3. From an analysis of its multipole expansion, he concluded that the real and imaginary parts E(x+iy) ≡ Re Ẽ(x+iy), H(x+iy) ≡ Im Ẽ(x+iy), viewed as functions of x, are the electric and magnetic fields generated by a spinning ring of charge R. This represents the EM part of the Kerr-Newman solution to the Einstein-Maxwell equations (Newman E T and Janis A I 1965 J. Math. Phys. 6 915-7; Newman E T et al 1965 J. Math. Phys. 6 918-9). As already pointed out in Newman and Janis (1965 J. Math. Phys. 6 915-7), this interpretation is somewhat problematic since the fields are double-valued. To make them single-valued, a branch cut must be introduced so that R is replaced by a charged disc D having R as its boundary. In the context of curved spacetime, D becomes a spinning disc of charge and mass representing the singularity of the Kerr-Newman solution. Here we confirm the above interpretation of E and H without resorting to asymptotic expansions, by computing the charge and current densities directly as distributions in R^3 supported in D. This will show that D spins rigidly at the critical rate so that its rim R moves at the speed of light

  4. Electromagnetic projectile acceleration utilizing distributed energy sources

    International Nuclear Information System (INIS)

    Parker, J.V.

    1982-01-01

    Circuit equations are derived for an electromagnetic projectile accelerator (railgun) powered by a large number of capacitive discharge circuits distributed along its length. The circuit equations are put into dimensionless form and the parameters governing the solutions derived. After specializing the equations to constant spacing between circuits, the case of lossless rails and negligible drag is analyzed to show that the electrical-to-kinetic energy transfer efficiency is equal to sigma/2, where sigma = 2mS/(Lq_0^2) and m is the projectile mass, S the distance between discharge circuits, L the rail inductance per unit length, and q_0 the charge on the first-stage capacitor. For sigma = 2 complete transfer of electrical to kinetic energy is predicted, while for sigma > 2 the projectile-discharge circuit system is unstable. Numerical solutions are presented both for lossless rails and for finite rail resistance. When rail resistance is included, >70% transfer is calculated for accelerators of arbitrary length. The problem of projectile startup is considered and a simple modification of the first two stages is described which provides proper startup. Finally, the results of the numerical solutions are applied to a practical railgun design. A research railgun designed for repeated operation at 50 km/sec is described. It would have an overall length of 77 m, an electrical efficiency of 81%, a stored energy per stage of 105 kJ, and a charge transfer of <50 C per stage. A railgun of this design appears to be practicable with current pulsed power technology
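
    The dimensionless relation quoted in the abstract can be restated in display form; this is our transcription of the abstract's own symbols, with L the rail inductance per unit length:

```latex
\[
  \sigma \;=\; \frac{2\,m\,S}{L\,q_0^{2}}, \qquad
  \eta_{\mathrm{transfer}} \;=\; \frac{\sigma}{2}, \qquad 0 < \sigma \le 2,
\]
% complete electrical-to-kinetic transfer at sigma = 2; the
% projectile-discharge system is unstable for sigma > 2.
```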

  5. Searching Malware and Sources of Its Distribution in the Internet

    Directory of Open Access Journals (Sweden)

    L. L. Protsenko

    2011-09-01

    Full Text Available This article presents, for the first time, an algorithm developed by the author for finding malware and the sources of its distribution, based on HijackThis logs published on the Internet.

  6. Revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources

    International Nuclear Information System (INIS)

    Wheatley, J. S.

    2004-01-01

    The revised Code of Conduct on the Safety and Security of Radioactive Sources is aimed primarily at Governments, with the objective of achieving and maintaining a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations; and through the fostering of international co-operation. It focuses on sealed radioactive sources and provides guidance on legislation, regulations and the regulatory body, and import/export controls. Nuclear materials (except for sources containing 239Pu), as defined in the Convention on the Physical Protection of Nuclear Materials, are not covered by the revised Code, nor are radioactive sources within military or defence programmes. An earlier version of the Code was published by IAEA in 2001. At that time, agreement was not reached on a number of issues, notably those relating to the creation of comprehensive national registries for radioactive sources, obligations of States exporting radioactive sources, and the possibility of unilateral declarations of support. The need to further consider these and other issues was highlighted by the events of 11th September 2001. Since then, the IAEA's Secretariat has been working closely with Member States and relevant International Organizations to achieve consensus. The text of the revised Code was finalized at a meeting of technical and legal experts in August 2003, and it was submitted to IAEA's Board of Governors for approval in September 2003, with a recommendation that the IAEA General Conference adopt it and encourage its wide implementation. The IAEA General Conference, in September 2003, endorsed the revised Code and urged States to work towards following the guidance contained within it. This paper summarizes the history behind the revised Code, its content and the outcome of the discussions within the IAEA Board of Governors and General Conference. (Author) 8 refs

  7. Exploiting the Error-Correcting Capabilities of Low Density Parity Check Codes in Distributed Video Coding using Optical Flow

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Søgaard, Jacob; Salmistraro, Matteo

    2012-01-01

    We consider Distributed Video Coding (DVC) in the presence of communication errors. First, we present DVC side information generation based on a new method of optical-flow-driven frame interpolation, in which a highly optimized TV-L1 algorithm is used for the flow calculations and three flows are combined....... Thereafter, methods for exploiting the error-correcting capabilities of the LDPCA code in DVC are investigated. The proposed frame interpolation adds a symmetric flow constraint to the standard forward-backward frame interpolation scheme, which improves quality and the handling of large motion. The three flows are combined in one solution. The proposed frame interpolation method consistently outperforms an overlapped block motion compensation scheme and a previous TV-L1 optical flow frame interpolation method, with average PSNR improvements of 1.3 dB and 2.3 dB respectively. For a GOP size of 2

  8. Iterative Multiview Side Information for Enhanced Reconstruction in Distributed Video Coding

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available Distributed video coding (DVC) is a new paradigm for video compression based on the information-theoretical results of Slepian and Wolf (SW) and Wyner and Ziv (WZ). DVC entails low-complexity encoders as well as separate encoding of correlated video sources. This is particularly attractive for multiview camera systems in video surveillance and camera sensor network applications, where low complexity is required at the encoder. In addition, the separate encoding of the sources implies no communication between the cameras in a practical scenario. This is an advantage since communication is time and power consuming and requires complex networking. In this work, different intercamera estimation techniques for side information (SI) generation are explored and compared in terms of estimation quality, complexity, and rate-distortion (RD) performance. Further, a technique called iterative multiview side information (IMSI) is introduced, where the final SI is used in an iterative reconstruction process. The simulation results show that IMSI significantly improves the RD performance for video with significant motion and activity. Furthermore, DVC outperforms AVC/H.264 Intra for video with average and low motion, but it is still inferior to the Inter No Motion and Inter Motion modes.

  9. Development of Coupled Interface System between the FADAS Code and a Source-term Evaluation Code XSOR for CANDU Reactors

    International Nuclear Information System (INIS)

    Son, Han Seong; Song, Deok Yong; Kim, Ma Woong; Shin, Hyeong Ki; Lee, Sang Kyu; Kim, Hyun Koon

    2006-01-01

    An accident prevention system is essential to the industrial security of the nuclear industry, and a more effective accident prevention system will help promote a safety culture and win public acceptance for the nuclear power industry. The FADAS (Following Accident Dose Assessment System), which is a part of the Computerized Advisory System for a Radiological Emergency (CARE) system at KINS, is used for protection against nuclear accidents. In order to make the FADAS system more effective for CANDU reactors, it is necessary to develop various accident scenarios and a reliable database of source terms. This study introduces the construction of a coupled interface system between the FADAS and a source-term evaluation code, aimed at improving the applicability of the CANDU Integrated Safety Analysis System (CISAS) for CANDU reactors

  10. Remodularizing Java Programs for Improved Locality of Feature Implementations in Source Code

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2011-01-01

    Explicit traceability between features and source code is known to help programmers understand and modify programs during maintenance tasks. However, the complex relations between features and their implementations are not evident from the source code of object-oriented Java programs....... Consequently, the implementations of individual features are difficult to locate, comprehend, and modify in isolation. In this paper, we present a novel remodularization approach that improves the representation of features in the source code of Java programs. Both forward and reverse restructurings...... are supported through on-demand bidirectional restructuring between feature-oriented and object-oriented decompositions. The approach includes a feature location phase based on tracing program execution, a feature representation phase that reallocates classes into a new package structure based on single

  11. Sediment sources and their Distribution in Chwaka Bay, Zanzibar ...

    African Journals Online (AJOL)

    This work establishes sediment sources, character and their distribution in Chwaka Bay using (i) stable isotopes compositions of organic carbon (OC) and nitrogen, (ii) contents of OC, nitrogen and CaCO3, (iii) C/N ratios, (iv) distribution of sediment mean grain size and sorting, and (v) thickness of unconsolidated sediments.

  12. Activity distribution of a cobalt-60 teletherapy source

    International Nuclear Information System (INIS)

    Jaffray, D.A.; Munro, P.; Battista, J.J.; Fenster, A.

    1991-01-01

    In the course of quantifying the effect of radiation source size on the spatial resolution of portal images, a concentric ring structure in the activity distribution of a Cobalt-60 teletherapy source has been observed. The activity distribution was measured using a strip integral technique and confirmed independently by a contact radiograph of an identical but inactive source replica. These two techniques suggested that this concentric ring structure is due to the packing configuration of the small 60Co pellets that constitute the source. The source modulation transfer function (MTF) showed that this ring structure has a negligible influence on the spatial resolution of therapy images when compared to the effect of the large size of the 60Co source

  13. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    Anon.

    2001-01-01

    The objective of the code of conduct is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost. (N.C.)

  14. Dose Distribution Calculation Using MCNPX Code in the Gamma-ray Irradiation Cell

    International Nuclear Information System (INIS)

    Kim, Yong Ho

    1991-02-01

    60Co gamma irradiators have long been used for food sterilization, plant mutation and the development of radio-protective agents, radio-sensitizers and other purposes. The Applied Radiological Science Research Institute of Cheju National University has a multipurpose gamma irradiation facility loaded with an MDS Nordion standard 60Co source (C-188), of which the initial activity was 400 TBq (10,800 Ci) on February 19, 2004. This panoramic gamma irradiator is designed to irradiate in all directions various samples, such as plants, cultured cells and mice, to administer given radiation doses. In order to give accurate doses to irradiation samples, appropriate methods of evaluating, both by calculation and by measurement, the radiation doses delivered to the samples should be set up. Computational models based on the MCNPX code have been developed to evaluate the radiation dose distributions inside the irradiation chamber and the radiation doses delivered to typical biological samples which are frequently irradiated in the facility. The horizontal and vertical dose distributions inside the irradiation chamber have been calculated and compared with measured data obtained with radiation dosimeters to verify the computational models. The radiation dosimeters employed were a Farmer-type ion chamber and MOSFET dosimeters. Radiation doses delivered to cultured cell samples contained in test tubes and to a mouse fixed in an irradiation cage were calculated with the computational models and compared with the measured data. The computational models were also tested to see whether they can accurately simulate the case where a thick lead shield is placed between the source and the detector. Three tally options of the MCNPX code, F4, F5 and F6, are alternately used to see which option produces optimum results. The computational models are also used to calculate gamma-ray energy spectra of a BGO scintillator at

  15. Packing simulation code to calculate distribution function of hard spheres by Monte Carlo method : MCRDF

    International Nuclear Information System (INIS)

    Murata, Isao; Mori, Takamasa; Nakagawa, Masayuki; Shirai, Hiroshi.

    1996-03-01

    High Temperature Gas-cooled Reactors (HTGRs) employ spherical fuels named coated fuel particles (CFPs), each consisting of a microsphere of low-enriched UO2 with coating layers to prevent FP release. A great many of these spherical fuels are distributed randomly in the core. Therefore, the nuclear design of HTGRs is generally performed on the basis of the multigroup approximation using a diffusion code, an SN transport code or a group-wise Monte Carlo code. This report summarizes a Monte Carlo hard-sphere packing simulation code that simulates the packing of equal hard spheres and evaluates the required probability distributions, which are used in the application of the new Monte Carlo calculation method developed to treat randomly distributed spherical fuels with the continuous-energy Monte Carlo method. Using this code, various statistical quantities are obtained, namely the Radial Distribution Function (RDF), the Nearest Neighbor Distribution (NND), the 2-dimensional RDF and so on, for random packing as well as for the ordered close packings FCC and BCC. (author)
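
    The central statistic, the RDF of a hard-sphere packing, is straightforward to estimate once a packing exists. The sketch below generates a low-density packing by random sequential addition (a simple stand-in for the report's Monte Carlo packing) and histograms pair distances in a periodic box; the box size, sphere diameter and count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
L, d, n = 10.0, 1.0, 200            # box edge, sphere diameter, sphere count

# Random sequential addition of non-overlapping spheres in a periodic box.
centers = []
while len(centers) < n:
    p = rng.random(3) * L
    dv = (np.array(centers) - p) if centers else np.empty((0, 3))
    dv -= L * np.round(dv / L)      # minimum-image convention
    if np.all(np.einsum("ij,ij->i", dv, dv) >= d * d):
        centers.append(p)
centers = np.array(centers)

# Pair distances, again with minimum images.
diff = centers[:, None, :] - centers[None, :, :]
diff -= L * np.round(diff / L)
r = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(n, 1)]

# g(r): pair-distance histogram normalized by the ideal-gas shell expectation.
edges = np.linspace(d, 4 * d, 31)
hist, _ = np.histogram(r, bins=edges)
rho = n / L**3
shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
g = hist / (0.5 * n * rho * shell)
print(np.round(g, 2))
```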

  16. The Competition Between a Localised and Distributed Source of Buoyancy

    Science.gov (United States)

    Partridge, Jamie; Linden, Paul

    2012-11-01

    We propose a new mathematical model to study the competition between localised and distributed sources of buoyancy within a naturally ventilated filling box. The main controlling parameters in this configuration are the buoyancy fluxes of the distributed and local sources, specifically their ratio Ψ. The steady-state dynamics of the flow are heavily dependent on this parameter. For large Ψ, where the distributed source dominates, we find the space becomes well mixed, as expected if driven by a distributed source alone. Conversely, for small Ψ we find the space reaches a stable two-layer stratification. This is analogous to the classical case of a purely local source, but here the lower layer is buoyant compared to the ambient, due to the constant flux of buoyancy emanating from the distributed source. The ventilation flow rate, the buoyancy of the layers and also the location of the interface height, which separates the two-layer stratification, are obtainable from the model. To validate the theoretical model, small-scale laboratory experiments were carried out. Water was used as the working medium, with buoyancy being driven directly by temperature differences. Theoretical results were compared with experimental data and overall good agreement was found. A CASE award project with Arup.

  17. Distribution of absorbed dose in human eye simulated by SRNA-2KG computer code

    International Nuclear Information System (INIS)

    Ilic, R.; Pesic, M.; Pavlovic, R.; Mostacci, D.

    2003-01-01

    The rapidly increasing performance of personal computers and the development of proton transport codes based on Monte Carlo methods will very soon allow the introduction of computer-planned proton therapy as a normal activity in regular hospital procedures. A description of the SRNA code used for such applications and results of calculated distributions of proton absorbed dose in the human eye are given in this paper. (author)

  18. Documentation for grants equal to tax model: Volume 3, Source code

    International Nuclear Information System (INIS)

    Boryczka, M.K.

    1986-01-01

    The GETT model is capable of forecasting the amount of tax liability associated with all property owned and all activities undertaken by the US Department of Energy (DOE) in site characterization and repository development. The GETT program is a user-friendly, menu-driven model developed using dBASE III™, a relational data base management system. The data base for GETT consists primarily of eight separate dBASE III™ files corresponding to each of the eight taxes (real property, personal property, corporate income, franchise, sales, use, severance, and excise) levied by State and local jurisdictions on business property and activity. Additional smaller files help to control model inputs and reporting options. Volume 3 of the GETT model documentation is the source code. The code is arranged primarily by the eight tax types. Other code files include those for JURISDICTION, SIMULATION, VALIDATION, TAXES, CHANGES, REPORTS, GILOT, and GETT. The code has been verified through hand calculations

  19. WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection

    Directory of Open Access Journals (Sweden)

    Deqiang Fu

    2017-01-01

    Full Text Available In this paper, we introduce a source code plagiarism detection method named WASTK (Weighted Abstract Syntax Tree Kernel) for computer science education. Different from other plagiarism detection methods, WASTK takes aspects other than the raw similarity between programs into account. WASTK first transforms the source code of a program into an abstract syntax tree and then obtains the similarity by calculating the tree kernel of two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency) in the field of information retrieval is applied: each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods like Sim and JPlag.
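
    The core idea transposes readily to a small sketch. The version below works on Python sources via the standard ast module rather than Java, and replaces the true tree kernel with a TF-IDF-weighted cosine over node-type counts; it is a simplification for illustration, not the WASTK algorithm itself.

```python
import ast
import math
from collections import Counter

def node_counts(src: str) -> Counter:
    # Bag of AST node-type names for one program.
    return Counter(type(n).__name__ for n in ast.walk(ast.parse(src)))

def similarity(src_a: str, src_b: str, corpus: list[str]) -> float:
    docs = [node_counts(s) for s in corpus]
    # IDF: node types common to every submission (boilerplate) get weight 0.
    idf = {k: math.log(len(docs) / sum(k in d for d in docs))
           for d in docs for k in d}
    a, b = node_counts(src_a), node_counts(src_b)
    dot = sum(idf.get(k, 0.0) ** 2 * a[k] * b[k] for k in set(a) | set(b))
    na = math.sqrt(sum((idf.get(k, 0.0) * v) ** 2 for k, v in a.items()))
    nb = math.sqrt(sum((idf.get(k, 0.0) * v) ** 2 for k, v in b.items()))
    return dot / (na * nb) if na and nb else 0.0

p1 = "def f(xs):\n    return sum(x * x for x in xs)\n"
p2 = "def g(v):\n    return sum(e * e for e in v)\n"
print(similarity(p1, p2, [p1, p2, "print('hello')\n"]))  # ~1.0: renamed copy
```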

  20. Determining the temperature and density distribution from a Z-pinch radiation source

    International Nuclear Information System (INIS)

    Matuska, W.; Lee, H.

    1997-01-01

    High-temperature radiation sources exceeding one hundred eV can be produced via z-pinches using currently available pulsed power. The usual approach to comparing z-pinch simulations and experimental data is to convert the radiation output at the source, whose temperature and density distributions are computed with a 2-D MHD code, into simulated data such as a spectrometer reading. This conversion process involves a radiation transfer calculation through the axially symmetric source, assuming local thermodynamic equilibrium (LTE), and folding the radiation that reaches the detector with the frequency-dependent response function. In this paper the authors propose a different approach by which they can determine the temperature and density distributions of the radiation source directly from spatially resolved spectral data. This unfolding process is reliable and unambiguous for the ideal case where LTE holds and the source is axially symmetric. In reality, imperfect LTE and axial symmetry will introduce inaccuracies into the unfolded distributions. The authors use a parameter optimization routine to find the temperature and density distributions that best fit the data. They know from past experience that the radiation source resulting from the implosion of a thin foil does not exhibit good axial symmetry. However, recent experiments carried out at Sandia National Laboratory using multiple wire arrays were very promising in achieving reasonably good symmetry. For these experiments the method will provide a valuable diagnostic tool

  1. Rascal: A domain specific language for source code analysis and manipulation

    NARCIS (Netherlands)

    P. Klint (Paul); T. van der Storm (Tijs); J.J. Vinju (Jurgen); A. Walenstein; S. Schuppe

    2009-01-01

    Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This

  2. RASCAL: a domain specific language for source code analysis and manipulation

    NARCIS (Netherlands)

    Klint, P.; Storm, van der T.; Vinju, J.J.

    2009-01-01

    Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This impedance

  3. From system requirements to source code: transitions in UML and RUP

    Directory of Open Access Journals (Sweden)

    Stanisław Wrycza

    2011-06-01

    Full Text Available Among UML-related books there are many manuals explaining the language specification. Only some of the books mentioned concentrate on the practical aspects of using the UML language effectively with CASE tools and RUP. The current paper presents transitions from system requirements specification to structural source code, useful when developing an information system.

  4. Precise Mapping Of A Spatially Distributed Radioactive Source

    International Nuclear Information System (INIS)

    Beck, A.; Caras, I.; Piestum, S.; Sheli, E.; Melamud, Y.; Berant, S.; Kadmon, Y.; Tirosh, D.

    1999-01-01

    Spatial distribution measurement of radioactive sources is a routine task in the nuclear industry. The precision required of each measurement depends upon the specific application. However, the technological edge of this precision is driven by the production of standards for calibration. Within this definition, the most demanding field is the calibration of standards for medical equipment. In this paper, a semi-empirical method for controlling the measurement precision is demonstrated, using a relatively simple laboratory apparatus. The spatial distribution of the source radioactivity is measured as part of the quality assurance tests during the production of flood sources. These sources are further used in the calibration of medical gamma cameras. A typical flood source is a 40 x 60 cm² plate with an activity of 10 mCi (or more) of the 57Co isotope. The measurement set-up is based on a single NaI(Tl) scintillator with a photomultiplier tube, moving on an X-Y table which scans the flood source. In this application the source is required to have a uniform activity distribution over its surface

  5. Assessment of subchannel code ASSERT-PV for flow-distribution predictions

    International Nuclear Information System (INIS)

    Nava-Dominguez, A.; Rao, Y.F.; Waddington, G.M.

    2014-01-01

    Highlights: • Assessment of the subchannel code ASSERT-PV 3.2 for the prediction of flow distribution. • Open literature and in-house experimental data to quantify ASSERT-PV predictions. • Model changes assessed against vertical and horizontal flow experiments. • Improvement of flow-distribution predictions under CANDU-relevant conditions. - Abstract: This paper reports an assessment of the recently released subchannel code ASSERT-PV 3.2 for the prediction of flow-distribution in fuel bundles, including subchannel void fraction, quality and mass fluxes. Experimental data from open literature and from in-house tests are used to assess the flow-distribution models in ASSERT-PV 3.2. The prediction statistics using the recommended model set of ASSERT-PV 3.2 are compared to those from previous code versions. Separate-effects sensitivity studies are performed to quantify the contribution of each flow-distribution model change or enhancement to the improvement in flow-distribution prediction. The assessment demonstrates significant improvement in the prediction of flow-distribution in horizontal fuel channels containing CANDU bundles

  6. Assessment of subchannel code ASSERT-PV for flow-distribution predictions

    Energy Technology Data Exchange (ETDEWEB)

    Nava-Dominguez, A., E-mail: navadoma@aecl.ca; Rao, Y.F., E-mail: raoy@aecl.ca; Waddington, G.M., E-mail: waddingg@aecl.ca

    2014-08-15

    Highlights: • Assessment of the subchannel code ASSERT-PV 3.2 for the prediction of flow distribution. • Open literature and in-house experimental data to quantify ASSERT-PV predictions. • Model changes assessed against vertical and horizontal flow experiments. • Improvement of flow-distribution predictions under CANDU-relevant conditions. - Abstract: This paper reports an assessment of the recently released subchannel code ASSERT-PV 3.2 for the prediction of flow-distribution in fuel bundles, including subchannel void fraction, quality and mass fluxes. Experimental data from open literature and from in-house tests are used to assess the flow-distribution models in ASSERT-PV 3.2. The prediction statistics using the recommended model set of ASSERT-PV 3.2 are compared to those from previous code versions. Separate-effects sensitivity studies are performed to quantify the contribution of each flow-distribution model change or enhancement to the improvement in flow-distribution prediction. The assessment demonstrates significant improvement in the prediction of flow-distribution in horizontal fuel channels containing CANDU bundles.

  7. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    Science.gov (United States)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and it solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation and, as a result, are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and was completed in Fall 2015. Phase 2 is focused on WEC performance and is scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable power take-off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be

  8. Coded moderator approach for fast neutron source detection and localization at standoff

    Energy Technology Data Exchange (ETDEWEB)

    Littell, Jennifer [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Lukosi, Eric, E-mail: elukosi@utk.edu [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Institute for Nuclear Security, University of Tennessee, 1640 Cumberland Avenue, Knoxville, TN 37996 (United States); Hayward, Jason; Milburn, Robert; Rowan, Allen [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States)

    2015-06-01

    Considering the need for directional sensing at standoff for some security applications, and scenarios where a neutron source may be shielded by high-Z material that nearly eliminates the source gamma flux, this work focuses on investigating the feasibility of using thermal-neutron-sensitive boron straw detectors for fast neutron source detection and localization. We utilized MCNPX simulations to demonstrate that surrounding the boron straw detectors with an HDPE coded moderator produces a source-detector orientation-specific response that enables potential 1D source localization in a design with high neutron detection efficiency. An initial test algorithm was developed to confirm the viability of this detector system's localization capability; it identified a 1 MeV neutron source with a strength equivalent to 8 kg WGPu at 50 m standoff within ±11°.
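
    One plausible reading of such a localization algorithm (the paper's own algorithm is not specified in the abstract) is a maximum-likelihood match of the measured straw counts against a library of simulated direction-specific responses. The response library, straw count and totals below are random stand-ins, not MCNPX output:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    angles = np.arange(-90, 91, 5)                    # candidate source directions (degrees)
    library = rng.random((angles.size, 12)) + 0.1     # stand-in for a simulated response library (12 straws)
    true_idx = 20
    measured = rng.poisson(2000 * library[true_idx] / library[true_idx].sum())  # "measured" counts

    shapes = library / library.sum(axis=1, keepdims=True)   # normalised response shapes
    loglike = (measured * np.log(shapes)).sum(axis=1)       # multinomial log-likelihood (up to constants)
    best = np.argmax(loglike)
    print(f"estimated direction: {angles[best]} deg (true {angles[true_idx]} deg)")
    ```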

  9. Continuous-variable quantum key distribution with Gaussian source noise

    International Nuclear Information System (INIS)

    Shen Yujie; Peng Xiang; Yang Jian; Guo Hong

    2011-01-01

    Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.

  10. Uncertainties in source term calculations generated by the ORIGEN2 computer code for Hanford Production Reactors

    International Nuclear Information System (INIS)

    Heeb, C.M.

    1991-03-01

    The ORIGEN2 computer code is the primary calculational tool for computing isotopic source terms for the Hanford Environmental Dose Reconstruction (HEDR) Project. The ORIGEN2 code computes the amounts of radionuclides that are created or remain in spent nuclear fuel after neutron irradiation and radioactive decay have occurred as a result of nuclear reactor operation. ORIGEN2 was chosen as the primary code for these calculations because it is widely used and accepted by the nuclear industry, both in the United States and the rest of the world. Its comprehensive library of over 1,600 nuclides includes any possible isotope of interest to the HEDR Project. It is important to evaluate the uncertainties expected from use of ORIGEN2 in the HEDR Project because these uncertainties may have a pivotal impact on the final accuracy and credibility of the results of the project. There are three primary sources of uncertainty in an ORIGEN2 calculation: basic nuclear data uncertainty in neutron cross sections, radioactive decay constants, energy per fission, and fission product yields; calculational uncertainty due to input data; and code uncertainties (i.e., numerical approximations, and neutron spectrum-averaged cross-section values from the code library). 15 refs., 5 figs., 5 tabs

  11. Code of practice for the use of sealed radioactive sources in borehole logging (1998)

    International Nuclear Information System (INIS)

    1989-12-01

    The purpose of this code is to establish working practices, procedures and protective measures which will aid in keeping doses, arising from the use of borehole logging equipment containing sealed radioactive sources, as low as reasonably achievable, and to ensure that the dose-equivalent limits specified in the National Health and Medical Research Council's radiation protection standards are not exceeded. This code applies to all situations and practices where a sealed radioactive source or sources are used in wireline logging for investigating the physical properties of the geological sequence, or any fluids contained in the geological sequence, or the properties of the borehole itself, whether casing, mudcake or borehole fluids. The radiation protection standards specify dose-equivalent limits for two categories: radiation workers and members of the public. 3 refs., tabs., ills

  12. Development of unfolding method to obtain pin-wise source strength distribution from PWR spent fuel assembly measurement

    International Nuclear Information System (INIS)

    Sitompul, Yos Panagaman; Shin, Hee-Sung; Park, Se-Hwan; Oh, Jong Myeong; Seo, Hee; Kim, Ho Dong

    2013-01-01

    An unfolding method has been developed to obtain the pin-wise source strength distribution of a 14 × 14 pressurized water reactor (PWR) spent fuel assembly. Sixteen gamma dose rates, measured at the 16 control rod guide tubes of an assembly, are unfolded to the 179 pin-wise source strengths of the assembly. The method iteratively calculates and optimizes five coefficients of a quadratic fitting function for the X-Y source strength distribution. The pin-wise source strengths are obtained at the sixth iteration, with a maximum difference between two sequential iterations of about 0.2%. The relative distribution of pin-wise source strength from the unfolding is checked by comparison with the design code (Westinghouse APA code). The result shows that the relative distributions from the unfolding and the design code are consistent within a 5% difference. The absolute value of the pin-wise source strength is also checked by reproducing the dose rates at the measurement points. The result shows that the pin-wise source strengths from the unfolding reproduce the dose rates within a 2% difference. (author)
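
    A minimal sketch of the unfolding idea follows. It assumes the five coefficients parameterize S(x, y) = c0 + c1·x + c2·y + c3·x² + c4·y² (the paper's exact five terms are not given in the abstract) and replaces the iterative optimization with a direct linear least-squares fit; the geometry and response matrix are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    pins = np.array([(i, j) for i in range(14) for j in range(14)][:179], dtype=float)  # 179 pins
    tubes = rng.uniform(0.0, 13.0, size=(16, 2))        # 16 guide-tube measurement positions

    # assumed geometric response: dose per unit pin strength falls off with distance
    dist = np.linalg.norm(pins[None, :, :] - tubes[:, None, :], axis=2)
    R = 1.0 / (1.0 + dist ** 2)                         # shape (16, 179)

    def basis(xy):
        x, y = xy[:, 0], xy[:, 1]
        return np.stack([np.ones_like(x), x, y, x ** 2, y ** 2], axis=1)  # five quadratic terms

    c_true = np.array([1.0, 0.02, -0.01, -0.003, -0.002])
    dose = R @ (basis(pins) @ c_true)                   # synthetic "measured" dose rates

    A = R @ basis(pins)                                 # maps 5 coefficients -> 16 dose rates
    c_fit, *_ = np.linalg.lstsq(A, dose, rcond=None)
    S_pins = basis(pins) @ c_fit                        # unfolded pin-wise source strengths
    print("max relative coefficient error:", np.abs((c_fit - c_true) / c_true).max())
    ```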

  13. Verification of the network flow and transport/distributed velocity (NWFT/DVM) computer code

    International Nuclear Information System (INIS)

    Duda, L.E.

    1984-05-01

    The Network Flow and Transport/Distributed Velocity Method (NWFT/DVM) computer code was developed primarily to fulfill a need for a computationally efficient ground-water flow and contaminant transport capability for use in risk analyses where, quite frequently, large numbers of calculations are required. It is a semi-analytic, quasi-two-dimensional network code that simulates ground-water flow and the transport of dissolved species (radionuclides) in a saturated porous medium. The development of this code was carried out under a program funded by the US Nuclear Regulatory Commission (NRC) to develop a methodology for assessing the risk from disposal of radioactive wastes in deep geologic formations (FIN: A-1192 and A-1266). In support of the methodology development program, the NRC has funded a separate Maintenance of Computer Programs Project (FIN: A-1166) to ensure that the codes developed under A-1192 or A-1266 remain consistent with current operating systems, are as error-free as possible, and have up-to-date documentation for reference by the NRC staff. Part of this effort includes verification and validation tests to assure that a code correctly performs the operations specified and/or represents the processes or system for which it is intended. This document contains four verification problems for the NWFT/DVM computer code. Two of these problems are analytical verifications of NWFT/DVM where results are compared to analytical solutions. The other two are code-to-code verifications where results from NWFT/DVM are compared to those of another computer code. In all cases NWFT/DVM showed good agreement with both the analytical solutions and the results from the other code

  14. A distributed code for colour in natural scenes derived from centre-surround filtered cone signals

    Directory of Open Access Journals (Sweden)

    Christian Johannes Kellner

    2013-09-01

    In the retina of trichromatic primates, chromatic information is encoded in an opponent fashion and transmitted to the lateral geniculate nucleus (LGN) and visual cortex via parallel pathways. Chromatic selectivities of neurons in the LGN form two separate clusters, corresponding to two classes of cone opponency. In the visual cortex, however, the chromatic selectivities are more distributed, which is in accordance with a population code for colour. Previous studies of cone signals in natural scenes typically found opponent codes with chromatic selectivities corresponding to two directions in colour space. Here we investigated how the nonlinear spatio-chromatic filtering in the retina influences the encoding of colour signals. Cone signals were derived from hyperspectral images of natural scenes and pre-processed by centre-surround filtering and rectification, resulting in parallel ON and OFF channels. Independent Component Analysis on these signals yielded a highly sparse code with basis functions that showed spatio-chromatic selectivities. In contrast to previous analyses of linear transformations of cone signals, chromatic selectivities were not restricted to two main chromatic axes, but were more continuously distributed in colour space, similar to the population code of colour in the early visual cortex. Our results indicate that spatio-chromatic processing in the retina leads to a more distributed and more efficient code for natural scenes.
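
    The processing chain described above can be sketched in a few lines. Random images stand in for the hyperspectral cone signals, and the difference-of-Gaussians scales, per-pixel sampling and component count are assumptions for illustration:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(1)
    cones = rng.random((3, 64, 64))            # toy stand-ins for L, M, S cone images

    # centre-surround filtering: difference of Gaussians per cone channel (scales assumed)
    cs = np.array([gaussian_filter(c, 1.0) - gaussian_filter(c, 3.0) for c in cones])
    on, off = np.maximum(cs, 0.0), np.maximum(-cs, 0.0)   # rectified parallel ON / OFF channels

    # one 6-dimensional sample per pixel: (L, M, S) x (ON, OFF)
    X = np.concatenate([on.reshape(3, -1), off.reshape(3, -1)]).T

    ica = FastICA(n_components=4, random_state=0, max_iter=1000)
    sources = ica.fit_transform(X)             # sparse component activations
    print("chromatic selectivities (mixing matrix):\n", ica.mixing_.round(3))
    ```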

  15. Effects of physics change in Monte Carlo code on electron pencil beam dose distributions

    International Nuclear Information System (INIS)

    Toutaoui, Abdelkader; Khelassi-Toutaoui, Nadia; Brahimi, Zakia; Chami, Ahmed Chafik

    2012-01-01

    Pencil beam algorithms used in computerized electron beam dose planning are usually described using small-angle multiple scattering theory. Alternatively, the pencil beams can be generated by Monte Carlo simulation of electron transport. In a previous work, the 4th version of the Electron Gamma Shower (EGS) Monte Carlo code was used to obtain dose distributions from monoenergetic electron pencil beams, with incident energies between 1 MeV and 50 MeV, interacting at the surface of a large cylindrical homogeneous water phantom. In 2000, a new version of this Monte Carlo code was made available by the National Research Council of Canada (NRC), which includes various improvements in its electron-transport algorithms. In the present work, we investigated whether the new physics in this version produces pencil beam dose distributions very different from those calculated with the older one; the purpose of this study is to quantify as well as to understand these differences. We have compared a series of pencil beam dose distributions scored in cylindrical geometry, for electron energies between 1 MeV and 50 MeV, calculated with the two versions of the Electron Gamma Shower Monte Carlo code. Data calculated and compared include isodose distributions, radial dose distributions and fractions of energy deposition. Our results for radial dose distributions show agreement within 10% between doses calculated by the two codes for voxels close to the pencil beam central axis, while the differences are up to 30% at larger distances. For fractions of energy deposition, the results of EGS4 are in good agreement (within 2%) with those calculated by EGSnrc at shallow depths for all energies, whereas a slightly worse agreement (15%) is observed at greater depths. These differences may be attributed mainly to the different multiple scattering models for electron transport adopted in these two codes and to the inclusion of the spin effect, which produces an increase of the effective range of the electrons.

  16. Experimental benchmark of the NINJA code for application to the Linac4 H- ion source plasma

    Science.gov (United States)

    Briefi, S.; Mattei, S.; Rauner, D.; Lettry, J.; Tran, M. Q.; Fantz, U.

    2017-10-01

    For a dedicated performance optimization of negative hydrogen ion sources applied at particle accelerators, a detailed assessment of the plasma processes is required. Due to the compact design of these sources, diagnostic access is typically limited to optical emission spectroscopy, which yields only line-of-sight integrated results. In order to allow for a spatially resolved investigation, the electromagnetic particle-in-cell Monte Carlo collision code NINJA has been developed for the Linac4 ion source at CERN. This code considers the RF field generated by the ICP coil as well as the external static magnetic fields, and calculates self-consistently the resulting discharge properties. NINJA is benchmarked against the diagnostically well-accessible lab experiment CHARLIE (Concept studies for Helicon Assisted RF Low pressure Ion sourcEs) at varying RF power and gas pressure. A good general agreement is observed between experiment and simulation, although the simulated electron density trends for varying pressure and power, as well as the absolute electron temperature values, deviate slightly from the measured ones. This can be explained by the assumption of strong inductive coupling in NINJA, whereas the CHARLIE discharges show the characteristics of loosely coupled plasmas. For the Linac4 plasma, this assumption is valid; accordingly, both the absolute values of the accessible plasma parameters and their trends for varying RF power agree well in measurement and simulation. At varying RF power, the H- current extracted from the Linac4 source peaks at 40 kW. For volume operation, this is perfectly reflected by assessing the processes in front of the extraction aperture based on the simulation results, where the highest H- density is obtained for the same power level. In surface operation, the production of negative hydrogen ions at the converter surface can only be considered by specialized beam formation codes, which require plasma parameters as input. It has been demonstrated that

  17. Implementation and Performance Evaluation of Distributed Cloud Storage Solutions using Random Linear Network Coding

    DEFF Research Database (Denmark)

    Fitzek, Frank; Toth, Tamas; Szabados, Áron

    2014-01-01

    This paper advocates the use of random linear network coding for storage in distributed clouds in order to reduce storage and traffic costs in dynamic settings, i.e. when adding and removing numerous storage devices/clouds on-the-fly and when the number of reachable clouds is limited. We introduce various network coding approaches that trade off reliability, storage and traffic costs, and system complexity, relying on probabilistic recoding for cloud regeneration. We compare these approaches with other approaches based on data replication and Reed-Solomon codes. A simulator has been developed to carry out a thorough performance evaluation of the various approaches when relying on different system settings, e.g., finite fields, and network/storage conditions, e.g., storage space used per cloud, limited network use, and limited recoding capabilities. In contrast to standard coding approaches, our...
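
    As a toy illustration of the underlying mechanism (not the paper's simulator), the sketch below stores random linear combinations of file blocks over GF(2) and decodes from any set of independent fragments. Real systems typically work over GF(2^8); GF(2) keeps the arithmetic to XOR:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    k, block_len, n_nodes = 4, 8, 10
    blocks = rng.integers(0, 2, size=(k, block_len), dtype=np.uint8)   # original file blocks (bits)

    coeffs = rng.integers(0, 2, size=(n_nodes, k), dtype=np.uint8)     # random coding vectors
    fragments = (coeffs @ blocks) % 2                                  # coded fragments on the nodes

    def gf2_eliminate(M):
        """Row-reduce M over GF(2); returns the reduced matrix and its rank."""
        M = M.copy()
        r = 0
        for c in range(M.shape[1]):
            piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
            if piv is None:
                continue
            M[[r, piv]] = M[[piv, r]]
            for i in range(M.shape[0]):
                if i != r and M[i, c]:
                    M[i] ^= M[r]
            r += 1
        return M, r

    # a downloader collects fragments until it holds k independent coding vectors
    # (with 10 random vectors this succeeds with high probability)
    chosen = []
    for node in range(n_nodes):
        if gf2_eliminate(coeffs[chosen + [node]])[1] > len(chosen):
            chosen.append(node)
        if len(chosen) == k:
            break

    # solve coeffs[chosen] @ X = fragments[chosen] over GF(2) by joint elimination
    aug, _ = gf2_eliminate(np.hstack([coeffs[chosen], fragments[chosen]]))
    recovered = aug[:, k:]
    print("decoded correctly:", np.array_equal(recovered, blocks))
    ```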

  18. A Sample Calculation of Tritium Production and Distribution at VHTR by using TRITGO Code

    International Nuclear Information System (INIS)

    Park, Ik Kyu; Kim, D. H.; Lee, W. J.

    2007-03-01

    The TRITGO code was developed for estimating the tritium production and distribution in high-temperature gas-cooled reactors (HTGRs), especially the GTMHR350 by General Atomics. In this study, the tritium production and distribution of the NHDD were analyzed using the TRITGO code. The TRITGO code was improved with a simple method to calculate the tritium amount in the IS loop. The improved TRITGO input for the sample calculation was prepared based on the GTMHR600, because the NHDD has been designed with reference to the GTMHR600. The GTMHR350 input related to the tritium distribution was used directly. The calculated tritium activity in the hydrogen produced in the IS loop is 0.56 Bq/g-H2. This is a very satisfying result, considering that the tritium activity limit of the Japanese regulatory guide is 5.6 Bq/g-H2. The basic system for analyzing tritium production and distribution using TRITGO was successfully constructed. However, there remain uncertainties in the tritium distribution models and in the suggested method for the IS loop, and the current input was prepared not for the NHDD but for the GTMHR600. A qualitative analysis of the distribution and IS-loop models and a quantitative analysis of the input should be done in the future.

  19. A Sample Calculation of Tritium Production and Distribution at VHTR by using TRITGO Code

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ik Kyu; Kim, D. H.; Lee, W. J

    2007-03-15

    The TRITGO code was developed for estimating the tritium production and distribution in high-temperature gas-cooled reactors (HTGRs), especially the GTMHR350 by General Atomics. In this study, the tritium production and distribution of the NHDD were analyzed using the TRITGO code. The TRITGO code was improved with a simple method to calculate the tritium amount in the IS loop. The improved TRITGO input for the sample calculation was prepared based on the GTMHR600, because the NHDD has been designed with reference to the GTMHR600. The GTMHR350 input related to the tritium distribution was used directly. The calculated tritium activity in the hydrogen produced in the IS loop is 0.56 Bq/g-H2. This is a very satisfying result, considering that the tritium activity limit of the Japanese regulatory guide is 5.6 Bq/g-H2. The basic system for analyzing tritium production and distribution using TRITGO was successfully constructed. However, there remain uncertainties in the tritium distribution models and in the suggested method for the IS loop, and the current input was prepared not for the NHDD but for the GTMHR600. A qualitative analysis of the distribution and IS-loop models and a quantitative analysis of the input should be done in the future.

  20. On the effectiveness of recoding-based repair in network coded distributed storage

    DEFF Research Database (Denmark)

    Sipos, Marton A.; Braun, Patrik J.; Roetter, Daniel Enrique Lucani

    2017-01-01

    High-capacity storage systems distribute files across several storage devices (nodes) and apply an erasure code to meet availability and reliability requirements. Since devices can lose network connectivity or fail permanently, a dynamic repair mechanism must be put in place. In such cases a new r...

  1. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    Science.gov (United States)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data-vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
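
    In modern terms a replicator network is an autoencoder. The sketch below trains a three-hidden-layer MLP to reproduce its input, with the one-unit bottleneck playing the role of the natural coordinate; the architecture sizes and the toy circle data are assumptions for illustration:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    theta = rng.uniform(0.0, 2.0 * np.pi, 2000)
    # data near a 1-D manifold (a noisy circle) embedded in 2-D
    X = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.01 * rng.normal(size=(2000, 2))

    net = MLPRegressor(hidden_layer_sizes=(16, 1, 16),  # three hidden layers, 1-D bottleneck
                       activation='tanh', max_iter=5000, random_state=0)
    net.fit(X, X)                                       # replicate the input at the output
    print("mean squared reconstruction error:", np.mean((net.predict(X) - X) ** 2))
    ```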

  2. Model of charge-state distributions for electron cyclotron resonance ion source plasmas

    Directory of Open Access Journals (Sweden)

    D. H. Edgell

    1999-12-01

    A computer model for the ion charge-state distribution (CSD) in an electron cyclotron resonance ion source (ECRIS) plasma is presented that incorporates non-Maxwellian distribution functions, multiple atomic species, and ion confinement due to the ambipolar potential well that arises from confinement of the electron cyclotron resonance (ECR) heated electrons. Atomic processes incorporated into the model include multiple ionization and multiple charge exchange, with rate coefficients calculated for non-Maxwellian electron distributions. The electron distribution function is calculated using a Fokker-Planck code with an ECR heating term. This eliminates the electron temperature as an arbitrary user input. The model produces results that are a good match to CSD data from the ANL-ECRII ECRIS. Extending the model to 1D in the axial direction will also allow it to determine the plasma and electrostatic potential profiles, further eliminating arbitrary user input.

  3. Minimum-phase distribution of cosmic source brightness

    International Nuclear Information System (INIS)

    Gal'chenko, A.A.; Malov, I.F.; Mogil'nitskaya, L.F.; Frolov, V.A.

    1984-01-01

    Minimum-phase distributions of brightness (profiles) for the cosmic radio sources 3C 144 (at wavelength lambda = 21 cm), 3C 338 (lambda = 3.5 m), and 3C 353 (lambda = 31.3 cm and 3.5 m) are obtained. A real possibility of recovering the profile from fragments of the modulus of its Fourier image is shown

  4. Geometric effects in alpha particle detection from distributed air sources

    International Nuclear Information System (INIS)

    Gil, L.R.; Leitao, R.M.S.; Marques, A.; Rivera, A.

    1994-08-01

    Geometric effects associated with the detection of alpha particles from distributed air sources, as occurs in radon and thoron measurements, are revisited. The volume outside which no alpha particle may reach the entrance window of the detector is defined and determined analytically for rectangular and cylindrical symmetry geometries. (author). 3 figs.

  5. Spatial distribution of saline water and possible sources of intrusion ...

    African Journals Online (AJOL)

    The spatial distribution of saline water, possible sources of intrusion into the Lekki lagoon, and the transitional effects on the lacustrine ichthyofaunal characteristics were studied between March 2006 and February 2008. The water quality analysis indicated that salinity has increased drastically in the lagoon recently (0.007 to ...

  6. Imaging x-ray sources at a finite distance in coded-mask instruments

    International Nuclear Information System (INIS)

    Donnarumma, Immacolata; Pacciani, Luigi; Lapshov, Igor; Evangelista, Yuri

    2008-01-01

    We present a method for the correction of beam divergence in finite distance sources imaging through coded-mask instruments. We discuss the defocusing artifacts induced by the finite distance showing two different approaches to remove such spurious effects. We applied our method to one-dimensional (1D) coded-mask systems, although it is also applicable in two-dimensional systems. We provide a detailed mathematical description of the adopted method and of the systematics introduced in the reconstructed image (e.g., the fraction of source flux collected in the reconstructed peak counts). The accuracy of this method was tested by simulating pointlike and extended sources at a finite distance with the instrumental setup of the SuperAGILE experiment, the 1D coded-mask x-ray imager onboard the AGILE (Astro-rivelatore Gamma a Immagini Leggero) mission. We obtained reconstructed images of good quality and high source location accuracy. Finally we show the results obtained by applying this method to real data collected during the calibration campaign of SuperAGILE. Our method was demonstrated to be a powerful tool to investigate the imaging response of the experiment, particularly the absorption due to the materials intercepting the line of sight of the instrument and the conversion between detector pixel and sky direction
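
    The basic reconstruction step that the divergence correction above refines can be sketched as follows: the detector records the sky convolved with the mask pattern, and cross-correlating with a balanced decoding array recovers the source positions. The pseudo-random mask and two-source sky below are toy stand-ins for SuperAGILE's actual pattern:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 64
    mask = rng.integers(0, 2, n).astype(float)          # open (1) / closed (0) mask elements
    sky = np.zeros(n); sky[20], sky[45] = 100.0, 60.0   # two point sources

    # detector counts: circular convolution of the sky with the mask, plus Poisson noise
    lam = np.maximum(np.real(np.fft.ifft(np.fft.fft(sky) * np.fft.fft(mask))), 0.0)
    detector = rng.poisson(lam).astype(float)

    decoder = 2.0 * mask - 1.0                          # balanced decoding array (+1 open, -1 closed)
    image = np.real(np.fft.ifft(np.fft.fft(detector) * np.conj(np.fft.fft(decoder))))
    print("brightest reconstructed bins (should sit near 20 and 45):",
          np.sort(np.argsort(image)[-2:]))
    ```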

  7. Galactic distribution of X-ray burst sources

    International Nuclear Information System (INIS)

    Lewin, W.H.G.; Hoffman, J.A.; Doty, J.; Clark, G.W.; Swank, J.H.; Becker, R.H.; Pravdo, S.H.; Serlemitsos, P.J.

    1977-01-01

    It is stated that 18 X-ray burst sources have been observed to date, applying the following definition for these bursts: rise times of less than a few seconds, durations of seconds to minutes, and recurrence in some regular pattern. If single burst events that meet the criteria of rise time and duration, but not recurrence, are included, an additional seven sources can be added. A sky map is shown indicating their positions. The sources are spread along the galactic equator and cluster near low galactic longitudes, and their distribution is different from that of the observed globular clusters. Observations based on the SAS-3 X-ray observatory studies and the Goddard X-ray Spectroscopy Experiment on OSO-8 are described. The distribution of the sources is examined and the effect of uneven sky exposure on the observed distribution is evaluated. It has been suggested that the bursts are perhaps produced by remnants of disrupted globular clusters and specifically supermassive black holes. This would imply the existence of a new class of unknown objects, and at present is merely an ad hoc method of relating the burst sources to globular clusters. (U.K.)

  8. A plug-in to Eclipse for VHDL source codes: functionalities

    Science.gov (United States)

    Niton, B.; Poźniak, K. T.; Romaniuk, R. S.

    The paper presents an original application, written by the authors, which supports the writing and editing of source code in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGAs and/or DSP processors. An implementation is described, based on VEditor. VEditor is a free-license program; the work presented in this paper thus supplements and extends this free-license tool. The introduction briefly characterizes the tools available on the market that aid the design process of electronic systems in VHDL. Particular attention is paid to plug-ins for the Eclipse environment and the Emacs program. Detailed properties of the plug-in are presented, such as the programming-extension concept and the results of the formatter, refactorer, code hider, and other new additions to the VEditor program.

  9. Deformation due to distributed sources in micropolar thermodiffusive medium

    Directory of Open Access Journals (Sweden)

    Sachin Kaushal

    2010-10-01

    The general solution to the field equations in a micropolar generalized thermodiffusive medium in the context of G-L theory is investigated by applying Laplace and Fourier transforms for various sources. An application of distributed normal forces, thermal sources, or potential sources is considered to show the utility of the problem. To obtain the solution in physical form, a numerical inversion technique is applied. The transformed components of stress, temperature distribution and chemical potential for G-L theory and CT theory are depicted graphically, and the results are compared analytically to show the impact of diffusion, relaxation times and micropolarity on these quantities. Some special cases of interest are also deduced from the present investigation.

  10. Effect of tissue inhomogeneity on dose distribution of point sources of low-energy electrons

    International Nuclear Information System (INIS)

    Kwok, C.S.; Bialobzyski, P.J.; Yu, S.K.; Prestwich, W.V.

    1990-01-01

    Perturbation in the dose distributions of point sources of low-energy electrons at planar interfaces of cortical bone (CB) and red marrow (RM) was investigated experimentally and with the Monte Carlo codes EGS and the TIGER series. Ultrathin LiF thermoluminescent dosimeters were used to measure the dose distributions of point sources of 204Tl and 147Pm in RM. When the point sources were at 12 mg/cm2 from a planar interface of CB- and RM-equivalent plastics, dose enhancement ratios in RM averaged over the region 0-12 mg/cm2 from the interface were measured to be 1.08±0.03 (SE) and 1.03±0.03 (SE) for 204Tl and 147Pm, respectively. The Monte Carlo codes predicted 1.05±0.02 and 1.01±0.02 for the two nuclides, respectively. However, EGS gave a consistently 3% higher dose in the dose-scoring region than the TIGER series when point sources of monoenergetic electrons up to 0.75 MeV energy were considered in the homogeneous RM situation or in the CB and RM heterogeneous situation. By means of the TIGER series, it was demonstrated that aluminum, which is normally assumed to be equivalent to CB in radiation dosimetry, leads to an overestimation of the backscattering of low-energy electrons in soft tissue at a CB-soft-tissue interface by as much as a factor of 2

  11. A Linear Algebra Framework for Static High Performance Fortran Code Distribution

    Directory of Open Access Journals (Sweden)

    Corinne Ancourt

    1997-01-01

    High Performance Fortran (HPF) was developed to support data-parallel programming for single-instruction multiple-data (SIMD) and multiple-instruction multiple-data (MIMD) machines with distributed memory. The programmer is provided a familiar uniform logical address space and specifies the data distribution by directives. The compiler then exploits these directives to allocate arrays in the local memories, to assign computations to elementary processors, and to migrate data between processors when required. We show here that linear algebra is a powerful framework to encode HPF directives and to synthesize distributed code with space-efficient array allocation, tight loop bounds, and vectorized communications for INDEPENDENT loops. The generated code includes traditional optimizations such as guard elimination, message vectorization and aggregation, and overlap analysis. The systematic use of an affine framework makes it possible to prove the compilation scheme correct.

  12. MOCARS: a Monte Carlo code for determining the distribution and simulation limits

    International Nuclear Information System (INIS)

    Matthews, S.D.

    1977-07-01

    MOCARS is a computer program designed for the INEL CDC 76-173 operating system to determine the distribution and simulation limits of a function by Monte Carlo techniques. The code randomly samples data from any of 12 user-specified distributions and then either evaluates the cut-set system unavailability or a user-specified function with the sample data. After the data are ordered, the values at various quantiles and the associated confidence bounds are calculated for output. Also available for output on microfilm are the frequency and cumulative distribution histograms from the sample data. 29 figures, 4 tables
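
    A condensed modern analogue of this workflow, with an illustrative two-event cut set and bootstrap confidence bounds standing in for MOCARS's specific methods:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 20000
    p1 = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n)  # sampled basic-event probabilities
    p2 = rng.lognormal(mean=np.log(5e-4), sigma=0.7, size=n)
    q = 1.0 - (1.0 - p1) * (1.0 - p2)       # unavailability of a two-event OR cut set

    for p in (0.05, 0.50, 0.95):
        boot = np.quantile(rng.choice(q, size=(200, n)), p, axis=1)  # bootstrap resamples
        print(f"q{p:.2f} = {np.quantile(q, p):.3e} "
              f"(90% CI {np.quantile(boot, 0.05):.3e} .. {np.quantile(boot, 0.95):.3e})")
    ```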

  13. Beyond the Business Model: Incentives for Organizations to Publish Software Source Code

    Science.gov (United States)

    Lindman, Juho; Juutilainen, Juha-Pekka; Rossi, Matti

    The software stack opened under Open Source Software (OSS) licenses is growing rapidly. Commercial actors have released considerable amounts of previously proprietary source code. These actions beg the question of why companies choose a strategy based on giving away software assets. Research on the outbound OSS approach has tried to answer this question with the concept of the “OSS business model”. When studying the reasons for code release, we have observed that the business model concept is too generic to capture the many incentives organizations have. In this paper we therefore investigate empirically what the companies' incentives are, by means of an exploratory case study of three organizations in different stages of their code release. Our results indicate that the companies aim to promote standardization, obtain development resources, gain cost savings, improve the quality of software, increase the trustworthiness of software, or steer OSS communities. We conclude that future research on outbound OSS could benefit from focusing on the heterogeneous incentives for code release rather than on revenue models.

  14. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    Science.gov (United States)

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  15. Dose distribution and dosimetry parameters calculation of MED3633 Palladium-103 source in water phantom using MCNP

    International Nuclear Information System (INIS)

    Mowlavi, A. A.; Binesh, A.; Moslehitabar, H.

    2006-01-01

    Palladium-103 (103Pd) is a brachytherapy source for cancer treatment. Monte Carlo codes are usually applied to calculate dose distributions and shielding effects. A Monte Carlo calculation of the dose distribution in a water phantom due to a MED3633 103Pd source is presented in this work. Materials and Methods: The dose distribution around the 103Pd Model MED3633 source, located in the center of a 30 × 30 × 30 cm3 water phantom cube, was calculated with the MCNP code by the Monte Carlo method. The percentage depth dose variation along different axes parallel and perpendicular to the source was also calculated. Then, the isodose curves for 100%, 75%, 50% and 25% percentage depth dose and the dosimetry parameters of the TG-43 protocol were determined. Results: The results show that the Monte Carlo method can accurately calculate dose deposition in the high-gradient region near the source. The isodose curves and dosimetric characteristics obtained for the MED3633 103Pd source are in good agreement with published results. Conclusion: The isodose curves of the MED3633 103Pd source have been derived from dose calculations with the MCNP code. The calculated dosimetry parameters for the source agree quite well with Monte Carlo calculated and experimentally measured values
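
    For reference, the TG-43 dosimetry parameters mentioned above enter the standard dose-rate equation (this formalism is standard and is quoted here for context, not taken from the paper):

    ```latex
    % TG-43 line-source dose-rate equation:
    \dot{D}(r,\theta) = S_K \, \Lambda \,
      \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)} \, g_L(r) \, F(r,\theta)
    % S_K: air-kerma strength;  \Lambda: dose-rate constant;
    % G_L: line-source geometry function;  g_L: radial dose function;
    % F: 2D anisotropy function;  (r_0,\theta_0) = (1 cm, 90 deg): reference point.
    ```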

  16. Survey of source code metrics for evaluating testability of object oriented systems

    OpenAIRE

    Shaheen , Muhammad Rabee; Du Bousquet , Lydie

    2010-01-01

    Software testing is costly in terms of time and funds. Testability is a software characteristic that aims at producing systems easy to test. Several metrics have been proposed to identify the testability weaknesses. But it is sometimes difficult to be convinced that those metrics are really related with testability. This article is a critical survey of the source-code based metrics proposed in the literature for object-oriented software testability. It underlines the necessity to provide test...

  17. Sources and distribution of anthropogenic radionuclides in different marine environments

    International Nuclear Information System (INIS)

    Holm, E.

    1997-01-01

    The knowledge of the distribution in time and space of radiologically important radionuclides from different sources in different marine environments is important for assessing the dose commitment following controlled or accidental releases and for detecting possible new sources. Present sources, from nuclear explosion tests, releases from nuclear facilities and the Chernobyl accident, provide a tool for such studies. The different sources can be distinguished by their different isotopic and radionuclide compositions. Results show that radiocaesium behaves rather conservatively in the south and north Atlantic, while plutonium has a residence time of about 8 years. On the other hand, enhanced concentrations of plutonium are found in surface waters in arctic regions, where vertical mixing is small and ice formation plays an important role. Significantly increased concentrations of plutonium are also found below the oxic layer in anoxic basins due to geochemical concentration. (author)

  18. NEACRP comparison of source term codes for the radiation protection assessment of transportation packages

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Locke, H.F.; Avery, A.F.

    1994-01-01

    The results for Problems 5 and 6 of the NEACRP code comparison as submitted by six participating countries are presented in summary. These problems concentrate on the prediction of the neutron and gamma-ray sources arising in fuel after a specified irradiation, the fuel being uranium oxide for problem 5 and a mixture of uranium and plutonium oxides for problem 6. In both problems the predicted neutron sources are in good agreement for all participants. For gamma rays, however, there are differences, largely due to the omission of bremsstrahlung in some calculations

  19. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly; Pettersson, Gustav M.; Kostina, Victoria; Hassibi, Babak

    2017-01-01

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.
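
    A toy numeric sketch of the 1:2 analog mapping idea follows. The spiral pitch, the power normalisation against a repetition baseline, and the grid-search decoder are illustrative assumptions, not the paper's exact construction:

    ```python
    import numpy as np

    delta = 0.1                                   # spiral pitch (assumed)
    gain = np.sqrt(2.0) * np.pi                   # matches average power to the repetition baseline

    def encode(u):
        phi = abs(u) / delta                      # arc angle grows with |u|
        r = gain * abs(u) / np.pi                 # radius grows linearly -> Archimedean spiral
        return np.sign(u) * r * np.array([np.cos(phi), np.sin(phi)])  # two arms via reflection

    grid = np.linspace(-3.0, 3.0, 4001)
    points = np.array([encode(u) for u in grid])  # candidate spiral points for the decoder

    def decode(y):
        return grid[np.argmin(((points - y) ** 2).sum(axis=1))]   # ML decoding under AWGN

    rng = np.random.default_rng(6)
    u = rng.normal(size=500)                      # Gaussian source samples
    sigma = 0.01                                  # channel noise standard deviation
    y = np.array([encode(ui) for ui in u]) + sigma * rng.normal(size=(500, 2))
    u_hat = np.array([decode(yi) for yi in y])

    y_rep = np.column_stack([u, u]) + sigma * rng.normal(size=(500, 2))  # repetition baseline
    print("MSE, spiral code:", np.mean((u - u_hat) ** 2))
    print("MSE, repetition :", np.mean((u - y_rep.mean(axis=1)) ** 2))
    ```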

  20. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly

    2017-01-05

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.

  1. Source-term model for the SYVAC3-NSURE performance assessment code

    International Nuclear Information System (INIS)

    Rowat, J.H.; Rattan, D.S.; Dolinar, G.M.

    1996-11-01

    Radionuclide contaminants in wastes emplaced in disposal facilities will not remain in those facilities indefinitely. Engineered barriers will eventually degrade, allowing radioactivity to escape from the vault. The radionuclide release rate from a low-level radioactive waste (LLRW) disposal facility, the source term, is a key component in the performance assessment of the disposal system. This report describes the source-term model that has been implemented in Ver. 1.03 of the SYVAC3-NSURE (Systems Variability Analysis Code generation 3-Near Surface Repository) code. NSURE is a performance assessment code that evaluates the impact of near-surface disposal of LLRW through the groundwater pathway. The source-term model described here was developed for the Intrusion Resistant Underground Structure (IRUS) disposal facility, which is a vault that is to be located in the unsaturated overburden at AECL's Chalk River Laboratories. The processes included in the vault model are roof and waste package performance, and diffusion, advection and sorption of radionuclides in the vault backfill. The model presented here was developed for the IRUS vault; however, it is applicable to other near-surface disposal facilities. (author). 40 refs., 6 figs

  2. A Heuristic Approach to Distributed Generation Source Allocation for Electrical Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    M. Sharma

    2010-12-01

    The recent trends in electrical power distribution system operation and management are aimed at improving system conditions in order to render good service to the customer. The reforms in the distribution sector have given major scope for the employment of distributed generation (DG) resources, which can boost system performance. This paper proposes a heuristic technique for the allocation of a distributed generation source in a distribution system. The allocation is determined based on the overall improvement in network performance parameters, such as reduction in system losses, improvement in voltage stability, and improvement in voltage profile. The proposed Network Performance Enhancement Index (NPEI), along with the heuristic rules, facilitates determination of the feasible location and corresponding capacity of the DG source. The developed approach is tested with different test systems to ascertain its effectiveness.
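
    Since the abstract does not define the NPEI, the sketch below shows one hypothetical composite index of this kind: candidate buses are scored by weighted improvements in losses and minimum voltage. All numbers, bus labels and weights are invented for illustration:

    ```python
    import numpy as np

    base_loss, base_vmin = 120.0, 0.92            # assumed base-case loss (kW) and min voltage (p.u.)
    buses = np.array([5, 9, 12, 17])              # candidate DG buses (assumed)
    loss = np.array([95.0, 88.0, 102.0, 91.0])    # losses with a trial DG at each bus (assumed)
    vmin = np.array([0.95, 0.97, 0.94, 0.96])     # minimum voltages with the trial DG (assumed)

    w_loss, w_volt = 0.6, 0.4                     # assumed weights
    index = (w_loss * (base_loss - loss) / base_loss
             + w_volt * (vmin - base_vmin) / base_vmin)
    best = np.argmax(index)
    print(f"best DG bus: {buses[best]} (index {index[best]:.3f})")
    ```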

  3. A New Quantum Key Distribution Scheme Based on Frequency and Time Coding

    International Nuclear Information System (INIS)

    Chang-Hua, Zhu; Chang-Xing, Pei; Dong-Xiao, Quan; Jing-Liang, Gao; Nan, Chen; Yun-Hui, Yi

    2010-01-01

    A new scheme of quantum key distribution (QKD) using frequency and time coding is proposed, in which the security is based on the frequency-time uncertainty relation. In this scheme, the binary information sequence is encoded randomly on either the central frequency or the time delay of the optical pulse at the sender. The central frequency of the single-photon pulse is set to ω1 for bit 0 and to ω2 for bit 1 when frequency coding is selected, whereas the single-photon pulse is not delayed for bit 0 and is delayed by τ for bit 1 when time coding is selected. At the receiver, either the frequency or the time delay of the pulse is measured randomly, and the final key is obtained after basis comparison, data reconciliation and privacy amplification. With the proposed method, the effect of noise in the fiber channel and the environment on the QKD system can be reduced effectively
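
    The random encoding/measurement choice and basis comparison described above amount to the following sifting step; this toy simulation covers only that classical bookkeeping, not the quantum optics:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n = 20
    bits = rng.integers(0, 2, n)                      # sender's raw key bits
    send_basis = rng.choice(['freq', 'time'], n)      # random encoding choice per pulse
    recv_basis = rng.choice(['freq', 'time'], n)      # random measurement choice per pulse

    sifted = bits[send_basis == recv_basis]           # keep only bits where the choices match
    print(f"kept {sifted.size} of {n} bits after sifting:", sifted)
    ```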

  4. SPIDERMAN: an open-source code to model phase curves and secondary eclipses

    Science.gov (United States)

    Louden, Tom; Kreidberg, Laura

    2018-03-01

    We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly the area of sections of the disc of the planet that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimised to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for the dataset. As a test case we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model we find that the best fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.

  5. SPIDERMAN: an open-source code to model phase curves and secondary eclipses

    Science.gov (United States)

    Louden, Tom; Kreidberg, Laura

    2018-06-01

    We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly the area of sections of the disc of the planet that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimized to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for the data set. As a test case, we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two-dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model, we find that the best-fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.
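
    The core geometric idea (integrating the observer-facing hemisphere of a 2D brightness map at each orbital phase) can be sketched without the SPIDERMAN API; the brightness map and phase convention below are toy assumptions:

    ```python
    import numpy as np

    lon = np.linspace(-np.pi, np.pi, 180)            # planetary longitude grid
    lat = np.linspace(-np.pi / 2, np.pi / 2, 90)     # planetary latitude grid
    LON, LAT = np.meshgrid(lon, lat)
    bright = 1.0 + 0.8 * np.cos(LON) * np.cos(LAT)   # toy day-night brightness map

    def phase_curve(phases):
        flux = []
        for p in phases:
            mu = np.cos(LAT) * np.cos(LON - p)       # cosine of surface-element/observer angle
            flux.append(np.sum(bright * np.maximum(mu, 0.0) * np.cos(LAT)))  # cos(lat): area weight
        return np.array(flux)

    f = phase_curve(np.array([0.0, np.pi]))          # dayside-facing vs nightside-facing
    print("night-to-day flux ratio:", (f[1] / f[0]).round(3))
    ```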

  6. Assessment of ocular beta radiation dose distribution due to 106Ru/106Rh brachytherapy applicators using MCNPX Monte Carlo code

    Directory of Open Access Journals (Sweden)

    Nilseia Aparecida Barbosa

    2014-08-01

    Purpose: Melanoma of the choroid region is the most common primary cancer affecting the eye in adult patients. Concave ophthalmic applicators with 106Ru/106Rh beta sources are the most commonly used for the treatment of these eye lesions, mainly lesions of small and medium dimensions. The available treatment planning system for 106Ru applicators is based on dose distributions in a homogeneous water-sphere eye model, resulting in a lack of data in the literature on dose distributions in the radiosensitive structures of the eye, information that may be crucial for improving the treatment planning process while aiming to maintain visual acuity. Methods: The Monte Carlo code MCNPX was used to calculate the dose distribution in a complete mathematical model of the human eye containing a choroid melanoma, considering the actual dimensions of the eye and its various component structures, due to an ophthalmic brachytherapy treatment using 106Ru/106Rh beta-ray sources. Two possibilities were analyzed: a simple water eye and a heterogeneous eye considering all its structures. Two concave applicators, CCA and CCB manufactured by BEBIG, and a complete mathematical model of the human eye were modeled using the MCNPX code. Results and Conclusion: For both eye models, namely the water model and the heterogeneous model, the mean dose values simulated for the same eye regions are, in general, very similar, except for regions very distant from the applicator, where mean dose values are very low, uncertainties are higher, and relative differences may reach 20.4%. For the tumor base and the eye structures closest to the applicator, such as the sclera, choroid and retina, the maximum difference observed was 4%, with the heterogeneous model presenting higher mean dose values. For the other eye regions, higher doses were obtained with the homogeneous water eye model. Mean dose distributions determined for the homogeneous water eye model are similar to those obtained for the heterogeneous model.

  7. Determining profile of dose distribution for PD-103 brachytherapy source

    International Nuclear Information System (INIS)

    Berkay, Camgoz; Mehmet, N. Kumru; Gultekin, Yegin

    2006-01-01

    Brachytherapy is a particular form of radiotherapy for cancer treatment, in which treatment proceeds by destroying cancerous cells with radiation. The general concept of the treatment is to place the radioactive source in the cancerous area of the affected tissue. Since living tissue is involved, experimental study is hazardous, so brachytherapy sources are generally studied theoretically using computer simulation. In computational studies, the Monte Carlo mathematical method, based in principle on random number generation, is used. The palladium radioisotope is an LDR (low dose rate) source. The radioactive material is encapsulated in a titanium cylinder of 3 mm length and 0.25 mm radius, and there are two parts of Pd-103 inside the titanium cylinder. It is impossible to investigate the differential effects of the two parts experimentally, because the source dimensions are small compared with the measurement distances; only simulation is feasible. Dosimetric studies aim to determine the absorbed dose distribution in tissue, both radially and angularly. In nuclear physics it is obligatory for researchers to use computer-based methods, since radiation studies pose hazards to scientists and to people exposed to radiation. When the hazard exceeds recommended limits, or physical conditions are unsuitable (long working times, uneconomical experiments, inadequate sensitivity of materials, etc.), it is unavoidable to simulate scientific work and experiments before putting them into practice. In the medical area, the use of radiation requires computational work for cancer treatments. Some computational studies are routine in clinics, while others serve scientific development purposes. In brachytherapy studies there are significant differences between experimental measurements and theoretical (computer-based) output data. Errors in data taken from experimental studies are larger than the errors of simulation values. In the design of a new brachytherapy source it is important to consider detailed

  8. Quantum key distribution with an unknown and untrusted source

    Science.gov (United States)

    Zhao, Yi; Qi, Bing; Lo, Hoi-Kwong

    2009-03-01

    The security of a standard bi-directional "plug & play" quantum key distribution (QKD) system has been an open question for a long time. This is mainly because its source is equivalently controlled by an eavesdropper, which means the source is unknown and untrusted. Qualitative discussion on this subject has been made previously. In this paper, we present the first quantitative security analysis of a general class of QKD protocols whose sources are unknown and untrusted. The security of the standard BB84 protocol, the weak+vacuum decoy state protocol, and the one-decoy decoy state protocol, with unknown and untrusted sources, is rigorously proved. We derive rigorous lower bounds on the secure key generation rates of the above three protocols. Our numerical simulation results show that QKD with an untrusted source gives a key generation rate that is close to that with a trusted source. Our work is published in [1]. [1] Y. Zhao, B. Qi, and H.-K. Lo, Phys. Rev. A 77, 052327 (2008).

  9. Application of the source term code package to obtain a specific source term for the Laguna Verde Nuclear Power Plant

    International Nuclear Information System (INIS)

    Souto, F.J.

    1991-06-01

    The main objective of the project was to use the Source Term Code Package (STCP) to obtain a specific source term for those accident sequences deemed dominant as a result of probabilistic safety analyses (PSA) for the Laguna Verde Nuclear Power Plant (CNLV). The following programme has been carried out to meet this objective: (a) implementation of the STCP, (b) acquisition of specific data for CNLV to execute the STCP, and (c) calculation of specific source terms for accident sequences at CNLV. The STCP has been implemented and validated on CDC 170/815 and CDC 180/860 mainframes as well as on a MicroVAX 3800 system. In order to obtain a plant-specific source term, data on the CNLV, including initial core inventory, burn-up, primary containment structures, and materials used for the calculations, have been obtained. Because STCP does not explicitly model containment failure, drywell failure in the form of a catastrophic rupture has been assumed. One of the most significant sequences from the point of view of possible off-site risk is the loss of off-site power with failure of the diesel generators and simultaneous loss of the high-pressure core spray and reactor core isolation cooling systems. The probability of that event is approximately 4.5 × 10^-6. This sequence has been analysed in detail and the release fractions of radioisotope groups are given in the full report. 18 refs, 4 figs, 3 tabs

  10. The European source term code ESTER - basic ideas and tools for coupling of ATHLET and ESTER

    International Nuclear Information System (INIS)

    Schmidt, F.; Schuch, A.; Hinkelmann, M.

    1993-04-01

    The French software house CISI and IKE of the University of Stuttgart developed, during 1990 and 1991 and in the frame of the Shared Cost Action on Reactor Safety, the informatic structure of the European Source TERm Evaluation System (ESTER). Through this work, tools became available which allow both code development and code application in the area of severe core accident research to be unified on a European basis. The behaviour of reactor cores is determined by thermal-hydraulic conditions. For the development of ESTER it was therefore important to investigate how to integrate thermal-hydraulic code systems with ESTER applications. This report describes the basic ideas of ESTER and improvements of the ESTER tools in view of a possible coupling of the thermal-hydraulic code system ATHLET and ESTER. As a result of the work performed during this project, the ESTER tools became the most modern informatic tools presently available in the area of severe accident research. A sample application is given which demonstrates the use of the new tools. (orig.)

  11. GRHydro: a new open-source general-relativistic magnetohydrodynamics code for the Einstein toolkit

    International Nuclear Information System (INIS)

    Mösta, Philipp; Haas, Roland; Ott, Christian D; Reisswig, Christian; Mundim, Bruno C; Faber, Joshua A; Noble, Scott C; Bode, Tanja; Löffler, Frank; Schnetter, Erik

    2014-01-01

    We present the new general-relativistic magnetohydrodynamics (GRMHD) capabilities of the Einstein toolkit, an open-source community-driven numerical relativity and computational relativistic astrophysics code. The GRMHD extension of the toolkit builds upon previous releases and implements the evolution of relativistic magnetized fluids in the ideal MHD limit in fully dynamical spacetimes using the same shock-capturing techniques previously applied to hydrodynamical evolution. In order to maintain the divergence-free character of the magnetic field, the code implements both constrained transport and hyperbolic divergence cleaning schemes. We present test results for a number of MHD tests in Minkowski and curved spacetimes. Minkowski tests include aligned and oblique planar shocks, cylindrical explosions, magnetic rotors, Alfvén waves and advected loops, as well as a set of tests designed to study the response of the divergence cleaning scheme to numerically generated monopoles. We study the code’s performance in curved spacetimes with spherical accretion onto a black hole on a fixed background spacetime and in fully dynamical spacetimes by evolutions of a magnetized polytropic neutron star and of the collapse of a magnetized stellar core. Our results agree well with exact solutions where these are available and we demonstrate convergence. All code and input files used to generate the results are available on http://einsteintoolkit.org. This makes our work fully reproducible and provides new users with an introduction to applications of the code. (paper)
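
    A minimal illustration of the quantity at stake in the monopole tests mentioned above: the discrete divergence of B, which constrained transport keeps at machine zero and divergence cleaning damps away. The central-difference monitor below is a generic numerical sketch, not GRHydro code; the grid and test field are invented.

        # Monitor the discrete divergence of a magnetic field on a uniform
        # 3D grid with central differences (the quantity that constrained
        # transport and divergence cleaning try to keep at zero).
        import numpy as np

        def div_b(Bx, By, Bz, dx, dy, dz):
            """Central-difference divergence of B on interior cells."""
            return ((Bx[2:, 1:-1, 1:-1] - Bx[:-2, 1:-1, 1:-1]) / (2 * dx) +
                    (By[1:-1, 2:, 1:-1] - By[1:-1, :-2, 1:-1]) / (2 * dy) +
                    (Bz[1:-1, 1:-1, 2:] - Bz[1:-1, 1:-1, :-2]) / (2 * dz))

        # A divergence-free test field: B = (-y, x, 0) has div B = 0 analytically.
        n = 32
        x, y, z = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n),
                              np.linspace(-1, 1, n), indexing='ij')
        h = 2.0 / (n - 1)
        Bx, By, Bz = -y, x, np.zeros_like(x)
        print(np.abs(div_b(Bx, By, Bz, h, h, h)).max())  # ~ machine precision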

  12. Sensitivity analysis and benchmarking of the BLT low-level waste source term code

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1993-07-01

    To evaluate the source term for low-level waste disposal, a comprehensive model has been developed and incorporated into a computer code called BLT (Breach-Leach-Transport). Since the release of the original version, many new features and improvements have been added to the Leach model of the code. This report consists of two different studies based on the new version of the BLT code: (1) a series of verification/sensitivity tests; and (2) benchmarking of the BLT code using field data. Based on the results of the verification/sensitivity tests, the authors concluded that the new version represents a significant improvement and is capable of providing more realistic simulations of the leaching process. Benchmarking work was carried out to provide a reasonable level of confidence in the model predictions. In this study, the experimentally measured release curves for nitrate, technetium-99 and tritium from the saltstone lysimeters operated by Savannah River Laboratory were used. The model results are observed to be in general agreement with the experimental data, within acceptable limits of uncertainty.

  13. Fiber optic distributed temperature sensing for fire source localization

    Science.gov (United States)

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Sigrist, Markus W.; Li, Jun; Dong, Fengzhong

    2017-08-01

    A method for localizing a fire source based on a distributed temperature sensor system is proposed. Two sections of optical fiber were placed orthogonally to each other as the sensing elements. A tray of alcohol was lit to act as a fire outbreak in a cabinet with an uneven ceiling to simulate a real fire scene. Experiments were carried out to demonstrate the feasibility of the method. Rather large fluctuations and systematic errors in predicting the exact room coordinates of the fire source, caused by the uneven ceiling, were observed. Two mathematical methods (smoothing the recorded temperature curves and finding the temperature peak positions) to improve the prediction accuracy are presented, and the experimental results indicate that the fluctuation ranges and systematic errors are significantly reduced. The proposed scheme is simple and appears reliable enough to locate a fire source in large spaces.
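
    The two corrective steps reported above (smoothing the temperature traces, then locating their peaks) reduce to a few lines of array processing. The following sketch uses a moving-average filter and synthetic Gaussian hot spots on two orthogonal fibers; all positions and temperatures are invented for illustration.

        # Smooth each fiber's temperature trace, then read the fire's room
        # coordinates off the peak positions of the two orthogonal fibers.
        import numpy as np

        def locate_peak(temps, positions, window=5):
            """Moving-average smoothing followed by peak location."""
            kernel = np.ones(window) / window
            smooth = np.convolve(temps, kernel, mode='same')
            return positions[np.argmax(smooth)]

        pos = np.linspace(0.0, 10.0, 200)            # metres along each fiber
        rng = np.random.default_rng(0)
        temp_x = 20 + 30 * np.exp(-(pos - 6.2)**2) + rng.normal(0, 0.8, pos.size)
        temp_y = 20 + 30 * np.exp(-(pos - 3.7)**2) + rng.normal(0, 0.8, pos.size)

        print("estimated source:", locate_peak(temp_x, pos), locate_peak(temp_y, pos))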

  14. Supply and distribution for γ-ray sources

    International Nuclear Information System (INIS)

    Yamamoto, Takeo

    1997-01-01

    Japan Atomic energy Research Institute (JAERI) is the only facility to supply and distribute radioisotopes (RI) in Japan. The γ-ray sources for medical use are 192 Ir and 169 Yb for non-destructive examination and 192 Ir, 198 Au and 153 Gd for clinical use. All of these demands in Japan are supplied with domestic products at present. Meanwhile, γ-ray sources imported are 60 Co sources for medical and industrial uses including sterilization of medical instruments, 137 Cs for irradiation to blood and 241 Am for industrial measurements. The major overseas suppliers are Nordion International Inc. and Amersham International plc. RI products on the market are divided into two groups; one is the primary products which are supplied in liquid or solid after chemical or physical treatments of radioactive materials obtained from reactor and the other is the secondary product which is a final product after various processing. Generally these secondary products are used in practice. In Japan, both of the domestic and imported products are supplied to the users via JRIA (Japan Radioisotope Association). The association participates in the sales and the distributions of the secondary products and also in the processings of the primary ones to their sealed sources. Furthermore, stable supplying systems for these products are almost established according to the half life of each nuclide only if there is no accident in the reactor. (M.N.)

  15. Theoretical and experimental study of the electron distribution function in the plasma of an electron cyclotron resonance ion source

    International Nuclear Information System (INIS)

    Girard, A.; Perret, C.; Bourg, F.; Khodja, H.; Melin, G.; Lecot, C.

    1997-01-01

    Electron Cyclotron Resonance Ion Sources (ECRIS) are mirror machines which can deliver substantial fluxes of Highly Charged Ions (HCI). This performance is strongly correlated with the hot electron population sustained by an RF wave. This paper presents an analysis of the electron distribution function (EDF) in an ECR source. In the first part of the paper a one-dimensional Fokker-Planck code for the EDF is presented: this code includes a quasilinear diffusion operator for the RF wave, a collision term and a source term due to electron impact ionization. The present status of this code is described. In the second part of the paper experiments related to the measurement of the EDF are presented: electron density, diamagnetism and electron end-loss current have been measured at the Quadrumafios ECRIS. With these results it is possible to give a precise description of the EDF. (author)
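
    As a toy version of the kind of solver described above, the sketch below advances a one-dimensional energy-space Fokker-Planck equation with a quasilinear-like diffusion coefficient, a crude end-loss term, and a cold-electron source using explicit finite differences. All coefficients are invented, and the full collision operator of the actual code is reduced here to a simple loss time.

        # Toy 1D Fokker-Planck solver in energy space:
        #   df/dt = d/dE ( D(E) df/dE ) - f / tau_loss + S(E)
        import numpy as np

        nE, dE, dt, steps = 200, 0.5, 1e-3, 5000
        E = (np.arange(nE) + 0.5) * dE                    # energy grid
        D = 1.0 + 5.0 * np.exp(-((E - 30.0) / 10.0)**2)   # RF-like diffusion
        tau_loss = 50.0                                    # crude end-loss time
        S = 0.1 * np.exp(-E / 5.0)                         # cold-electron source

        f = np.exp(-E / 2.0)                               # initial EDF
        D_half = 0.5 * (D[1:] + D[:-1])                    # D at cell faces
        for _ in range(steps):
            flux = D_half * np.diff(f) / dE                # diffusive flux at faces
            div = np.zeros_like(f)
            div[1:-1] = np.diff(flux) / dE                 # zero-flux boundaries
            f += dt * (div - f / tau_loss + S)

        print("mean energy of the steady EDF:", (f * E).sum() / f.sum())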

  16. Production, Distribution, and Applications of Californium-252 Neutron Sources

    International Nuclear Information System (INIS)

    Balo, P.A.; Knauer, J.B.; Martin, R.C.

    1999-01-01

    The radioisotope 252Cf is routinely encapsulated into compact, portable, intense neutron sources with a 2.6-year half-life. A source the size of a person's little finger can emit up to 10^11 neutrons/s. Californium-252 is used commercially as a reliable, cost-effective neutron source for prompt gamma neutron activation analysis (PGNAA) of coal, cement, and minerals, as well as for detection and identification of explosives, land mines, and unexploded military ordnance. Other uses are neutron radiography, nuclear waste assays, reactor start-up sources, calibration standards, and cancer therapy. The inherent safety of source encapsulations is demonstrated by 30 years of experience and by U.S. Bureau of Mines tests of source survivability during explosions. The production and distribution center for the U.S. Department of Energy (DOE) Californium Program is the Radiochemical Engineering Development Center (REDC) at Oak Ridge National Laboratory (ORNL). DOE sells 252Cf to commercial reencapsulators domestically and internationally. Sealed 252Cf sources are also available for loan to agencies and subcontractors of the U.S. government and to universities for educational, research, and medical applications. The REDC has established the Californium User Facility (CUF) for Neutron Science to make its large inventory of 252Cf sources available to researchers for irradiations inside uncontaminated hot cells. Experiments at the CUF include a land mine detection system, neutron damage testing of solid-state detectors, irradiation of human cancer cells for boron neutron capture therapy experiments, and irradiation of rice to induce genetic mutations.

  17. Distribution and Source Identification of Pb Contamination in industrial soil

    Science.gov (United States)

    Ko, M. S.

    2017-12-01

    INTRODUCTION: Lead (Pb) is a toxic element that induces neurotoxic effects in humans because of the competition between Pb and Ca in the nervous system. Lead is classified as a chalcophile element, and galena (PbS) is its major mineral. Although Pb is not an abundant element in nature, various anthropogenic sources have enhanced Pb enrichment in the environment since the Industrial Revolution. Representative anthropogenic sources are batteries, paint, mining, smelting, and the combustion of fossil fuels. Isotope analysis is widely used to identify Pb contamination sources. Lead has four stable isotopes in nature: 208Pb, 207Pb, 206Pb, and 204Pb. The isotope ratios are maintained during physical and chemical fractionation. Therefore, variations in Pb isotope abundances and relative ratios can point to a specific Pb contamination source. In this study, the distributions and isotope ratios of Pb in industrial soil were used to identify the Pb contamination source and dispersion pathways. MATERIALS AND METHODS: Soil samples were collected at depths of 0-6 m in an industrial area in Korea. The collected soil samples were dried and sieved under 2 mm. Soil pH measurement, aqua-regia digestion and TCLP were carried out on the sieved soil samples. Isotope analysis was carried out to determine the abundance of each Pb isotope. RESULTS AND DISCUSSION: The study area is land developed for the promotion of industrial facilities. The area was forest in 1980, and satellite images show the alteration of land use over time. The variations in land use imply the possibility that contaminated soil was brought in from outside. The Pb concentrations in the core samples were higher in the deeper soil than in the topsoil. In particular, the 4 m soil sample showed the highest Pb concentration, approximately 1500 mg/kg. This result indicates that a certain Pb source exists at 4 m depth. CONCLUSIONS: This study investigated the distribution and source identification of Pb in industrial soil. The land use and Pb

  18. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    Science.gov (United States)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.
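
    The mixed-integer character of the problem (continuous transmit powers, discrete source and channel coding rates) is what motivates a population method such as Particle Swarm Optimization. The bare-bones sketch below optimizes one node's power and channel-coding rate against a made-up distortion model; it only illustrates the mechanics of PSO over a mixed space, not the paper's cross-layer objective or network model.

        # Bare-bones PSO over one continuous variable (power) and one discrete
        # variable (channel-coding rate index, handled by flooring).
        import numpy as np

        rng = np.random.default_rng(1)
        rates = np.array([0.25, 0.5, 0.75])          # allowed channel-coding rates

        def distortion(power, rate_idx):
            r = rates[rate_idx]
            # Made-up model: more power and stronger coding reduce errors,
            # but a lower channel-coding rate leaves fewer bits for the source.
            return 1.0 / (r * np.log1p(power)) + 0.1 * power

        n, iters = 30, 200
        pos = rng.uniform([0.1, 0], [5.0, len(rates) - 1e-9], size=(n, 2))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.full(n, np.inf)
        gbest, gbest_val = None, np.inf

        for _ in range(iters):
            vals = np.array([distortion(p, int(k)) for p, k in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            if pbest_val.min() < gbest_val:
                gbest_val, gbest = pbest_val.min(), pbest[pbest_val.argmin()].copy()
            r1, r2 = rng.random((2, n, 2))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, [0.1, 0], [5.0, len(rates) - 1e-9])

        print("best power %.2f, rate %.2f, distortion %.3f"
              % (gbest[0], rates[int(gbest[1])], gbest_val))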

  19. CMP reflection imaging via interferometry of distributed subsurface sources

    Science.gov (United States)

    Kim, D.; Brown, L. D.; Quiros, D. A.

    2015-12-01

    The theoretical foundations of recovering body wave energy via seismic interferometry are well established. However, in practice such recovery remains problematic. Here, synthetic seismograms computed for subsurface sources are used to evaluate the geometrical combinations of realistic ambient source and receiver distributions that result in useful recovery of virtual body waves. This study illustrates how surface receiver arrays that span a limited distribution of sources can be processed to produce virtual shot gathers, and in turn CMP gathers, which can be effectively stacked with traditional normal moveout corrections. To verify the feasibility of the approach in practice, seismic recordings of 50 aftershocks following the magnitude 5.8 Virginia earthquake of August 2011 have been processed using seismic interferometry to produce seismic reflection images of the crustal structure above and beneath the aftershock cluster. Although monotonic noise proved to be problematic by significantly reducing the number of usable recordings, the edited dataset resulted in stacked seismic sections characterized by coherent reflections that resemble those seen on a nearby conventional reflection survey. In particular, "virtual" reflections at travel times of 3 to 4 seconds suggest reflectors at approximately 7 to 12 km depth that would seem to correspond to imbricate thrust structures formed during the Appalachian orogeny. The approach described here represents a promising new means of body wave imaging of 3D structure that can be applied to a wide array of geologic and energy problems. Unlike other imaging techniques using natural sources, this technique does not require precise source locations or times. It can thus exploit aftershocks too small for conventional analyses. This method can be applied to any type of microseismic cloud, whether tectonic, volcanic or man-made.
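
    The core interferometric step is a cross-correlation: correlating two receivers' recordings of the same buried event cancels the unknown source excitation time and leaves the inter-receiver traveltime, effectively turning one receiver into a virtual source for the other. A minimal synthetic sketch with impulse arrivals standing in for aftershock waveforms:

        # Cross-correlate two receivers' recordings of one subsurface event;
        # the positive-lag peak gives the traveltime difference (t_b - t_a),
        # independent of the unknown source origin time.
        import numpy as np

        fs, n = 500.0, 2048                          # sample rate (Hz), trace length
        t_a, t_b = 0.8, 1.3                          # arrival times at receivers A, B
        trace_a = np.zeros(n); trace_a[int(t_a * fs)] = 1.0
        trace_b = np.zeros(n); trace_b[int(t_b * fs)] = 1.0

        xcorr = np.correlate(trace_b, trace_a, mode='full')
        lag = (np.argmax(xcorr) - (n - 1)) / fs
        print("virtual traveltime A->B: %.3f s" % lag)   # ~0.5 s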

  20. Study on the properties of infrared wavefront coding athermal system under several typical temperature gradient distributions

    Science.gov (United States)

    Cai, Huai-yu; Dong, Xiao-tong; Zhu, Meng; Huang, Zhan-hua

    2018-01-01

    Wavefront coding as an athermalization technique can effectively ensure stable imaging of an optical system over a large temperature range, with the added advantages of compact structure and low cost. Simulating properties such as the PSF and MTF of a wavefront coding athermal system under several typical temperature gradient distributions helps characterize its working state in non-ideal temperature environments and supports meeting the system design targets. In this paper, we utilize the interoperability of data between SolidWorks and ZEMAX to simplify the traditional structural/thermal/optical integrated analysis process. We design and build the optical model and the corresponding mechanical model of an infrared imaging wavefront coding athermal system. Axial and radial temperature gradients of different magnitudes are applied to the whole system in SolidWorks, yielding the changes in curvature, refractive index and lens spacing. We then import the deformed model into ZEMAX for ray tracing and obtain the changes in the PSF and MTF of the optical system. Finally, we discuss and evaluate the consistency of the PSF (MTF) of the wavefront coding athermal system and the image restorability, which provides a basis and reference for the optimal design of such systems. The results show that the adaptability of a single-material infrared wavefront coding athermal system to an axial temperature gradient can reach an upper limit of temperature fluctuation of 60°C, much higher than that for a radial temperature gradient.
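
    The figure of merit tracked in such an analysis is the MTF, which can be obtained numerically as the normalized modulus of the Fourier transform of the PSF. The sketch below does this for a Gaussian stand-in PSF; in the workflow described above, the PSF would instead come from ray tracing the thermally deformed model in ZEMAX, and all grid values here are assumptions.

        # MTF as the normalized |FFT| of the PSF, on a square pixel grid.
        import numpy as np

        n, dx = 256, 2.0e-3                          # grid size, pixel pitch (mm)
        x = (np.arange(n) - n // 2) * dx
        xx, yy = np.meshgrid(x, x)
        psf = np.exp(-(xx**2 + yy**2) / (2 * (8e-3)**2))   # stand-in PSF

        otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
        mtf = np.abs(otf) / np.abs(otf).max()
        freq = np.fft.fftshift(np.fft.fftfreq(n, d=dx))     # cycles/mm

        # 1D cut through the MTF, e.g. to compare nominal vs. perturbed systems.
        print(freq[n // 2:n // 2 + 5], mtf[n // 2, n // 2:n // 2 + 5])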

  1. mGrid: a load-balanced distributed computing environment for the remote execution of the user-defined Matlab code.

    Science.gov (United States)

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-03-15

    Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over

  2. mGrid: A load-balanced distributed computing environment for the remote execution of the user-defined Matlab code

    Directory of Open Access Journals (Sweden)

    Almeida Jonas S

    2006-03-01

    Full Text Available Abstract Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web

  3. Chronos sickness: digital reality in Duncan Jones’s Source Code

    Directory of Open Access Journals (Sweden)

    Marcia Tiemy Morita Kawamoto

    2017-01-01

    Full Text Available http://dx.doi.org/10.5007/2175-8026.2017v70n1p249 The advent of digital technologies has unquestionably affected the cinema. The indexical relation and realistic effect with the photographed world, much praised by André Bazin and Roland Barthes, is just one of the affected aspects. This article discusses cinema in light of the new digital possibilities, reflecting on Steven Shaviro’s consideration of “how a nonindexical realism might be possible” (63) and how in fact a new kind of reality, a digital one, might emerge in the science fiction film Source Code (2011) by Duncan Jones.

  4. Reliability of Calderbank-Shor-Steane codes and security of quantum key distribution

    International Nuclear Information System (INIS)

    Hamada, Mitsuru

    2004-01-01

    After Mayers (1996 Advances in Cryptography: Proc. Crypto'96 pp 343-57; 2001 J. Assoc. Comput. Mach. 48 351-406) gave a proof of the security of the Bennett-Brassard (1984 Proc. IEEE Int. Conf. on Computers, Systems and Signal Processing (Bangalore, India) pp 175-9) (BB84) quantum key distribution protocol, Shor and Preskill (2000 Phys. Rev. Lett. 85 441-4) made the remarkable observation that a Calderbank-Shor-Steane (CSS) code had been implicitly used in the BB84 protocol, and suggested its security could be proved by bounding the fidelity, say F_n, of the incorporated CSS code of length n in the form 1 - F_n ≤ exp[-nE + o(n)] for some positive number E. This work presents such a number E = E(R) as a function of the code rate R, and a threshold R_0 such that E(R) > 0 whenever R < R_0, where R_0 is larger than the achievable rate based on the Gilbert-Varshamov bound that is essentially given by Shor and Preskill. The codes in the present work are robust against fluctuations of channel parameters, a fact that is needed to establish the security rigorously and had not previously been proved in the literature for rates above the Gilbert-Varshamov rate. As a byproduct, the security of a modified BB84 protocol against any joint (coherent) attacks is proved quantitatively.

  5. A UML profile for code generation of component based distributed systems

    International Nuclear Information System (INIS)

    Chiozzi, G.; Karban, R.; Andolfato, L.; Tejeda, A.

    2012-01-01

    A consistent and unambiguous implementation of code generation (model-to-text transformation) from UML must rely on a well-defined UML (Unified Modelling Language) profile, customizing UML for a particular application domain. Such a profile must have a solid foundation in a formally correct ontology, formalizing the concepts and their relations in the specific domain, in order to avoid a maze of wildly created stereotypes. The paper describes a generic profile for the code generation of component-based distributed systems for control applications, the process used to distill the ontology and define the profile, and the strategy followed to implement the code generator. The main steps, which take place iteratively, include: defining the terms and relations with an ontology, mapping the ontology to the appropriate UML meta-classes, testing the profile by creating modelling examples, and generating the code. This has allowed us to work on the modelling of the E-ELT (European Extremely Large Telescope) control system and instrumentation without knowing what infrastructure will finally be used.

  6. Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.

    Science.gov (United States)

    Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio

    2015-01-27

    Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the inter-play of the different theories.

  7. Surveying Multidisciplinary Aspects in Real-Time Distributed Coding for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Carlo Braccini

    2015-01-01

    Full Text Available Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, “real-time” coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors’ own research, and some highlights on the inter-play of the different theories.

  8. Use of computer code for dose distribution studies in A 60CO industrial irradiator

    Science.gov (United States)

    Piña-Villalpando, G.; Sloan, D. P.

    1995-09-01

    This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a 60Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with an overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry using red acrylic pellets. The typical product was packages of Petri dishes, with an apparent density of 0.13 g/cm^3; that product was chosen because of its uniform size, large quantity and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point-kernel technique; build-up factor fitting is done by geometric progression, and combinatorial geometry is used for the system description. The main modifications to the code were related to the source simulation: point sources were used instead of pencil sources, and an energy spectrum and anisotropic emission were included. For the maximum dose, the calculated value (18.2 kGy) was 8% higher than the experimental average value (16.8 kGy); for the minimum dose, the calculated value (13.8 kGy) was about 3% lower than the experimental average value (14.3 kGy).
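
    The point-kernel technique named above sums, over discrete source points, an attenuated inverse-square kernel multiplied by a build-up factor. The sketch below is a generic illustration of that sum, not QAD-CGGP itself: the attenuation coefficient and the two-exponential (Taylor-form) build-up fit are placeholder assumptions.

        # Minimal point-kernel flux estimate:
        #   each source contributes S * B(mu*r) * exp(-mu*r) / (4*pi*r^2).
        import math

        MU = 0.0404   # assumed linear attenuation coefficient (1/cm); placeholder

        def buildup(mur):
            """Toy two-exponential (Taylor-form) build-up factor, illustrative only."""
            return 1.1 * math.exp(0.05 * mur) - 0.1 * math.exp(-0.1 * mur)

        def flux_at(point, sources):
            """Sum point-kernel contributions of (x, y, z, strength) sources."""
            total = 0.0
            for sx, sy, sz, s in sources:
                r = math.dist(point, (sx, sy, sz))
                mur = MU * r
                total += s * buildup(mur) * math.exp(-mur) / (4 * math.pi * r * r)
            return total

        # A short line source approximated by point sources (the modification
        # described above, replacing 'pencil' sources with points):
        sources = [(0.0, 0.0, z, 1.0e10) for z in (-10, -5, 0, 5, 10)]
        print("flux at 1 m: %.3e" % flux_at((100.0, 0.0, 0.0), sources))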

  9. Dynamic Allocation and Efficient Distribution of Data Among Multiple Clouds Using Network Coding

    DEFF Research Database (Denmark)

    Sipos, Marton A.; Fitzek, Frank; Roetter, Daniel Enrique Lucani

    2014-01-01

    Distributed storage has attracted large interest lately from both industry and researchers as a flexible, cost-efficient, high-performance, and potentially secure solution for geographically distributed data centers, edge caching or sharing storage among users. This paper studies the benefits of random linear network coding for exploiting multiple commercially available cloud storage providers simultaneously, with the possibility to constantly adapt to changing cloud performance in order to optimize data retrieval times. The main contribution of this paper is a new data distribution mechanism that cleverly stores and moves data among different clouds in order to optimize performance. Furthermore, we investigate the trade-offs among storage space, reliability and data retrieval speed for our proposed scheme. By means of real-world implementation and measurements using well-known and publicly
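
    The storage scheme rests on random linear network coding: source blocks are combined with random coefficients so that any sufficiently large independent subset of coded blocks, retrieved from whichever clouds respond fastest, reconstructs the data. A toy version over the prime field GF(257) (real systems typically work over GF(2^8); a prime field keeps the sketch short):

        # Random linear network coding: encode k source blocks into coded
        # blocks; decode any k independent ones by Gauss-Jordan elimination.
        import numpy as np

        P, K, BLOCK = 257, 4, 8
        rng = np.random.default_rng(2)
        source = rng.integers(0, P, size=(K, BLOCK))          # k source blocks

        def encode(n):
            """Mix the source blocks with random coefficients over GF(P)."""
            coeffs = rng.integers(0, P, size=(n, K))
            return coeffs, coeffs @ source % P

        def decode(coeffs, payloads):
            """Recover the source by Gauss-Jordan elimination mod P."""
            a = np.concatenate([coeffs, payloads], axis=1) % P
            row = 0
            for col in range(K):
                piv = next(r for r in range(row, len(a)) if a[r, col] != 0)
                a[[row, piv]] = a[[piv, row]]
                a[row] = a[row] * pow(int(a[row, col]), -1, P) % P
                for r in range(len(a)):
                    if r != row:
                        a[r] = (a[r] - a[r, col] * a[row]) % P
                row += 1
            return a[:K, K:]

        coeffs, coded = encode(K + 2)                         # two redundant blocks
        assert np.array_equal(decode(coeffs, coded), source)
        print("recovered all", K, "source blocks from", K + 2, "coded blocks")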

  10. Domain-Specific Acceleration and Auto-Parallelization of Legacy Scientific Code in FORTRAN 77 using Source-to-Source Compilation

    OpenAIRE

    Vanderbauwhede, Wim; Davidson, Gavin

    2017-01-01

    Massively parallel accelerators such as GPGPUs, manycores and FPGAs represent a powerful and affordable tool for scientists who look to speed up simulations of complex systems. However, porting code to such devices requires a detailed understanding of heterogeneous programming tools and effective strategies for parallelization. In this paper we present a source to source compilation approach with whole-program analysis to automatically transform single-threaded FORTRAN 77 legacy code into Ope...

  11. The European source-term evaluation code ASTEC: status and applications, including CANDU plant applications

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Giordano, P.; Kissane, M.P.; Montanelli, T.; Schwinges, B.; Ganju, S.; Dickson, L.

    2004-01-01

    Research on light-water reactor severe accidents (SA) is still required in a limited number of areas in order to confirm accident-management plans. Thus, 49 European organizations have linked their SA research in a durable way through SARNET (Severe Accident Research and management NETwork), part of the European 6th Framework Programme. One goal of SARNET is to consolidate the integral code ASTEC (Accident Source Term Evaluation Code, developed by IRSN and GRS) as the European reference tool for safety studies; SARNET efforts include extending the application scope to reactor types other than PWR (including VVER) such as BWR and CANDU. ASTEC is used in IRSN's Probabilistic Safety Analysis level 2 of 900 MWe French PWRs. An earlier version of ASTEC's SOPHAEROS module, including improvements by AECL, is being validated as the Canadian Industry Standard Toolset code for FP-transport analysis in the CANDU Heat Transport System. Work with ASTEC has also been performed by Bhabha Atomic Research Centre, Mumbai, on IPHWR containment thermal hydraulics. (author)

  12. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics

  13. Coded aperture detector for high precision gamma-ray burst source locations

    International Nuclear Information System (INIS)

    Helmken, H.; Gorenstein, P.

    1977-01-01

    Coded aperture collimators in conjunction with position-sensitive detectors are very useful in the study of transient phenomena because they combine a broad field of view, high sensitivity, and an ability to locate sources precisely. Since the preceding conference, a series of computer simulations of various detector designs has been carried out with the aid of a CDC 6400. Particular emphasis was placed on the development of a unit consisting of a one-dimensional random or periodic collimator in conjunction with a two-dimensional position-sensitive xenon proportional counter. A configuration involving four of these units has been incorporated into the preliminary design study of the Transient Explorer (ATREX) satellite and is applicable to any SAS- or HEAO-type satellite mission. Results of this study, including detector response, fields of view, and source location precision, will be presented.

  14. Extension of ANISN and DOT 3.5 transport computer codes to calculate heat generation by radiation and temperature distribution in nuclear reactors

    International Nuclear Information System (INIS)

    Torres, L.M.R.; Gomes, I.C.; Maiorino, J.R.

    1986-01-01

    The ANISN and DOT 3.5 codes solve the transport equation using the discrete ordinates method, in one and two dimensions, respectively. The objective of the study was to modify these two codes, frequently used in reactor shielding problems, to include nuclear heating calculations due to the interaction of neutrons and gamma-rays with matter. In order to determine the temperature distribution, a numerical algorithm was developed using the finite difference method to solve the heat conduction equation, in one and two dimensions, considering the nuclear heating from neutrons and gamma-rays as the source term. (Author) [pt]
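
    The added capability amounts to solving the heat conduction equation with the computed nuclear heating as the source term. A minimal one-dimensional steady-state analogue, k*T'' + q = 0 with fixed wall temperatures and an exponentially decaying heating profile (all values placeholders), solved by finite differences:

        # 1D steady heat conduction with a radiation-heating source term:
        #   k * T'' + q(x) = 0,  T(0) and T(L) fixed.
        import numpy as np

        n, L, k = 51, 0.1, 40.0                      # nodes, thickness (m), W/m-K
        x = np.linspace(0.0, L, n)
        q = 1.0e6 * np.exp(-x / 0.03)                # nuclear heating (W/m^3)

        A = np.zeros((n, n)); b = np.zeros(n)
        A[0, 0] = A[-1, -1] = 1.0
        b[0], b[-1] = 600.0, 400.0                   # boundary temperatures (K)
        dx = x[1] - x[0]
        for i in range(1, n - 1):
            A[i, i-1] = A[i, i+1] = 1.0              # (T[i-1]-2T[i]+T[i+1])/dx^2 = -q/k
            A[i, i] = -2.0
            b[i] = -q[i] * dx * dx / k

        T = np.linalg.solve(A, b)
        print("peak temperature: %.1f K at x = %.3f m" % (T.max(), x[T.argmax()]))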

  15. Novel UEP LT Coding Scheme with Feedback Based on Different Degree Distributions

    Directory of Open Access Journals (Sweden)

    Li Ya-Fang

    2016-01-01

    Full Text Available Traditional unequal error protection (UEP) schemes have some limitations and problems, such as poor UEP performance for high-priority data and a serious sacrifice of low-priority data in decoding performance. Based on reasonable applications of different degree distributions in LT codes, this paper puts forward a novel UEP LT coding scheme with a simple feedback that encodes the data packets of each priority class separately. Simulation results show that the proposed scheme can effectively protect high-priority data and improve the transmission efficiency of low-priority data from 2.9% to 22.3%. Furthermore, this novel scheme is fairly suitable for multicast and broadcast environments, since only a simple feedback is introduced.
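
    LT-code UEP schemes of this kind start from a degree distribution such as the robust soliton and then reweight or partition it between priority classes. The sketch below builds and samples the standard robust soliton distribution for k input symbols; the UEP modification and the feedback mechanism are not reproduced.

        # Build and sample the robust soliton degree distribution of LT codes.
        import math, random

        def robust_soliton(k, c=0.05, delta=0.05):
            r = c * math.log(k / delta) * math.sqrt(k)
            rho = [0.0] + [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
            tau = [0.0] * (k + 1)
            pivot = int(round(k / r))
            for d in range(1, pivot):
                tau[d] = r / (d * k)
            tau[pivot] = r * math.log(r / delta) / k
            z = sum(rho) + sum(tau)                  # normalization constant
            return [(rho[d] + tau[d]) / z for d in range(k + 1)]

        k = 1000
        dist = robust_soliton(k)
        degrees = random.choices(range(k + 1), weights=dist, k=10)
        print("sampled encoding degrees:", degrees)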

  16. PRIMUS: a computer code for the preparation of radionuclide ingrowth matrices from user-specified sources

    International Nuclear Information System (INIS)

    Hermann, O.W.; Baes, C.F. III; Miller, C.W.; Begovich, C.L.; Sjoreen, A.L.

    1984-10-01

    The computer program PRIMUS reads a library of radionuclide branching fractions and half-lives and constructs a decay-chain data library and a problem-specific decay-chain data file. PRIMUS reads the decay data compiled for 496 nuclides from the Evaluated Nuclear Structure Data File (ENSDF). The ease of adding radionuclides to the input library allows the CRRIS system to further expand its comprehensive data base. The decay-chain library produced is input to the ANEMOS code. PRIMUS also produces a data set reduced to only the decay chains required in a particular problem, for input to the SUMIT, TERRA, MLSOIL, and ANDROS codes. Air concentrations and deposition rates are computed using the PRIMUS decay-chain data file. Source term data may be entered directly into PRIMUS to be read by MLSOIL, TERRA, and ANDROS. The decay-chain data prepared by PRIMUS are needed for a matrix-operator method that computes time-dependent decay products either from an initial concentration or from a constant input source. This document describes the input requirements and the output obtained. Sections are also included on methods, applications, subroutines, and sample cases. A short appendix indicates a method of utilizing PRIMUS and the associated decay subroutines from TERRA or ANDROS for applications to other decay problems. 18 references
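
    The matrix-operator idea can be illustrated with a three-member chain A -> B -> C: writing the Bateman equations as dN/dt = M*N, the inventory at time t is expm(M*t) @ N0, which handles both the decay of an initial inventory and the in-growth of daughters. The half-lives below are arbitrary illustration values, not data from the PRIMUS library.

        # Decay-chain in-growth via the matrix exponential of the rate matrix.
        import numpy as np
        from scipy.linalg import expm

        half_lives = np.array([5.0, 2.0, np.inf])    # days; C is stable
        lam = np.log(2) / half_lives

        # dN/dt = M N : diagonal decay, sub-diagonal in-growth from the parent.
        M = np.diag(-lam) + np.diag(lam[:-1], k=-1)

        N0 = np.array([1.0e6, 0.0, 0.0])             # start with pure parent
        for t in (1.0, 5.0, 20.0):
            print(t, expm(M * t) @ N0)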

  17. RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations

    Science.gov (United States)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy

    RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node and offer improved performance and scalability, enhanced accuracy, and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.

  18. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    Science.gov (United States)

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
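
    The following sketch conveys the underlying idea of trading a little accuracy for large savings when applying a dense, slowly varying operator, using a truncated SVD as the lossy "code" for the matrix. The authors' matrix source coding uses a different sparsifying-transform factorization, so this is an analogy on invented data, not their algorithm.

        # Lossy compression of a dense space-varying blur matrix: a truncated
        # SVD cuts both storage and the cost of the matrix-vector product.
        import numpy as np

        n, rank = 1000, 12
        i, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
        A = np.exp(-((i - j) / (40.0 + 0.02 * j))**2)   # blur width varies with j

        u, s, vt = np.linalg.svd(A)
        Uk, Sk, Vk = u[:, :rank], s[:rank], vt[:rank]    # the lossy 'code' for A

        x = np.random.default_rng(3).random(n)
        y_exact = A @ x                                  # O(n^2) dense apply
        y_fast = Uk @ (Sk * (Vk @ x))                    # O(n * rank) apply
        print("relative error: %.2e" % (np.linalg.norm(y_fast - y_exact)
                                        / np.linalg.norm(y_exact)))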

  19. Large-eddy simulation of convective boundary layer generated by highly heated source with open source code, OpenFOAM

    International Nuclear Information System (INIS)

    Hattori, Yasuo; Suto, Hitoshi; Eguchi, Yuzuru; Sano, Tadashi; Shirai, Koji; Ishihara, Shuji

    2011-01-01

    Spatial and temporal characteristics of turbulence structures in the close vicinity of a heat source, a horizontal upward-facing round plate heated to high temperature, are examined by using well-resolved large-eddy simulations. Verification is carried out through comparison with experiments: the predicted statistics, including the PDF of temperature fluctuations, agree well with measurements, indicating that the present simulations are capable of appropriately reproducing the turbulence structures near the heat source. The reproduced three-dimensional thermal and fluid fields in the close vicinity of the heat source reveal the development of coherent structures along the surface: stationary, streaky flow patterns appear near the edge, and such patterns randomly shift to cell-like patterns with incursion into the center region, resulting in thermal-plume meandering. Both patterns have very thin structures, but the depth of the streaky structures is considerably smaller than that of the cell-like patterns; this discrepancy causes the layered structures. These structures are the source of peculiar turbulence characteristics, the prediction of which is quite difficult with RANS-type turbulence models. The understanding of such structures obtained in the present study should be helpful for improving the turbulence models used in nuclear engineering. (author)

  20. DUSTMS-D: DISPOSAL UNIT SOURCE TERM - MULTIPLE SPECIES - DISTRIBUTED FAILURE DATA INPUT GUIDE.

    Energy Technology Data Exchange (ETDEWEB)

    SULLIVAN, T.M.

    2006-01-01

    Performance assessment of a low-level waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). Many of these physical processes are influenced by the design of the disposal facility (e.g., how the engineered barriers control infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This has been done and the resulting models have been incorporated into the computer code DUST-MS (Disposal Unit Source Term-Multiple Species). The DUST-MS computer code is designed to model water flow, container degradation, release of contaminants from the wasteform to the contacting solution, and transport through the subsurface media. Water flow through the facility over time is modeled using tabular input. Container degradation models include three types of failure rates: (a) instantaneous (all containers in a control volume fail at once), (b) uniformly distributed failures (containers fail at a linear rate between a specified starting and ending time), and (c) Gaussian failure rates (containers fail at a rate determined by a mean failure time, standard deviation and Gaussian distribution). Wasteform release models include four release mechanisms: (a) rinse with partitioning (inventory is released instantly upon container failure subject to equilibrium partitioning (sorption) with
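
    The three container-failure laws listed above translate directly into cumulative failure fractions versus time. A sketch with placeholder parameters (times in years, say):

        # Cumulative fraction of failed containers under the three failure laws.
        import math

        def failed_fraction(t, model, **p):
            if model == "instantaneous":
                return 1.0 if t >= p["t_fail"] else 0.0
            if model == "uniform":                    # linear between t0 and t1
                t0, t1 = p["t_start"], p["t_end"]
                return min(max((t - t0) / (t1 - t0), 0.0), 1.0)
            if model == "gaussian":                   # normal CDF about the mean
                z = (t - p["mean"]) / (p["std"] * math.sqrt(2.0))
                return 0.5 * (1.0 + math.erf(z))
            raise ValueError(model)

        for t in (0, 50, 100, 150, 200):
            print(t,
                  failed_fraction(t, "instantaneous", t_fail=100),
                  round(failed_fraction(t, "uniform", t_start=50, t_end=150), 3),
                  round(failed_fraction(t, "gaussian", mean=100, std=25), 3))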

  1. Steady state thermal hydraulic analysis of a boiling water reactor core, for various power distributions, using computer code THABNA

    International Nuclear Information System (INIS)

    Venkat Raj, V.; Saha, D.

    1976-01-01

    The core of a boiling water reactor may see different power distributions during its operational life. How some typical power distributions affect thermal-hydraulic parameters such as pressure drop, minimum critical heat flux ratio, void distribution, etc., has been studied using the computer code THABNA. The effect of an increase in the leakage flow has also been analysed. (author)

  2. Simulated and measured neutron/gamma light output distribution for poly-energetic neutron/gamma sources

    Science.gov (United States)

    Hosseini, S. A.; Zangian, M.; Aghabozorgi, S.

    2018-03-01

    In the present paper, the light output distribution due to a poly-energetic neutron/gamma (neutron or gamma) source was calculated using the developed MCNPX-ESUT-PE (MCNPX-Energy engineering of Sharif University of Technology-Poly Energetic version) computational code. The simulation of the light output distribution includes modeling of the particle transport, calculation of the scintillation photons induced by charged particles, simulation of the scintillation photon transport, and consideration of the light resolution obtained from experiment. The developed computational code is able to simulate the light output distribution due to any neutron/gamma source. In the experimental step of the present study, neutron-gamma discrimination based on the light output distribution was performed using the zero crossing method. As a case study, a 241Am-9Be source was considered, and the simulated and measured neutron/gamma light output distributions were compared. There is an acceptable agreement between the discriminated neutron/gamma light output distributions obtained from the simulation and the experiment.

  3. Code of practice for the control and safe handling of radioactive sources used for therapeutic purposes (1988)

    International Nuclear Information System (INIS)

    1988-01-01

    This Code is intended as a guide to safe practices in the use of sealed and unsealed radioactive sources and in the management of patients being treated with them. It covers the procedures for the handling, preparation and use of radioactive sources, precautions to be taken for patients undergoing treatment, storage and transport of radioactive sources within a hospital or clinic, and routine testing of sealed sources [fr

  4. A Source Term Calculation for the APR1400 NSSS Auxiliary System Components Using the Modified SHIELD Code

    International Nuclear Information System (INIS)

    Park, Hong Sik; Kim, Min; Park, Seong Chan; Seo, Jong Tae; Kim, Eun Kee

    2005-01-01

    The SHIELD code has been used to calculate the source terms of the NSSS Auxiliary System (comprising the CVCS, SIS, and SCS) components of the OPR1000. Because the code was developed based upon the SYSTEM80 design, and the APR1400 NSSS Auxiliary System design is changed considerably from that of SYSTEM80 or OPR1000, the SHIELD code cannot be used directly for the APR1400 radiation design; hand calculation is needed for the changed portions of the design, using the results of the SHIELD code calculation. In this study, the SHIELD code is modified to incorporate the APR1400 design changes, and the source term calculation is performed for the APR1400 NSSS Auxiliary System components.

  5. Coordination analysis of players' distribution in football using cross-correlation and vector coding techniques.

    Science.gov (United States)

    Moura, Felipe Arruda; van Emmerik, Richard E A; Santana, Juliana Exel; Martins, Luiz Eduardo Barreto; Barros, Ricardo Machado Leite de; Cunha, Sergio Augusto

    2016-12-01

    The purpose of this study was to investigate the coordination between the spreads of opposing teams during football matches, using cross-correlation and vector coding techniques. Using a video-based tracking system, we obtained the trajectories of 257 players during 10 matches. Each team's spread was calculated as a function of time. For a general description of coordination, we calculated the cross-correlation between the signals. Vector coding was used to identify the coordination patterns between teams during offensive sequences that ended in shots on goal or defensive tackles. Cross-correlation showed that opposing teams tend to present in-phase coordination, with a short time lag. During offensive sequences, vector coding results showed that, although in-phase coordination dominated, other patterns were observed. We verified that, during the early stages, offensive sequences ending in shots on goal present greater anti-phase and attacking-team-phase periods, compared to sequences ending in tackles. The results suggest that the attacking team may seek to behave contrary to its opponent (or may lead the adversary's behaviour) at the beginning of the attacking play, with regard to the distribution strategy, to increase the chances of a shot on goal. The techniques allowed detection of the coordination patterns between teams, providing additional information about football dynamics and players' interaction.
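
    Vector coding quantifies the frame-to-frame coordination between the two teams' spread signals through a coupling angle, gamma = atan2(delta_spread_B, delta_spread_A), whose quadrant is binned into in-phase, anti-phase, or single-team phases. A sketch on synthetic spread signals, assuming the conventional 45-degree bin boundaries (the paper's exact binning may differ):

        # Vector coding: coupling angle between two teams' spread time series.
        import numpy as np

        t = np.linspace(0, 10, 500)
        spread_a = 30 + 5 * np.sin(0.8 * t)              # synthetic spreads (m)
        spread_b = 28 + 5 * np.sin(0.8 * t - 0.3)

        gamma = np.degrees(np.arctan2(np.diff(spread_b), np.diff(spread_a))) % 360

        def classify(angle):
            # Conventional bins: in-phase around 45/225, anti-phase around 135/315.
            if 22.5 <= angle < 67.5 or 202.5 <= angle < 247.5:
                return "in-phase"
            if 112.5 <= angle < 157.5 or 292.5 <= angle < 337.5:
                return "anti-phase"
            return "one-team phase"

        labels, counts = np.unique([classify(g) for g in gamma], return_counts=True)
        print(dict(zip(labels, counts)))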

  6. Developing open-source codes for electromagnetic geophysics using industry support

    Science.gov (United States)

    Key, K.

    2017-12-01

    Funding for open-source software development in academia often takes the form of grants and fellowships awarded by government bodies and foundations where there is no conflict-of-interest between the funding entity and the free dissemination of the open-source software products. Conversely, funding for open-source projects in the geophysics industry presents challenges to conventional business models where proprietary licensing offers value that is not present in open-source software. Such proprietary constraints make it easier to convince companies to fund academic software development under exclusive software distribution agreements. A major challenge for obtaining commercial funding for open-source projects is to offer a value proposition that overcomes the criticism that such funding is a give-away to the competition. This work draws upon a decade of experience developing open-source electromagnetic geophysics software for the oil, gas and minerals exploration industry, and examines various approaches that have been effective for sustaining industry sponsorship.

  7. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

    Full Text Available Even though there are various source code plagiarism detection approaches, only a few works focus on low-level representation for deducing similarity; most are focused only on the lexical token sequence extracted from source code. In our view, a low-level representation is more beneficial than lexical tokens, since its form is more compact than the source code itself: it considers only semantic-preserving instructions and ignores many source-code delimiter tokens. This paper proposes a source code plagiarism detection approach which relies on low-level representation. As a case study, we focus our work on .NET programming languages with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient than the standard lexical-token approach.
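
    Adaptive Local Alignment builds on classic Smith-Waterman local alignment applied to low-level token streams. The sketch below implements the plain (non-adaptive) algorithm on two CIL-like opcode sequences; the adaptive weighting of Lim et al. is not reproduced, and the opcode streams are invented examples.

        # Plain Smith-Waterman local alignment over two opcode streams.
        def local_alignment(a, b, match=2, mismatch=-1, gap=-1):
            h = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
            best = 0
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    diag = h[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                    h[i][j] = max(0, diag, h[i-1][j] + gap, h[i][j-1] + gap)
                    best = max(best, h[i][j])
            return best

        # Renaming variables does not change the opcode stream, which is why
        # low-level tokens resist simple disguises.
        p1 = ["ldarg.0", "ldarg.1", "add", "stloc.0", "ldloc.0", "ret"]
        p2 = ["ldarg.1", "ldarg.0", "add", "stloc.0", "ldloc.0", "ret"]
        print("local alignment score:", local_alignment(p1, p2))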

  8. Sparsey^TM: Spatiotemporal Event Recognition via Deep Hierarchical Sparse Distributed Codes

    Directory of Open Access Journals (Sweden)

    Gerard J Rinkus

    2014-12-01

    Full Text Available The visual cortex’s hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger-scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field (which we equate with the cortical macrocolumn, or mac) at each level. In localism, each represented feature/concept/event (hereinafter item) is coded by a single unit. The model we describe, Sparsey, is hierarchical as well but, crucially, it uses sparse distributed coding (SDC) in every mac at all levels. In SDC, each represented item is coded by a small subset of the mac’s units. The SDCs of different items can overlap, and the size of the overlap between items can be used to represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model’s core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model’s storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to huge (Big Data) problems. A 2010 paper described a non-hierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles like progressive critical periods, dynamic modulation of principal cells’ activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, and a novel method of time-warp-invariant recognition, and we report results showing learning/recognition of

  9. Distributed Space-Time Block Coded Transmission with Imperfect Channel Estimation: Achievable Rate and Power Allocation

    Directory of Open Access Journals (Sweden)

    Sonia Aïssa

    2008-05-01

    Full Text Available This paper investigates the effects of channel estimation error at the receiver on the achievable rate of distributed space-time block coded transmission. We consider that multiple transmitters cooperate to send the signal to the receiver and derive lower and upper bounds on the mutual information of distributed space-time block codes (D-STBCs) when the channel gains and channel estimation error variances pertaining to different transmitter-receiver links are unequal. Then, assessing the gap between these two bounds, we provide a limiting value that upper bounds the latter at any input transmit powers, and also show that the gap is minimum if the receiver can estimate the channels of different transmitters with the same accuracy. We further investigate positioning the receiving node such that the mutual information bounds of D-STBCs and their robustness to the variations of the subchannel gains are maximum, as long as the summation of these gains is constant. Furthermore, we derive the optimum power transmission strategy to achieve the outage capacity lower bound of D-STBCs under arbitrary numbers of transmit and receive antennas, and provide closed-form expressions for this capacity metric. Numerical simulations are conducted to corroborate our analysis and quantify the effects of imperfect channel estimation.

  10. Calculation of pellet radial power distributions with a Monte Carlo burnup code

    International Nuclear Information System (INIS)

    Suzuki, Motomu; Yamamoto, Toru; Nakata, Tetsuo

    2010-01-01

    The Japan Nuclear Energy Safety Organization (JNES) has been working on an irradiation test program of high-burnup MOX fuel at the Halden Boiling Water Reactor (HBWR). MOX and UO2 fuel rods had been irradiated up to about 64 GWd/t (rod avg.) as a Japanese utilities research program (1st phase), and using those fuel rods, in-situ measurement of fuel pellet centerline temperature was done during the 2nd phase of irradiation as the JNES test program. As part of the analysis of the temperature data, power distributions in the pellet radial direction were analyzed by using the Monte Carlo burnup code MVP-BURN. In addition, the calculated results of the deterministic burnup codes SRAC and PLUTON for the same problem were compared with those of MVP-BURN to evaluate their accuracy. Burnup calculations with an assembly model were performed by using MVP-BURN and those with a pin cell model by using SRAC and PLUTON. The cell pitch and, therefore, the fuel-to-moderator ratio in the pin cell calculation were determined from the comparison of neutron energy spectra with those of MVP-BURN. The fuel pellet radial distributions of burnup and fission reaction rates at the end of the 1st phase irradiation were compared between the three codes. The MVP-BURN calculation results show a large peaking in the burnup and fission rates in the pellet outer region for the UO2 and MOX pellets. The SRAC calculations give results very close to those of MVP-BURN. On the other hand, the PLUTON calculations show larger burnup for the UO2 and lower burnup for the MOX pellets in the pellet outer region than those of MVP-BURN, which leads to larger fission rates for the UO2 and lower fission rates for the MOX pellets, respectively. (author)

  11. Agent paradigm and services technology for distributed Information Sources

    Directory of Open Access Journals (Sweden)

    Hakima Mellah

    2011-10-01

    Full Text Available The complexity of information arises from interacting information sources (ISs), and could be better exploited with respect to the relevance of information. In a distributed IS system, relevant information has content that is connected with other contents in the information network, and is used for a certain purpose. The key point of the proposed model is to contribute to information system agility according to a three-dimensional view involving the content, the use and the structure. This reflects the relevance of information complexity and of effective methodologies, through the self-organization principle, to manage that complexity. This contribution is primarily focused on presenting some factors that lead to and trigger self-organization in a Service Oriented Architecture (SOA), and on how a self-organization mechanism can be integrated into it.

  12. Living Up to the Code's Exhortations? Social Workers' Political Knowledge Sources, Expectations, and Behaviors.

    Science.gov (United States)

    Felderhoff, Brandi Jean; Hoefer, Richard; Watson, Larry Dan

    2016-01-01

    The National Association of Social Workers' (NASW's) Code of Ethics urges social workers to engage in political action. However, little recent research has been conducted to examine whether social workers support this admonition and the extent to which they actually engage in politics. The authors gathered data from a survey of social workers in Austin, Texas, to address three questions. First, because keeping informed about government and political news is an important basis for action, the authors asked what sources of knowledge social workers use. Second, they asked what the respondents believe are appropriate political behaviors for other social workers and NASW. Third, they asked for self-reports regarding respondents' own political behaviors. Results indicate that social workers use the Internet and traditional media services to stay informed; expect other social workers and NASW to be active; and are, overall, more active than the general public in many types of political activities. The comparisons made between expectations for others and their own behaviors are interesting in their complex outcomes. Social workers should strive for higher levels of adherence to the code's urgings on political activity. Implications for future work are discussed.

  13. Simulations of hydrogen distribution experiments using the PRESCON2 and GOTHIC codes

    International Nuclear Information System (INIS)

    Nguyen, T.H.; Collins, W.M.

    1994-01-01

    The main objective of this work is to develop modelling guidelines in the use of containment models to more accurately predict hydrogen distribution in the HDR facility and to assess the ability of both lumped and distributed parameter models in predicting natural convective flows within containment. Experience gained from this exercise will be applied to present methodologies used in licensing analyses for CANDU containments. PRESCON2 simulations of hydrogen distribution experiments performed in the HDR facility show that hydrogen and helium concentrations are under-predicted at high elevations and over-predicted at low elevations. Acceptable predictions of the gas concentration are obtained in the vicinity of the release. Results obtained from GOTHIC simulations using lumped parameter models are very comparable to those predicted by PRESCON2. This indicates that lumped parameter codes tend to over-estimate the degree of mixing of fluids due to the inherent nodal atmospheric homogeneity assumption in their numerical formulation. Results obtained from the GOTHIC simulation using a simple distributed parameter model show little improvement compared to those predicted using the lumped parameter model. This indicates that a simple 3-D model will not be sufficient to make significant improvements in the results. More detailed modelling of the junction flows and finer grids should lead to more accurate results. A more detailed investigation employing finer 3-D meshes is under way. (author)

  14. RIES - Rijnland Internet Election System: A Cursory Study of Published Source Code

    Science.gov (United States)

    Gonggrijp, Rop; Hengeveld, Willem-Jan; Hotting, Eelco; Schmidt, Sebastian; Weidemann, Frederik

    The Rijnland Internet Election System (RIES) is a system designed for voting in public elections over the internet. A rather cursory scan of the source code to RIES showed a significant lack of security-awareness among the programmers which - among other things - appears to have left RIES vulnerable to near-trivial attacks. If it had not been for independent studies finding problems, RIES would have been used in the 2008 Water Board elections, possibly handling a million votes or more. While RIES was more extensively studied to find cryptographic shortcomings, our work shows that more down-to-earth secure design practices can be at least as important, and that these aspects need to be examined much sooner than right before an election.

  15. Spatial distribution of carbon sources and sinks in Canada's forests

    International Nuclear Information System (INIS)

    Chen, Jing M.; Weimin, Ju; Liu, Jane; Cihlar, Josef; Chen, Wenjun

    2003-01-01

    Annual spatial distributions of carbon sources and sinks in Canada's forests at 1 km resolution are computed for the period from 1901 to 1998 using ecosystem models that integrate remote sensing images, gridded climate, soils and forest inventory data. GIS-based fire scar maps for most regions of Canada are used to develop a remote sensing algorithm for mapping and dating forest burned areas in the 25 yr prior to 1998. These mapped and dated burned areas are used in combination with inventory data to produce a complete image of forest stand age in 1998. Empirical NPP-age relationships were used to simulate the annual variations of forest growth and carbon balance in 1 km pixels, each treated as a homogeneous forest stand. Annual CO2 flux data from four sites were used for model validation. Averaged over the period 1990-1998, the carbon source and sink map for Canada's forests shows the following features: (i) large spatial variations corresponding to the patchiness of recent fire scars and productive forests and (ii) a general south-to-north gradient of decreasing carbon sink strength and increasing source strength. This gradient results mostly from differential effects of temperature increase on growing season length, nutrient mineralization and heterotrophic respiration at different latitudes as well as from uneven nitrogen deposition. The results from the present study are compared with those of two previous studies. The comparison suggests that the overall positive effects of non-disturbance factors (climate, CO2 and nitrogen) outweighed the effects of increased disturbances in the last two decades, making Canada's forests a carbon sink in the 1980s and 1990s. Comparisons of the modeled results with tower-based eddy covariance measurements of net ecosystem exchange at four forest stands indicate that the sink values from the present study may be underestimated

  16. Impact of Distributed Generation Grid Code Requirements on Islanding Detection in LV Networks

    Directory of Open Access Journals (Sweden)

    Fabio Bignucolo

    2017-01-01

    Full Text Available The recent growing diffusion of dispersed generation in low voltage (LV) distribution networks is entailing new rules to make local generators participate in network stability. Consequently, national and international grid codes, which define the connection rules for stability and safety of electrical power systems, have been updated, requiring distributed generators and electrical storage systems to supply stabilizing contributions. In this scenario, specific attention has to be paid to the uncontrolled islanding issue, since the currently required anti-islanding protection systems, based on relays locally measuring voltage and frequency, may no longer be suitable. In this paper, the effects of different LV generators’ stabilizing functions on the interface protection performance are analysed. The study takes into account existing requirements, such as the generators’ active power regulation (according to the measured frequency) and reactive power regulation (depending on the locally measured voltage). In addition, the paper focuses on other stabilizing features under discussion, derived from the medium voltage (MV) distribution network grid codes or proposed in the literature, such as fast voltage support (FVS) and inertia emulation. Stabilizing functions have been reproduced in the DIgSILENT PowerFactory 2016 software environment, making use of its native programming language. Later, they are tested both alone and together, aiming to obtain a comprehensive analysis of their impact on the anti-islanding protection effectiveness. Through dynamic simulations in several network scenarios the paper demonstrates the detrimental impact that such stabilizing regulations may have on loss-of-main protection effectiveness, leading to an increased risk of unintentional islanding.
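
    As a rough illustration of the two existing requirements mentioned above, the following Python sketch implements an over-frequency active power curtailment P(f) and a voltage-dependent reactive power characteristic Q(V); all thresholds and slopes are invented for the demo and are not taken from any specific grid code:

        def active_power_setpoint(f_hz, p_nominal, f_start=50.2, f_max=51.5):
            # Curtail active power linearly once frequency exceeds f_start (Hz).
            if f_hz <= f_start:
                return p_nominal
            if f_hz >= f_max:
                return 0.0
            return p_nominal * (1.0 - (f_hz - f_start) / (f_max - f_start))

        def reactive_power_setpoint(v_pu, q_max, v_low=0.92, v_high=1.08):
            # Inject reactive power below v_low, absorb above v_high, linear between.
            if v_pu <= v_low:
                return q_max
            if v_pu >= v_high:
                return -q_max
            mid = (v_low + v_high) / 2.0
            return -q_max * (v_pu - mid) / ((v_high - v_low) / 2.0)

        print(active_power_setpoint(50.7, p_nominal=10.0))   # unit backs off to ~6.2
        print(reactive_power_setpoint(1.05, q_max=4.0))      # absorbs 2.5 (per unit)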

  17. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    Science.gov (United States)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data

  18. CodeRAnts: A recommendation method based on collaborative searching and ant colonies, applied to reusing of open source code

    Directory of Open Access Journals (Sweden)

    Isaac Caicedo-Castro

    2014-01-01

    Full Text Available This paper presents CodeRAnts, a new recommendation method based on a collaborative searching technique and inspired by the ant colony metaphor. This method aims to fill a gap in the current state of the art regarding recommender systems for software reuse, in which prior works present two problems: first, recommender systems based on these works cannot learn from the collaboration of programmers; second, assessments carried out on these systems report low precision and recall measures, and in some of them these metrics have not been evaluated at all. The work presented in this paper contributes a recommendation method which solves these problems.
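
    A toy Python sketch of the general ant-colony recommendation idea, where trails reinforced by programmers' accepted results guide later searches; the class, parameters and update rules below are invented for illustration and are not the paper's actual CodeRAnts algorithm:

        from collections import defaultdict

        class AntColonyRecommender:
            def __init__(self, evaporation=0.1, deposit=1.0):
                self.pheromone = defaultdict(float)   # (term, snippet_id) -> strength
                self.evaporation = evaporation
                self.deposit = deposit

            def reinforce(self, term, snippet_id):
                # A programmer accepted this snippet for this query term.
                self.pheromone[(term, snippet_id)] += self.deposit

            def evaporate(self):
                # Trails decay so stale recommendations fade over time.
                for key in list(self.pheromone):
                    self.pheromone[key] *= (1.0 - self.evaporation)

            def recommend(self, term, top_k=3):
                scored = [(s, p) for (t, s), p in self.pheromone.items() if t == term]
                return sorted(scored, key=lambda x: -x[1])[:top_k]

        r = AntColonyRecommender()
        r.reinforce("parse csv", "snippet-42")
        r.reinforce("parse csv", "snippet-42")
        r.reinforce("parse csv", "snippet-17")
        print(r.recommend("parse csv"))   # snippet-42 ranks first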

  19. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named "XSOR". The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore the phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms

  20. Hydrogen distribution analysis for CANDU 6 containment using the GOTHIC containment analysis code

    International Nuclear Information System (INIS)

    Nguyen, T.H.; Collins, W.M.

    1995-01-01

    Hydrogen may be generated in the reactor core by the zircaloy-steam reaction for a postulated loss of coolant accident (LOCA) scenario with loss of emergency core cooling (ECC). It is important to predict hydrogen distribution within containment in order to determine if flammable mixtures exist. This information is required to determine the best locations in containment for the placement of mitigation devices such as igniters and recombiners. For large break loss of coolant accidents, hydrogen is released after the break flow has subsided. Following this period of high discharge, the flow in the containment building undergoes a transition from forced flow to buoyancy driven flow (particularly when local air coolers (LACs) are not credited). One-dimensional (lumped parameter) computer codes are applicable during the initial period when a high degree of mixing occurs due to the forced flow generated by the break. However, during the post-blowdown phase the assumption of homogeneity becomes less accurate, and it is necessary to employ three-dimensional codes to capture local effects. This is particularly important for purely buoyant flows, which may exhibit stratification effects. In the present analysis a three-dimensional model of CANDU 6 containment was constructed with the GOTHIC computer code using a relatively coarse mesh, adequate to capture the salient features of the flow during the blowdown and hydrogen release periods. A 3D grid representation was employed for that portion of containment in which the primary flow (LOCA and post-LOCA) was deemed to occur. The remainder of containment was represented by lumped nodes. The results of the analysis indicate that flammable concentrations exist for several minutes in the vicinity of the break and in the steam generator enclosure. This is due to the fact that the hydrogen released from the break is primarily directed upwards into the steam generator enclosure due to buoyancy effects. Once hydrogen production ends

  1. Calculation Of Fuel Burnup And Radionuclide Inventory In The Syrian Miniature Neutron Source Reactor Using The GETERA Code

    International Nuclear Information System (INIS)

    Khattab, K.; Dawahra, S.

    2011-01-01

    Calculations of the fuel burnup and radionuclide inventory in the Syrian Miniature Neutron Source Reactor (MNSR) after 10 years (the reactor core expected life) of reactor operation are presented in this paper using the GETERA code. The code is used to calculate the fuel group constants and the infinite multiplication factor versus the reactor operating time for 10, 20, and 30 kW operating power levels. The amounts of uranium burnt up and plutonium produced in the reactor core, the concentrations of the most important fission product and actinide radionuclides accumulated in the reactor core, and the total radioactivity of the reactor core were calculated using the GETERA code as well. It is found that the GETERA code is better suited than the WIMSD4 code for fuel burnup calculations in the MNSR reactor since it is newer, has a larger isotope library, and is more accurate. (author)

  2. Improved Side Information Generation for Distributed Video Coding by Exploiting Spatial and Temporal Correlations

    Directory of Open Access Journals (Sweden)

    Ye Shuiming

    2009-01-01

    Full Text Available Distributed video coding (DVC) is a video coding paradigm allowing low complexity encoding for emerging applications such as wireless video surveillance. Side information (SI) generation is a key function in the DVC decoder, and plays a key role in determining the performance of the codec. This paper proposes an improved SI generation for DVC, which exploits both spatial and temporal correlations in the sequences. Partially decoded Wyner-Ziv (WZ) frames, based on initial SI by motion compensated temporal interpolation, are exploited to improve the performance of the whole SI generation. More specifically, an enhanced temporal frame interpolation is proposed, including motion vector refinement and smoothing, optimal compensation mode selection, and a new matching criterion for motion estimation. The improved SI technique is also applied to a new hybrid spatial and temporal error concealment scheme to conceal errors in WZ frames. Simulation results show that the proposed scheme can achieve up to 1.0 dB improvement in rate-distortion performance in WZ frames for video with high motion, when compared to state-of-the-art DVC. In addition, both the objective and perceptual qualities of the corrupted sequences are significantly improved by the proposed hybrid error concealment scheme, outperforming both spatial and temporal concealments alone.

  3. Distribution and chemical coding of neurons in intramural ganglia of the porcine urinary bladder trigone.

    Directory of Open Access Journals (Sweden)

    Zenon Pidsudko

    2004-03-01

    Full Text Available This study presents the distribution and chemical coding of neurons in the porcine intramural ganglia of the urinary bladder trigone (IG-UBT), demonstrated using combined retrograde tracing and double-labelling immunohistochemistry. The retrograde fluorescent tracer Fast Blue (FB) was injected into the wall of both the left and right side of the bladder trigone during laparotomy performed under pentobarbital anaesthesia. Ten-μm-thick cryostat sections were processed for double-labelling immunofluorescence with antibodies against tyrosine hydroxylase (TH), dopamine beta-hydroxylase (DBH), neuropeptide Y (NPY), somatostatin (SOM), galanin (GAL), vasoactive intestinal polypeptide (VIP), nitric oxide synthase (NOS), calcitonin gene-related peptide (CGRP), substance P (SP), Leu5-enkephalin (LENK) and choline acetyltransferase (ChAT). IG-UBT neurons formed characteristic clusters (from a few to tens of neuronal cells) found under the visceral peritoneum or in the outer muscular layer. Immunohistochemistry revealed four main populations of IG-UBT neurons: SOM- (ca. 35%), SP- (ca. 32%), ChAT- and NPY-immunoreactive (-IR) (ca. 23%), as well as non-adrenergic non-cholinergic nerve cells (ca. 6%). This study has demonstrated a relatively large population of differently coded IG-UBT neurons, which constitute an important element of the complex neuro-endocrine system involved in the regulation of porcine urogenital organ function.

  4. Web Application to Monitor Logistics Distribution of Disaster Relief Using the CodeIgniter Framework

    Science.gov (United States)

    Jamil, Mohamad; Ridwan Lessy, Mohamad

    2018-03-01

    Disaster management is the responsibility of the central government and local governments. The principles of disaster management are, among others, to be quick and precise, to set priorities, and to act with coordination, cohesion, efficiency and effectiveness. The help most needed by affected communities is logistical assistance covering people’s everyday needs, such as food, instant noodles, fast food, blankets, mattresses etc. Logistical assistance is needed for disaster management, especially in times of disasters. The supply of logistical assistance must be timely and delivered to the right location, target, quality, quantity, and needs. The purpose of this study is to make a web application to monitor logistics distribution of disaster relief using the CodeIgniter framework. Through this application, the mechanisms of aid delivery from and to the disaster site will be easily controlled.

  5. Web Application To Monitor Logistics Distribution of Disaster Relief Using the CodeIgniter Framework

    Directory of Open Access Journals (Sweden)

    Mohamad Jamil

    2017-10-01

    Full Text Available Disaster management is the responsibility of the central government and local governments. The principles of disaster management are, among others, to be quick and precise, to set priorities, and to act with coordination, cohesion, efficiency and effectiveness. The help most needed by affected communities is logistical assistance covering people's everyday needs, such as food, instant noodles, fast food, blankets, mattresses etc. Logistical assistance is needed for disaster management, especially in times of disasters. The supply of logistical assistance must be timely and delivered to the right location, target, quality, quantity, and needs. The purpose of this study is to make a web application to monitor logistics distribution of disaster relief using the CodeIgniter framework. Through this application, the mechanisms of aid delivery from and to the disaster site will be easily controlled.

  6. Analysis of a distributed pulse power system using a circuit analysis code

    International Nuclear Information System (INIS)

    Hoeft, L.O. (BDM Corp., Albuquerque, NM)

    1979-01-01

    A sophisticated computer code (SCEPTRE), intended for the analysis of electronic circuits, was used to evaluate the performance of a large flash x-ray machine. This device was considered to be a transmission line whose impedance varied with position. This distributed system was modeled by lumped parameter sections with time constants of 1 ns. The model was used to interpret voltage, current, and radiation measurements in terms of diode performance. The effects of tube impedance, diode model, switch behavior, and potential geometric modifications were determined. The principal conclusions were that, since radiation output depends strongly on voltage, diode impedance was much more important than the other parameters, and that the charge voltage must be accurately known

  7. Power distribution and fuel depletion calculation for a PWR, using LEOPARD and CITATION codes

    International Nuclear Information System (INIS)

    Batista, J.L.

    1982-01-01

    By modifying LEOPARD, a new program, LEOCIT, has been developed in which additional subroutines prepare cross-section libraries in 1, 2 or 4 energy groups and subsequently record these on disc or tape in a format appropriate for direct input to the CITATION code. Use of LEOCIT in conjunction with CITATION is demonstrated by simulating the first depletion cycle of Angra Unit 1. In these calculations two energy groups are used in quarter-core X-Y geometry to give the soluble boron curve, the fuel depletion and the point-to-point power distribution in Angra 1. Finally, relevant results obtained here are compared with those published by Westinghouse, CNEN and Furnas, and recommendations are made to improve the system of neutronic calculation developed in this work. (Author) [pt

  8. Random Linear Network Coding is Key to Data Survival in Highly Dynamic Distributed Storage

    DEFF Research Database (Denmark)

    Sipos, Marton A.; Fitzek, Frank; Roetter, Daniel Enrique Lucani

    2015-01-01

    Distributed storage solutions have become widespread due to their ability to store large amounts of data reliably across a network of unreliable nodes, by employing repair mechanisms to prevent data loss. Conventional systems rely on static designs with a central control entity to oversee and control the repair process. Given the large costs for maintaining and cooling large data centers, our work proposes and studies the feasibility of a fully decentralized system that can store data even on unreliable and, sometimes, unavailable mobile devices. This imposes new challenges on the design, as the number of available nodes varies greatly over time and keeping track of the system's state becomes unfeasible. As a consequence, conventional erasure correction approaches are ill-suited for maintaining data integrity. In this highly dynamic context, random linear network coding (RLNC) provides
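
    A small numpy sketch of why random coding helps such a system survive churn: nodes hold random GF(2) combinations of the original fragments, and the data is recoverable from any subset whose coefficient matrix has full rank. Everything below is an illustrative assumption; practical RLNC schemes typically work over GF(2^8) and recode during repair:

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(5)
        k = 4                                            # original data fragments
        data = rng.integers(0, 2, size=(k, 16), dtype=np.uint8)

        def gf2_rank(m):
            # Gaussian elimination over GF(2).
            m, rank = m.copy() % 2, 0
            for col in range(m.shape[1]):
                pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
                if pivot is None:
                    continue
                m[[rank, pivot]] = m[[pivot, rank]]
                for r in range(m.shape[0]):
                    if r != rank and m[r, col]:
                        m[r] ^= m[rank]
                rank += 1
            return rank

        coeffs = rng.integers(0, 2, size=(8, k), dtype=np.uint8)   # 8 coded pieces
        coded = coeffs.dot(data) % 2                               # stored on 8 nodes

        decodable = sum(gf2_rank(coeffs[list(s)]) == k
                        for s in combinations(range(8), k))
        print(f"{decodable} of 70 four-node subsets can rebuild the data")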

  9. High performance reconciliation for continuous-variable quantum key distribution with LDPC code

    Science.gov (United States)

    Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua

    2015-03-01

    Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the string resulting from transmission over the quantum channel between two users. However, the efficiency and speed of previous reconciliation algorithms are low. These problems limit the secure communication distance and the secure key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm employing a well-structured decoding scheme based on low density parity-check (LDPC) codes. The complexity of the proposed algorithm is reduced considerably. By using a graphics processing unit (GPU) device, our method may reach a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest level and paves the way to high-speed CV-QKD.
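
    To make the role of the parity-check code concrete, here is a minimal numpy sketch of syndrome-based reconciliation: Alice discloses only the syndrome of her string, and Bob corrects his correlated string against it. The tiny matrix and brute-force search are assumptions for illustration; a CV-QKD reconciler uses a large sparse LDPC matrix decoded with belief propagation:

        import itertools
        import numpy as np

        H = np.array([[1, 1, 0, 1, 0, 0],      # toy parity-check matrix
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

        def syndrome(H, x):
            return H.dot(x) % 2

        rng = np.random.default_rng(0)
        x = rng.integers(0, 2, size=6, dtype=np.uint8)    # Alice's sifted bits
        e = np.zeros(6, dtype=np.uint8); e[2] = 1         # channel discrepancy
        y = (x + e) % 2                                   # Bob's correlated bits

        target = (syndrome(H, x) + syndrome(H, y)) % 2    # equals H @ e (mod 2)

        def min_weight_error(H, target, n):
            # Brute-force the lowest-weight pattern matching the syndrome.
            for w in range(n + 1):
                for idx in itertools.combinations(range(n), w):
                    cand = np.zeros(n, dtype=np.uint8)
                    cand[list(idx)] = 1
                    if np.array_equal(syndrome(H, cand), target):
                        return cand

        x_hat = (y + min_weight_error(H, target, 6)) % 2
        assert np.array_equal(x_hat, x)                   # Bob recovers Alice's string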

  10. A proposed metamodel for the implementation of object oriented software through the automatic generation of source code

    Directory of Open Access Journals (Sweden)

    CARVALHO, J. S. C.

    2008-12-01

    Full Text Available During the development of software, one of the most visible risks and perhaps the biggest implementation obstacle relates to time management. Delivery deadlines for software versions must be met, but this is not always possible, often due to delays in coding. This paper presents a metamodel for software implementation, which gives rise to a development tool for the automatic generation of source code, in order to make any development pattern transparent to the programmer, significantly reducing the time spent coding the artifacts that make up the software.
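
    A minimal sketch of the idea in Python, with a dictionary standing in for the metamodel instance (the names and structure below are invented; the paper's metamodel targets object-oriented patterns more generally): the generator emits the repetitive source a programmer would otherwise write by hand.

        model = {
            "class": "Customer",
            "attributes": [("name", "str"), ("email", "str"), ("age", "int")],
        }

        def generate_class(model):
            # Emit a class with a boilerplate constructor from the "metamodel".
            lines = [f"class {model['class']}:"]
            args = ", ".join(f"{n}: {t}" for n, t in model["attributes"])
            lines.append(f"    def __init__(self, {args}):")
            for name, _ in model["attributes"]:
                lines.append(f"        self.{name} = {name}")
            return "\n".join(lines)

        print(generate_class(model))   # the generated source-code artifact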

  11. A versatile palindromic amphipathic repeat coding sequence horizontally distributed among diverse bacterial and eucaryotic microbes

    Directory of Open Access Journals (Sweden)

    Glass John I

    2010-07-01

    repeat may be disseminated by HGT and intra-genomic shuffling. Conclusions We describe novel features of PARCELs (Palindromic Amphipathic Repeat Coding ELements, a set of widely distributed repeat protein domains and coding sequences that were likely acquired through HGT by diverse unicellular microbes, further mobilized and diversified within genomes, and co-opted for expression in the membrane proteome of some taxa. Disseminated by multiple gene-centric vehicles, ORFs harboring these elements enhance accessory gene pools as part of the "mobilome" connecting genomes of various clades, in taxa sharing common niches.

  12. 99Tc in the environment. Sources, distribution and methods

    International Nuclear Information System (INIS)

    Garcia-Leon, Manuel

    2005-01-01

    99Tc is a β-emitter (Emax = 294 keV) with a very long half-life (T1/2 = 2.11 × 10^5 y). It is mainly produced in the fission of 235U and 239Pu at a yield of about 6%. This yield, together with its long half-life, makes it a significant nuclide in the whole nuclear fuel cycle, from which it can be introduced into the environment at different rates depending on the cycle step. A gross estimation shows that, adding all the possible sources, at least 2000 TBq had been released into the environment up to 2000, and that up to the middle of the nineties of the last century some 64000 TBq had been produced worldwide. Nuclear explosions have liberated some 160 TBq into the environment. In this work, the environmental distribution of 99Tc as well as the methods for its determination are discussed. Emphasis is put on the environmental relevance of 99Tc, mainly with regard to the future committed radiation dose received by the population and to the problem of nuclear waste management. Its determination at environmental levels is a challenging task. For that reason, special mention is made of the mass spectrometric methods for its measurement. (author)
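
    For a feel for the numbers, a short back-of-the-envelope computation in Python of the mass corresponding to the roughly 2000 TBq quoted above, using only the half-life given in the record; the molar mass of about 99 g/mol and the year-to-second conversion are the obvious assumptions:

        import math

        T_HALF_S = 2.11e5 * 3.156e7        # half-life: 2.11e5 years in seconds
        DECAY_CONST = math.log(2) / T_HALF_S

        activity_bq = 2000e12              # ~2000 TBq released up to the year 2000
        atoms = activity_bq / DECAY_CONST  # A = lambda * N  =>  N = A / lambda
        grams = atoms / 6.022e23 * 99.0    # Avogadro's number; M(99Tc) ~ 99 g/mol

        print(f"~{grams / 1e6:.1f} tonnes of 99Tc")   # ~3.2 t: a long half-life
                                                      # means low activity per gram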

  13. HELIOS: An Open-source, GPU-accelerated Radiative Transfer Code for Self-consistent Exoplanetary Atmospheres

    Science.gov (United States)

    Malik, Matej; Grosheintz, Luc; Mendonça, João M.; Grimm, Simon L.; Lavie, Baptiste; Kitzmann, Daniel; Tsai, Shang-Min; Burrows, Adam; Kreidberg, Laura; Bedell, Megan; Bean, Jacob L.; Stevenson, Kevin B.; Heng, Kevin

    2017-02-01

    We present the open-source radiative transfer code named HELIOS, which is constructed for studying exoplanetary atmospheres. In its initial version, the model atmospheres of HELIOS are one-dimensional and plane-parallel, and the equation of radiative transfer is solved in the two-stream approximation with nonisotropic scattering. A small set of the main infrared absorbers is employed, computed with the opacity calculator HELIOS-K and combined using a correlated-k approximation. The molecular abundances originate from validated analytical formulae for equilibrium chemistry. We compare HELIOS with the work of Miller-Ricci & Fortney using a model of GJ 1214b, and perform several tests, where we find: model atmospheres with single-temperature layers struggle to converge to radiative equilibrium; k-distribution tables constructed with ≳ 0.01 cm^-1 resolution in the opacity function (≲ 10^3 points per wavenumber bin) may result in errors of ≳ 1%-10% in the synthetic spectra; and a diffusivity factor of 2 approximates well the exact radiative transfer solution in the limit of pure absorption. We construct “null-hypothesis” models (chemical equilibrium, radiative equilibrium, and solar elemental abundances) for six hot Jupiters. We find that the dayside emission spectra of HD 189733b and WASP-43b are consistent with the null hypothesis, while the null-hypothesis models consistently underpredict the observed fluxes of WASP-8b, WASP-12b, WASP-14b, and WASP-33b. We demonstrate that our results are somewhat insensitive to the choice of stellar models (blackbody, Kurucz, or PHOENIX) and metallicity, but are strongly affected by higher carbon-to-oxygen ratios. The code is publicly available as part of the Exoclimes Simulation Platform (exoclime.net).
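
    The diffusivity-factor remark can be checked numerically in a few lines. In the pure-absorption limit, the exact flux transmission of isotropic radiation through a layer of optical depth tau is 2*E3(tau), which a two-stream scheme replaces by exp(-D*tau); the Python sketch below (sample depths arbitrary) compares the two for D = 2:

        import numpy as np
        from scipy.special import expn

        tau = np.array([0.01, 0.1, 0.5, 1.0, 2.0])   # sample optical depths
        exact = 2.0 * expn(3, tau)                   # 2*E3(tau): exact angular integral
        approx = np.exp(-2.0 * tau)                  # two-stream, diffusivity D = 2

        for t, ex, ap in zip(tau, exact, approx):
            print(f"tau={t:4.2f}  exact={ex:.4f}  D=2 approx={ap:.4f}")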

  14. CDFMC: a program that calculates the fixed neutron source distribution for a BWR using Monte Carlo

    International Nuclear Information System (INIS)

    Gomez T, A.M.; Xolocostli M, J.V.; Palacios H, J.C.

    2006-01-01

    The three-dimensional neutron flux calculation using the synthesis method requires the determination of the neutron flux in two two-dimensional configurations as well as in a one-dimensional one. Most standard guides for the calculation of the neutron flux or fluence in the vessel of a nuclear reactor place special emphasis on the appropriate calculation of the fixed neutron source that should be provided to the transport code used, so that sufficiently accurate flux values are obtained. The reactor core assembly configuration is based on X-Y geometry; however, the problem considered is solved in R-θ geometry, so an appropriate mapping is necessary to find the source term associated with the R-θ intervals starting from a source distribution in rectangular coordinates. To develop the CDFMC computer program (Source Distribution Calculation using Monte Carlo), it was necessary to develop a mapping approach independent of those found in the literature. The mesh-overlapping method used here is based on a technique of random point generation, commonly known as the Monte Carlo technique. Although the 'randomness' of this technique implies errors in the calculations, it is well known that increasing the number of randomly generated points used to measure an area or some other quantity of interest increases the precision of the method. In the particular case of the CDFMC computer program, the developed technique shows good general behavior when a considerably large number of points is used (greater than or equal to a hundred thousand), which ensures calculation errors of the order of 1%. (Author)
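
    A compact Python sketch of the mesh-overlap sampling just described: the fraction of a rectangular (X-Y) source cell falling in each (R, θ) interval is estimated by throwing random points into the cell, and with about 10^5 points the statistical error is of the order of 1%, as the record notes. Cell and bin edges below are made up for the demo:

        import math
        import random

        def cell_to_polar_fractions(x0, x1, y0, y1, r_edges, th_edges, n=100_000):
            rng = random.Random(1)
            frac = [[0.0] * (len(th_edges) - 1) for _ in r_edges[:-1]]
            for _ in range(n):
                x, y = rng.uniform(x0, x1), rng.uniform(y0, y1)
                r, th = math.hypot(x, y), math.atan2(y, x)
                for i in range(len(r_edges) - 1):
                    if r_edges[i] <= r < r_edges[i + 1]:
                        for j in range(len(th_edges) - 1):
                            if th_edges[j] <= th < th_edges[j + 1]:
                                frac[i][j] += 1.0 / n
            return frac   # share of the cell's source strength per polar interval

        print(cell_to_polar_fractions(0.0, 1.0, 0.0, 1.0,
                                      r_edges=[0.0, 0.7, 1.5],
                                      th_edges=[0.0, math.pi / 4, math.pi / 2]))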

  15. Development of a computer program to determine the pulse-height distribution in a gamma-ray detector from an arbitrary geometry source - feasibility study

    International Nuclear Information System (INIS)

    Currie, G.D.; Marshall, M.

    1989-03-01

    The feasibility of developing a computer program suitable for evaluating the pulse-height spectrum in a gamma-ray detector from a complex geometry source has been examined. A selection of relevant programs, Monte Carlo radiation transport codes, has been identified and their applicability to this study discussed. It is proposed that the computation be performed in two parts: the evaluation of the photon fluence at the detector using a photon transport code, and the calculation of the pulse-height distribution from this spectrum using response functions determined with an electron-photon transport code. The two transport codes selected to perform this procedure are MCNP (Monte Carlo Neutron Photon code) and EGS4 (Electron Gamma Shower code). (Author)
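
    In miniature, the proposed two-part procedure amounts to a matrix-vector product: the photon transport step supplies a fluence spectrum at the detector, and pre-computed response functions (one column per incident energy) map it to a pulse-height distribution. All numbers below are placeholder assumptions:

        import numpy as np

        fluence = np.array([0.0, 0.3, 0.5, 0.2])     # photons per incident energy bin
        response = np.array([[0.9, 0.2, 0.1, 0.05],  # response[i, j]: probability that
                             [0.1, 0.7, 0.2, 0.10],  # a photon in energy bin j yields
                             [0.0, 0.1, 0.6, 0.25],  # a pulse in channel i
                             [0.0, 0.0, 0.1, 0.60]])

        pulse_height = response @ fluence            # pulse-height distribution
        print(pulse_height)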

  16. Superdense Coding with GHZ and Quantum Key Distribution with W in the ZX-calculus

    Directory of Open Access Journals (Sweden)

    Anne Hillebrand

    2012-10-01

    Full Text Available Quantum entanglement is a key resource in many quantum protocols, such as quantum teleportation and quantum cryptography. Yet entanglement makes protocols presented in Dirac notation difficult to verify. This is why Coecke and Duncan have introduced a diagrammatic language for quantum protocols, called the ZX-calculus. This diagrammatic notation is both intuitive and formally rigorous. It is a simple, graphical, high level language that emphasises the composition of systems and naturally captures the essentials of quantum mechanics. In the author's MSc thesis it has been shown for over 25 quantum protocols that the ZX-calculus provides a relatively easy and more intuitive presentation. Moreover, the author embarked on the task to apply categorical quantum mechanics on quantum security; earlier works did not touch anything but Bennett and Brassard's quantum key distribution protocol, BB84. Superdense coding with the Greenberger-Horne-Zeilinger state and quantum key distribution with the W-state are presented in the ZX-calculus in this paper.

  17. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    Energy Technology Data Exchange (ETDEWEB)

    AlRashidi, M.R., E-mail: malrash2002@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait); AlHajri, M.F., E-mail: mfalhajri@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait)

    2011-10-15

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach consistency in detecting optimal or near optimal solution. Results are compared with those of Sequential Quadratic Programming.
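
    A heavily simplified Python sketch of the joint siting-and-sizing search (the loss surrogate, the bounds, and the rounding trick for the discrete bus index are all assumptions for illustration; the paper evaluates real losses with a radial power flow and uses its own constraint-handling mechanism):

        import random

        def losses(bus, size_mw):
            # Hypothetical smooth stand-in for real power losses on a 69-bus feeder.
            return (size_mw - 1.8) ** 2 + 0.01 * abs(bus - 61)

        def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            rng = random.Random(3)
            pos = [[rng.uniform(1, 69), rng.uniform(0.1, 5.0)]
                   for _ in range(n_particles)]
            vel = [[0.0, 0.0] for _ in range(n_particles)]
            pbest = [p[:] for p in pos]
            gbest = min(pbest, key=lambda p: losses(round(p[0]), p[1]))
            for _ in range(iters):
                for i, p in enumerate(pos):
                    for d in range(2):
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * rng.random() * (pbest[i][d] - p[d])
                                     + c2 * rng.random() * (gbest[d] - p[d]))
                        p[d] += vel[i][d]
                    p[0] = min(max(p[0], 1), 69)      # bus index stays in range
                    p[1] = min(max(p[1], 0.1), 5.0)   # DG size stays within limits
                    if (losses(round(p[0]), p[1])
                            < losses(round(pbest[i][0]), pbest[i][1])):
                        pbest[i] = p[:]
                gbest = min(pbest, key=lambda p: losses(round(p[0]), p[1]))
            return round(gbest[0]), gbest[1]          # (bus, size in MW)

        print(pso())   # converges near bus 61 with ~1.8 MW for this toy surface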

  18. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    International Nuclear Information System (INIS)

    AlRashidi, M.R.; AlHajri, M.F.

    2011-01-01

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach consistency in detecting optimal or near optimal solution. Results are compared with those of Sequential Quadratic Programming.

  19. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes

    Science.gov (United States)

    Zaker, Neda; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S.

    2016-01-01

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code — MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes. PACS number(s): 87.56.bg PMID:27074460
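
    For reference, gL(r) is the radial dose function of the standard AAPM TG-43 formalism, which enters the dose-rate equation (written here in LaTeX):

        \dot{D}(r,\theta) = S_K \, \Lambda \,
            \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)} \, g_L(r) \, F(r,\theta),
        \qquad r_0 = 1~\text{cm}, \quad \theta_0 = \pi/2

    Here S_K is the air-kerma strength, Λ the dose-rate constant, G_L the line-source geometry function, and F(r,θ) the 2D anisotropy function, so discrepancies in gL(r) propagate multiplicatively into the computed dose rate.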

  20. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    Science.gov (United States)

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-03-08

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.

  1. ALOAD - a code to determine the concentrated forces equivalent with a distributed pressure field for a FEM analysis

    Directory of Open Access Journals (Sweden)

    Nicolae APOSTOLESCU

    2010-12-01

    Full Text Available The main objective of this paper is to describe a code for calculating an equivalent system of concentrated loads for a FEM analysis. The tables from the Aerodynamic Department contain the pressure field for a whole bearing surface, and integrated quantities both for the whole surface and for the fixed and mobile parts. Usually, in a FEM analysis the external loads are introduced as concentrated loads equivalent to the distributed pressure field. These concentrated forces can also be used in static tests. Commercial codes provide solutions for this problem, but what we intend to develop is a code adapted to the user’s specific needs.
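
    The core of such a tool reduces to lumping each panel's pressure resultant onto its nodes. A minimal Python sketch with equal shares per corner node follows (mesh data invented; production codes typically use consistent, shape-function-weighted lumping rather than equal shares):

        def equivalent_nodal_forces(panels, pressures):
            # panels: list of (node_ids, area); pressures: one value per panel.
            forces = {}
            for (node_ids, area), p in zip(panels, pressures):
                share = p * area / len(node_ids)   # equal share of p * A per node
                for n in node_ids:
                    forces[n] = forces.get(n, 0.0) + share
            return forces

        panels = [((1, 2, 5, 4), 0.25), ((2, 3, 6, 5), 0.25)]   # made-up mesh
        print(equivalent_nodal_forces(panels, pressures=[1200.0, 900.0]))

    The total applied force is preserved by construction, which is the property static tests care about.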

  2. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed memory highly parallel computer. Parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique to map histories to processors dynamically and to map the control process to a dedicated processor was applied. The efficiency of the parallelized code is up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)
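
    The history-mapping idea can be sketched in a few lines of Python, with a worker pool standing in for the AP1000's processors (the toy "history" and batch sizes are assumptions; MCNP4's actual implementation differs):

        import random
        from multiprocessing import Pool

        def run_histories(args):
            seed, n = args
            rng = random.Random(seed)
            # Stand-in "history": tally 1 if the particle passes a coin-flip shield.
            return sum(1 for _ in range(n) if rng.random() < 0.37)

        if __name__ == "__main__":
            batches = [(seed, 10_000) for seed in range(64)]   # independent batches
            with Pool(processes=8) as pool:
                # imap_unordered hands a batch to whichever worker frees up first,
                # i.e. histories are mapped to processors dynamically.
                tallies = list(pool.imap_unordered(run_histories, batches))
            print(sum(tallies) / (64 * 10_000))                # combined estimate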

  3. 3D scene reconstruction based on multi-view distributed video coding in the Zernike domain for mobile applications

    Science.gov (United States)

    Palma, V.; Carli, M.; Neri, A.

    2011-02-01

    In this paper a Multi-view Distributed Video Coding scheme for mobile applications is presented. Specifically, a new fusion technique between temporal and spatial side information in the Zernike moments domain is proposed. Distributed video coding introduces a flexible architecture that enables the design of very low complexity video encoders compared to their traditional counterparts. The main goal of our work is to generate at the decoder the side information that optimally blends temporal and inter-view data. Multi-view distributed coding performance strongly depends on the quality of the side information built at the decoder. To improve this quality, a spatial view compensation/prediction in the Zernike moments domain is applied. Spatial and temporal motion activity have been fused together to obtain the overall side information. The proposed method has been evaluated by rate-distortion performance for different inter-view and temporal estimation quality conditions.

  4. Calculation of spatial distribution of the EURACOS II converter source

    International Nuclear Information System (INIS)

    Santo, A.C.F. de

    1985-01-01

    The neutron spatial flux from the EURACOS (Enriched Uranium Converter Source) device is obtained and adjusted to experimental measurements. The EURACOS device is a converter source consisting of a circular plate of highly enriched uranium (90%). The converter provides an intense source of fast neutrons whose energy spectrum is close to the fission spectrum. (M.C.K.) [pt

  5. Distribution and Sources of Black Carbon in the Arctic

    Science.gov (United States)

    Qi, Ling

    The Arctic is warming at twice the global rate over recent decades. To slow down this warming trend, there is growing interest in reducing the impact from short-lived climate forcers, such as black carbon (BC), because the benefits of mitigation are seen more quickly relative to CO2 reduction. To propose efficient mitigation policies, it is imperative to improve our understanding of BC distribution in the Arctic and to identify the sources. In this dissertation, we investigate the sensitivity of BC in the Arctic, including BC concentrations in snow (BCsnow) and BC concentrations in air (BCair), to emissions, dry deposition and wet scavenging using the global 3-D chemical transport model (CTM) GEOS-Chem. By including flaring emissions, estimating dry deposition velocity using the resistance-in-series method, and including the Wegener-Bergeron-Findeisen process (WBF) in wet scavenging, simulated BCsnow in the eight Arctic sub-regions agrees with the observations within a factor of two, and simulated BCair falls within the uncertainty range of observations. Specifically, we find that natural gas flaring emissions in the Western Extreme North of Russia (WENR) strongly enhance BCsnow (by up to ∼50%) and BCair (by 20-32%) during the snow season in the so-called 'Arctic front', but have negligible impact on BC in the free troposphere. The updated dry deposition velocity over snow and ice is much larger than those used in most global CTMs and agrees better with observation. The resulting BCsnow changes marginally because of the offsetting of higher dry and lower wet deposition fluxes. In contrast, surface BCair decreases strongly due to the faster dry deposition (by 27-68%). WBF occurs when the environmental vapor pressure is in between the saturation vapor pressures of ice crystals and water drops in mixed-phase clouds. As a result, water drops evaporate and release the BC particles in them back into the interstitial air. In most CTMs, WBF is either missing or represented by a uniform and low BC

  6. Proton absorbed dose distribution in human eye simulated by SRNA-2KG code

    International Nuclear Information System (INIS)

    Ilic, R. D.; Pavlovic, R.

    2004-01-01

    The model of the Monte Carlo SRNA code is described, together with some numerical experiments showing the feasibility of using this code in proton therapy, especially for three-dimensional proton absorbed dose calculation in the human eye. (author) [sr

  7. Source coherence impairments in a direct detection direct sequence optical code-division multiple-access system.

    Science.gov (United States)

    Fsaifes, Ihsan; Lepers, Catherine; Lourdiane, Mounia; Gallion, Philippe; Beugin, Vincent; Guignard, Philippe

    2007-02-01

    We demonstrate that direct sequence optical code-division multiple-access (DS-OCDMA) encoders and decoders using sampled fiber Bragg gratings (S-FBGs) behave as multipath interferometers. In that case, chip pulses of the prime sequence codes generated by spreading in time-coherent data pulses can result from multiple reflections in the interferometers that can superimpose within a chip time duration. We show that the autocorrelation function has to be considered as the sum of complex amplitudes of the combined chip as the laser source coherence time is much greater than the integration time of the photodetector. To reduce the sensitivity of the DS-OCDMA system to the coherence time of the laser source, we analyze the use of sparse and nonperiodic quadratic congruence and extended quadratic congruence codes.
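
    A tiny numerical illustration of that statement in Python (amplitudes and phases invented): chips overlapping within one chip slot add as complex field amplitudes, so the detected peak fluctuates with the optical phases instead of adding in intensity:

        import cmath
        import random

        amplitudes = [1.0, 1.0, 1.0]                  # three overlapping chip pulses
        incoherent = sum(a ** 2 for a in amplitudes)  # intensity sum: always 3.0

        rng = random.Random(7)
        samples = []
        for _ in range(5):
            phases = [rng.uniform(0, 2 * cmath.pi) for _ in amplitudes]
            field = sum(a * cmath.exp(1j * ph) for a, ph in zip(amplitudes, phases))
            samples.append(abs(field) ** 2)           # coherent sum: anywhere in [0, 9]

        print("incoherent:", incoherent)
        print("coherent samples:", [round(s, 2) for s in samples])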

  8. Source coherence impairments in a direct detection direct sequence optical code-division multiple-access system

    Science.gov (United States)

    Fsaifes, Ihsan; Lepers, Catherine; Lourdiane, Mounia; Gallion, Philippe; Beugin, Vincent; Guignard, Philippe

    2007-02-01

    We demonstrate that direct sequence optical code-division multiple-access (DS-OCDMA) encoders and decoders using sampled fiber Bragg gratings (S-FBGs) behave as multipath interferometers. In that case, chip pulses of the prime sequence codes generated by spreading in time-coherent data pulses can result from multiple reflections in the interferometers that can superimpose within a chip time duration. We show that the autocorrelation function has to be considered as the sum of complex amplitudes of the combined chip as the laser source coherence time is much greater than the integration time of the photodetector. To reduce the sensitivity of the DS-OCDMA system to the coherence time of the laser source, we analyze the use of sparse and nonperiodic quadratic congruence and extended quadratic congruence codes.

  9. Gaze strategies can reveal the impact of source code features on the cognitive load of novice programmers

    DEFF Research Database (Denmark)

    Wulff-Jensen, Andreas; Ruder, Kevin Vignola; Triantafyllou, Evangelia

    2018-01-01

    As shown by several studies, the readability of source code to programmers is influenced by its structural and textual features. In order to assess the importance of these features, we conducted an eye-tracking experiment with programming students. To assess the readability and comprehensibility of

  10. Use of WIMS-E lattice code for prediction of the transuranic source term for spent fuel dose estimation

    International Nuclear Information System (INIS)

    Schwinkendorf, K.N.

    1996-01-01

    A recent source term analysis has shown a discrepancy between ORIGEN2 transuranic isotopic production estimates and those produced with the WIMS-E lattice physics code. Excellent agreement between relevant experimental measurements and WIMS-E was shown, thus exposing an error in the cross section library used by ORIGEN2

  11. Neutron flux distribution inside the cylindrical core of minor excess of reactivity in the IPEN/MB-01 reactor and comparison with the CITATION code and MCNP-5 code

    Energy Technology Data Exchange (ETDEWEB)

    Aredes, Vitor Ottoni; Bitelli, Ulysses d'Utra; Mura, Luiz Ernesto C.; Santos, Diogo Feliciano dos; Lima, Ana Cecilia de Souza, E-mail: ubitelli@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    This study aims to determine the distribution of the thermal neutron flux in the IPEN/MB-01 nuclear reactor core assembled in a cylindrical core configuration of minor excess of reactivity with 568 fuel rods (28 fuel rods in diameter). The thermal neutron flux at the irradiation positions is derived from the reaction rate method using gold foils. The experiment consists in inserting gold activation foils with and without cadmium cover (cadmium boxes of 0.0502 cm thickness) at several positions throughout the active core. After irradiation, the activity induced by nuclear reactions in the gold foils is assessed by gamma ray spectrometry using a high-purity germanium (HPGe) detector. Experimental results are compared to those derived from calculations performed using the three-dimensional CITATION diffusion code and the MCNP-5 code with a proper nuclear data library. The calculated neutron flux data show good agreement with experimental values in regions with little disturbance of the neutron flux, while in the neutron reflector region and near the control rods diffusion theory is not very precise. The average thermal neutron flux obtained experimentally differs from the values calculated with the CITATION and MCNP-5 codes by 1.18% and 0.84%, respectively, at a nuclear power level of 74.65 watts ± 3.28%. The average measured thermal neutron flux is 4.10 × 10^8 n/cm^2·s ± 5.25%. (author)
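
    For orientation, here is the cadmium-difference step of the reaction-rate method in a few lines of Python: the bare foil responds to thermal plus epithermal neutrons, the cadmium-covered foil to epithermal only, so the thermal flux follows from the difference. The foil atom count and the reaction rates are invented so the result lands near the flux quoted above; only the 197Au thermal cross section is a standard value:

        SIGMA_AU = 98.65e-24    # 197Au(n,gamma) cross section at 2200 m/s, cm^2
        N_AU = 5.9e19           # gold atoms in the foil (assumed)

        def thermal_flux(rate_bare, rate_cd):
            # Saturation reaction rates (reactions/s) -> thermal flux (n/cm^2/s).
            return (rate_bare - rate_cd) / (N_AU * SIGMA_AU)

        print(f"{thermal_flux(2.60e6, 0.21e6):.2e} n/cm^2/s")   # ~4.1e8, cf. record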

  12. Neutron flux distribution inside the cylindrical core of minor excess of reactivity in the IPEN/MB-01 reactor and comparison with the CITATION code and MCNP-5 code

    International Nuclear Information System (INIS)

    Aredes, Vitor Ottoni; Bitelli, Ulysses d'Utra; Mura, Luiz Ernesto C.; Santos, Diogo Feliciano dos; Lima, Ana Cecilia de Souza

    2015-01-01

    This study aims to determine the distribution of the thermal neutron flux in the IPEN/MB-01 nuclear reactor core assembled in a cylindrical core configuration of minor excess of reactivity with 568 fuel rods (28 fuel rods in diameter). The thermal neutron flux at the irradiation positions is derived from the reaction rate method using gold foils. The experiment consists in inserting gold activation foils with and without cadmium cover (cadmium boxes of 0.0502 cm thickness) at several positions throughout the active core. After irradiation, the activity induced by nuclear reactions in the gold foils is assessed by gamma ray spectrometry using a high-purity germanium (HPGe) detector. Experimental results are compared to those derived from calculations performed using the three-dimensional CITATION diffusion code and the MCNP-5 code with a proper nuclear data library. The calculated neutron flux data show good agreement with experimental values in regions with little disturbance of the neutron flux, while in the neutron reflector region and near the control rods diffusion theory is not very precise. The average thermal neutron flux obtained experimentally differs from the values calculated with the CITATION and MCNP-5 codes by 1.18% and 0.84%, respectively, at a nuclear power level of 74.65 watts ± 3.28%. The average measured thermal neutron flux is 4.10 × 10^8 n/cm^2·s ± 5.25%. (author)

  13. OFF, Open source Finite volume Fluid dynamics code: A free, high-order solver based on parallel, modular, object-oriented Fortran API

    Science.gov (United States)

    Zaghi, S.

    2014-07-01

    OFF, an open source (free software) code for performing fluid dynamics simulations, is presented. The aim of OFF is to solve, numerically, the unsteady (and steady) compressible Navier-Stokes equations of fluid dynamics by means of finite volume techniques: the research background is mainly focused on high-order (WENO) schemes for multi-fluid, multi-phase flows over complex geometries. To this purpose a highly modular, object-oriented application program interface (API) has been developed. In particular, the concepts of data encapsulation and inheritance available within the Fortran language (from the 2003 standard) have been stressed in order to represent each fluid dynamics "entity" (e.g. the conservative variables of a finite volume, its geometry, etc.) by a single object, so that a large variety of computational libraries can be easily (and efficiently) developed upon these objects. The main features of OFF can be summarized as follows. Programming language: OFF is written in standard (compliant) Fortran 2003; its design is highly modular in order to enhance simplicity of use and maintenance without compromising efficiency. Parallel frameworks supported: the development of OFF has also targeted maximum computational efficiency; the code is designed to run on shared-memory multi-core workstations and on distributed-memory clusters of shared-memory nodes (supercomputers); the code's parallelization is based on the Open Multiprocessing (OpenMP) and Message Passing Interface (MPI) paradigms. Usability, maintenance and enhancement: in order to improve the usability, maintenance and enhancement of the code, the documentation has been carefully taken into account; the documentation is built upon comprehensive comments placed directly into the source files (no external documentation files are needed); these comments are parsed by the doxygen free software, producing high-quality html and latex documentation pages; the distributed versioning system referred to as git
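
    To make the finite-volume idea above concrete, here is a minimal sketch in Python (OFF itself is Fortran 2003; this is an illustration of the technique, not OFF code): a first-order upwind finite-volume update for 1D linear advection on a periodic grid, with all grid and time-step values chosen arbitrarily for the example.

        # Minimal sketch (illustrative, not OFF): first-order upwind
        # finite-volume update for 1D linear advection, periodic boundaries.
        import numpy as np

        def advect_fv(u, speed, dx, dt, steps):
            """Advance cell averages u; assumes speed > 0 and CFL = speed*dt/dx <= 1."""
            for _ in range(steps):
                flux = speed * u                             # upwind flux per cell
                u = u - dt / dx * (flux - np.roll(flux, 1))  # conservative update
            return u

        x = np.linspace(0.0, 1.0, 100, endpoint=False)
        u0 = np.exp(-200.0 * (x - 0.5) ** 2)                 # initial Gaussian pulse
        u1 = advect_fv(u0, speed=1.0, dx=0.01, dt=0.005, steps=100)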

  14. Study of the distribution of steam plumes in the PANDA facility using CFD code

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shuanshuan [School of Physics and Engineering, Sun Yat-sen University, Guangzhou (China); Cai, Jiejin, E-mail: chiven77@hotmail.com [Sino-French Institute of Nuclear Engineering & Technology, Sun Yat-sen University, Guangzhou (China); Zhang, Huiyong [China Nuclear Power Technology Research Institute, Shenzhen 518026 (China); Yin, Huaqiang; Yang, Xingtuan [Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Tsinghua University, Beijing 100084 (China)

    2015-08-15

    Highlights: • The standard k–ε model has been verified for gas plume simulation in a large-scale volume. • The k–kₗ–ω model has been improved for gas plume simulations. • Sensitivity analyses of the computational mesh, time step and Froude number have been carried out. - Abstract: During a postulated severe accident in a light water reactor, a large amount of steam is injected into the containment through the break. This leads to increases in pressure and temperature and consequently threatens the integrity of the containment. In this study the light-gas (saturated steam) distribution in a large-scale multi-compartment volume is simulated using a CFD code. Several turbulence models are used for the simulation: the standard k–ε model, the k–kₗ–ω model, the transitional SST model, and an improved k–kₗ–ω model that takes the buoyancy effect into account. The results show that both the standard k–ε model and the improved k–kₗ–ω model with the buoyancy effect reproduce the experimental results well. The improved k–kₗ–ω model performs much better than the original k–kₗ–ω model, which neglects buoyancy, in predicting the steam distribution in the vessels, and some characteristics in the region of concern are predicted well. Sensitivity analyses of the computational mesh, time step and Froude number are also carried out.

  15. A Novel Code System for Revealing Sources of Students' Difficulties with Stoichiometry

    Science.gov (United States)

    Gulacar, Ozcan; Overton, Tina L.; Bowman, Charles R.; Fynewever, Herb

    2013-01-01

    A coding scheme is presented and used to evaluate the solutions of seventeen students working on twenty-five stoichiometry problems in a think-aloud protocol. The stoichiometry problems are evaluated as a series of sub-problems (e.g., empirical formulas, mass percent, or balancing chemical equations), and the coding scheme was used to categorize each…

  16. VULCAN: An Open-source, Validated Chemical Kinetics Python Code for Exoplanetary Atmospheres

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, Shang-Min; Grosheintz, Luc; Kitzmann, Daniel; Heng, Kevin [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Lyons, James R. [Arizona State University, School of Earth and Space Exploration, Bateman Physical Sciences, Tempe, AZ 85287-1404 (United States); Rimmer, Paul B., E-mail: shang-min.tsai@space.unibe.ch, E-mail: kevin.heng@csh.unibe.ch, E-mail: jimlyons@asu.edu [University of St. Andrews, School of Physics and Astronomy, St. Andrews, KY16 9SS (United Kingdom)

    2017-02-01

    We present an open-source and validated chemical kinetics code for studying hot exoplanetary atmospheres, which we name VULCAN. It is constructed for gaseous chemistry from 500 to 2500 K, using a reduced C–H–O chemical network with about 300 reactions. It uses eddy diffusion to mimic atmospheric dynamics and excludes photochemistry. We have provided a full description of the rate coefficients and thermodynamic data used. We validate VULCAN by reproducing chemical equilibrium and by comparing its output versus the disequilibrium-chemistry calculations of Moses et al. and Rimmer and Helling. It reproduces the models of HD 189733b and HD 209458b by Moses et al., which employ a network with nearly 1600 reactions. We also use VULCAN to examine the theoretical trends produced when the temperature–pressure profile and carbon-to-oxygen ratio are varied. Assisted by a sensitivity test designed to identify the key reactions responsible for producing a specific molecule, we revisit the quenching approximation and find that it is accurate for methane but breaks down for acetylene, because the disequilibrium abundance of acetylene is not directly determined by transport-induced quenching, but is rather indirectly controlled by the disequilibrium abundance of methane. Therefore we suggest that the quenching approximation should be used with caution and must always be checked against a chemical kinetics calculation. A one-dimensional model atmosphere with 100 layers, computed using VULCAN, typically takes several minutes to complete. VULCAN is part of the Exoclimes Simulation Platform (ESP; exoclime.net) and publicly available at https://github.com/exoclime/VULCAN.

  17. The HTM Spatial Pooler—A Neocortical Algorithm for Online Sparse Distributed Coding

    Directory of Open Access Journals (Sweden)

    Yuwei Cui

    2017-11-01

    Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.

  18. The HTM Spatial Pooler-A Neocortical Algorithm for Online Sparse Distributed Coding.

    Science.gov (United States)

    Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff

    2017-01-01

    Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
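
    A minimal sketch of the spatial pooler mechanism described in the two records above, assuming toy dimensions and update constants; it illustrates the overlap/k-winners/Hebbian cycle only and is not the reference HTM implementation:

        # Minimal sketch (illustrative, not the reference HTM implementation):
        # overlap scoring, k-winners-take-all sparsification, Hebbian update.
        import numpy as np

        rng = np.random.default_rng(0)
        n_inputs, n_columns, n_active = 256, 128, 8     # assumed toy sizes
        perm = rng.random((n_columns, n_inputs))        # synapse permanences in [0, 1]
        THRESHOLD = 0.5                                 # permanence above this = connected

        def spatial_pooler(x, inc=0.05, dec=0.02):
            connected = (perm > THRESHOLD).astype(int)
            overlap = connected @ x                     # per-column overlap with input
            winners = np.argsort(overlap)[-n_active:]   # k winning columns
            sdr = np.zeros(n_columns, dtype=int)
            sdr[winners] = 1                            # the sparse distributed representation
            # Hebbian-style update: winners strengthen synapses on active input
            # bits and weaken the rest (homeostatic mechanisms omitted).
            perm[winners] += np.where(x > 0, inc, -dec)
            np.clip(perm, 0.0, 1.0, out=perm)
            return sdr

        sdr = spatial_pooler(rng.integers(0, 2, n_inputs))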

  19. The spatial distribution of fixed mutations within genes coding for proteins

    Science.gov (United States)

    Holmquist, R.; Goodman, M.; Conroy, T.; Czelusniak, J.

    1983-01-01

    An examination has been conducted of the extensive amino acid sequence data now available for five protein families - the alpha crystallin A chain, myoglobin, alpha and beta hemoglobin, and the cytochromes c - with the goal of estimating the true spatial distribution of base substitutions within genes that code for proteins. In every case the commonly used Poisson density failed to even approximate the experimental pattern of base substitution. For the 87 species of beta hemoglobin examined, for example, the probability that the observed results came from a Poisson process was a minuscule 10⁻⁴⁴. Analogous results were obtained for the other functional families. All the data were reasonably, but not perfectly, described by the negative binomial density. In particular, most of the data were described by one of the very simple limiting forms of this density, the geometric density. The implications of this for evolutionary inference are discussed. It is evident that most estimates of total base substitutions between genes are badly in need of revision.
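
    The Poisson-versus-geometric comparison above can be reproduced in outline as follows; the per-site substitution counts here are invented stand-ins, not the hemoglobin data:

        # Invented per-site substitution counts (stand-ins, not the paper's data).
        import numpy as np
        from scipy import stats

        counts = np.repeat([0, 1, 2, 3, 5, 9], [40, 25, 15, 10, 6, 4])
        lam = counts.mean()                      # Poisson MLE
        p = 1.0 / (1.0 + lam)                    # geometric MLE on support {0, 1, 2, ...}
        ll_poisson = stats.poisson.logpmf(counts, lam).sum()
        ll_geom = stats.geom.logpmf(counts + 1, p).sum()   # scipy's geom starts at 1
        print(ll_poisson, ll_geom)               # overdispersed counts favor the geometric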

  20. The Monte Carlo SRNA-VOX code for 3D proton dose distribution in voxelized geometry using CT data

    International Nuclear Information System (INIS)

    Ilic, Radovan D; Spasic-Jokic, Vesna; Belicev, Petar; Dragovic, Milos

    2005-01-01

    This paper describes the application of the SRNA Monte Carlo package to proton transport simulations in complex geometry and with different material compositions. The SRNA package was developed for 3D dose distribution calculations in proton therapy and dosimetry, and it is based on the theory of multiple scattering. The decay of proton-induced compound nuclei was simulated by the Russian MSDM model and by our own model using ICRU 63 data. The developed package consists of two codes: SRNA-2KG, which simulates proton transport in combinatorial geometry, and SRNA-VOX, which uses voxelized geometry based on CT data, converting Hounsfield numbers to tissue elemental composition. Transition probabilities for both codes are prepared by the SRNADAT code. The simulation of proton beam characterization by a multi-layer Faraday cup, the spatial distribution of positron emitters obtained with the SRNA-2KG code, and intercomparisons of computational codes in radiation dosimetry indicate the immediate applicability of Monte Carlo techniques in clinical practice. In this paper, we briefly present the physical model implemented in the SRNA package, the ISTAR proton dose planning software, and the results of numerical experiments with proton beams to obtain the 3D dose distribution in the eye and breast tumour

  1. Iterative Fusion of Distributed Decisions over the Gaussian Multiple-Access Channel Using Concatenated BCH-LDGM Codes

    Directory of Open Access Journals (Sweden)

    Crespo, Pedro M.

    2011-01-01

    This paper focuses on the data fusion scenario where nodes sense and transmit the data generated by a source to a common destination, which estimates the original information more accurately than in the case of a single sensor. This work joins the upsurge of research interest in this topic by addressing the setup where the sensed information is transmitted over a Gaussian Multiple-Access Channel (MAC). We use Low Density Generator Matrix (LDGM) codes in order to keep the correlation between the transmitted codewords, which leads to an improved received Signal-to-Noise Ratio (SNR) thanks to the constructive signal addition at the receiver front-end. At reception, we propose a joint decoder and estimator that exchanges soft information between the LDGM decoders and a data fusion stage. An error-correcting Bose, Ray-Chaudhuri, Hocquenghem (BCH) code is further applied to suppress the error floor derived from the ambiguity of the MAC channel when dealing with correlated sources. Simulation results are presented for several values of the number of nodes and diverse LDGM and BCH codes, based on which we conclude that the proposed scheme significantly outperforms (by up to 6.3 dB) the suboptimum limit assuming separation between Slepian-Wolf source coding and capacity-achieving channel coding.

  2. Code of Conduct on the Safety and Security of Radioactive Sources and the Supplementary Guidance on the Import and Export of Radioactive Sources

    International Nuclear Information System (INIS)

    2005-01-01

    In operative paragraph 4 of its resolution GC(47)/RES/7.B, the General Conference, having welcomed the approval by the Board of Governors of the revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources (GC(47)/9), and while recognizing that the Code is not a legally binding instrument, urged each State to write to the Director General that it fully supports and endorses the IAEA's efforts to enhance the safety and security of radioactive sources and is working toward following the guidance contained in the IAEA Code of Conduct. In operative paragraph 5, the Director General was requested to compile, maintain and publish a list of States that have made such a political commitment. The General Conference, in operative paragraph 6, recognized that this procedure 'is an exceptional one, having no legal force and only intended for information, and therefore does not constitute a precedent applicable to other Codes of Conduct of the Agency or of other bodies belonging to the United Nations system'. In operative paragraph 7 of resolution GC(48)/RES/10.D, the General Conference welcomed the fact that more than 60 States had made political commitments with respect to the Code in line with resolution GC(47)/RES/7.B and encouraged other States to do so. In operative paragraph 8 of resolution GC(48)/RES/10.D, the General Conference further welcomed the approval by the Board of Governors of the Supplementary Guidance on the Import and Export of Radioactive Sources (GC(48)/13), endorsed this Guidance while recognizing that it is not legally binding, noted that more than 30 countries had made clear their intention to work towards effective import and export controls by 31 December 2005, and encouraged States to act in accordance with the Guidance on a harmonized basis and to notify the Director General of their intention to do so as supplementary information to the Code of Conduct, recalling operative paragraph 6 of resolution GC(47)/RES/7.B. 4. The

  3. Distribution and Sources of Nitrate-Nitrogen in Kansas Groundwater

    Directory of Open Access Journals (Sweden)

    Margaret A. Townsend

    2001-01-01

    Kansas is primarily an agricultural state. Irrigation water and fertilizer use data show long-term increasing trends. Similarly, nitrate-N concentrations in groundwater show long-term increases and exceed the drinking-water standard of 10 mg/l in many areas. A statistical analysis of nitrate-N data collected for local and regional studies in Kansas from 1990 to 1998 (747 samples) found significant relationships between nitrate-N concentration and the depth, age, and geographic location of wells. Sources of nitrate-N have been identified for 297 water samples by using nitrogen stable isotopes. Of these samples, 48% showed fertilizer sources (+2 to +8) and 34% showed either animal waste sources (+10 to +15, with nitrate-N greater than 10 mg/l), or an indication that enrichment processes had occurred (+10 or above, with variable nitrate-N), or both. Ultimate sources for nitrate include nonpoint sources associated with past farming and fertilization practices, and point sources such as animal feed lots, septic systems, and commercial fertilizer storage units. Detection of nitrate from various sources in aquifers of different depths in geographically varied areas of the state indicates that nonpoint and point sources currently impact and will continue to impact groundwater under current land uses.

  4. Dose Distributions of an 192Ir Brachytherapy Source in Different Media

    Directory of Open Access Journals (Sweden)

    C. H. Wu

    2014-01-01

    This study used the MCNPX code to investigate ¹⁹²Ir brachytherapy dose distributions in water, bone, and lung tissue, and performed radiophotoluminescent glass dosimeter measurements to verify the obtained MCNPX results. The results showed that the dose-rate constant, radial dose function, and anisotropy function in water were highly consistent with data in the literature. However, the lung dose near the source would be overestimated by up to 12% if the lung tissue were assumed to be water; hence, if a tumor is located in the lung, its dose will be overestimated if the material density is not taken into consideration. In contrast, the lung dose far from the source would be underestimated by up to 30%. Radial dose functions were found to depend not only on the phantom size but also on the material density. The phantom size affects the radial dose function in bone more than in the other tissues. On the other hand, the anisotropy function in lung tissue did not depend on the radial distance. Our simulation results could serve as valid clinical reference data and be used to improve the accuracy of the doses delivered during brachytherapy applied to patients with lung cancer.
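
    The dose-rate constant, radial dose function, and anisotropy function named above combine in the AAPM TG-43 formalism that such studies rely on. A point-source sketch follows, with placeholder table values rather than the paper's data; the dose-rate constant below is a typical ¹⁹²Ir literature value, assumed for illustration:

        # TG-43 point-source approximation; all numbers are placeholders
        # (LAMBDA is a typical 192Ir literature value, not this paper's result).
        import numpy as np

        S_K = 40000.0        # air-kerma strength in U, assumed
        LAMBDA = 1.109       # dose-rate constant, cGy/(h*U), typical for 192Ir
        r_tab = np.array([0.5, 1.0, 2.0, 3.0, 5.0])        # cm
        g_tab = np.array([1.00, 1.00, 0.99, 0.98, 0.94])   # placeholder g(r)

        def dose_rate(r_cm):
            """D(r) = S_K * Lambda * (r0/r)^2 * g(r), with r0 = 1 cm; anisotropy omitted."""
            return S_K * LAMBDA * (1.0 / r_cm) ** 2 * np.interp(r_cm, r_tab, g_tab)

        print(dose_rate(2.0))   # cGy/h at 2 cm along the transverse axis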

  5. Two Dimensional Verification of the Dose Distribution of Gamma Knife Model C using Monte Carlo Simulation with a Virtual Source

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae-Hoon; Kim, Yong-Kyun; Lee, Cheol Ho; Son, Jaebum; Lee, Sangmin; Kim, Dong Geon; Choi, Joonbum; Jang, Jae Yeong [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun-Tai [Seoul National University, Seoul (Korea, Republic of)

    2016-10-15

    The Gamma Knife model C contains 201 ⁶⁰Co sources located on a spherical surface, so that each beam is concentrated on the center of the sphere. In previous work, we simulated the Gamma Knife model C with a Monte Carlo simulation code based on Geant4. Instead of the 201-channel multi-collimation system, we built a single collimation system that collects the source parameters of particles passing through the collimator helmet. Using this virtual source, we drastically reduced the simulation time needed to transport the 201 gamma beams to the target. The gamma index has been widely used to compare two dose distributions in cancer radiotherapy. Gamma index pass rates were compared between results calculated with the virtual source method and with the original method, and against measurements obtained with radiochromic films. The virtual source method significantly reduces the simulation time of a Gamma Knife Model C and provides absorbed dose distributions equivalent to those of the original method, with a gamma index pass rate close to 100% under 1 mm/3% criteria. On the other hand, it gives a slightly narrower dose distribution than the film measurement, with a gamma index pass rate of 94%. A more accurate and sophisticated examination of the accuracy of the simulation and film measurement is necessary.

  6. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    International Nuclear Information System (INIS)

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C

  7. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    Energy Technology Data Exchange (ETDEWEB)

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C.
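
    The straight-line Gaussian plume model at the heart of ANEMOS has a standard closed form; a sketch with ground reflection follows, where the dispersion parameters sigma_y and sigma_z (which in practice depend on downwind distance and stability class, and in ANEMOS on its own parameter options) are supplied directly as assumed values:

        # Ground-reflected Gaussian plume; sigma values are assumed inputs here,
        # whereas ANEMOS derives them from stability class and downwind distance.
        import numpy as np

        def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
            """Concentration (e.g. Bq/m^3) for release rate Q (Bq/s), wind speed
            u (m/s), effective height H (m), at crosswind y and height z (m)."""
            lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
            vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                        + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))   # image source
            return Q * lateral * vertical / (2.0 * np.pi * u * sigma_y * sigma_z)

        chi = plume_concentration(Q=1e6, u=3.0, y=0.0, z=0.0, H=50.0,
                                  sigma_y=80.0, sigma_z=40.0)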

  8. Analysis of mitigating measures during steam/hydrogen distributions in nuclear reactor containments with the 3D field code gasflow

    International Nuclear Information System (INIS)

    Royl, P.; Travis, J.R.; Haytcher, E.A.; Wilkening, H.

    1997-01-01

    This paper reports on the recent model additions to the 3D field code GASFLOW and on validation and application analyses for steam/hydrogen transport with inclusion of mitigation measures. The results of the 3D field simulation of the HDR test E11.2 are summarized. Results from scoping analyses that simulate different modes of CO2 inertization for conditions from the HDR test T31.5 are presented. The last part discusses different ways of recombiner modeling during 3D distribution simulations and gives the results from validation calculations for the HDR recombiner test E11.8.1 and the Battelle test MC3. The results demonstrate that field code simulations with computer codes like GASFLOW are feasible today for complex containment geometries and that they are necessary for a reliable prediction of hydrogen/steam distribution and mitigation effects. (author)

  9. Dose rate distribution of the GammaBeam: 127 irradiator using MCNPX code

    International Nuclear Information System (INIS)

    Gual, Maritza Rodriguez; Batista, Adriana de Souza Medeiros; Pereira, Claubia; Faria, Luiz O. de; Grossi, Pablo Andrade

    2013-01-01

    The GammaBeam-127 irradiator is widely used for biological, chemical and medical applications of gamma irradiation technology, using cobalt-60 radioactive sources, at the Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN), Belo Horizonte, Brazil. The source has a maximum activity of 60,000 Ci and is composed of 16 doubly encapsulated radioactive pencils placed in a rack. The facility is classified by the IAEA as Category II (dry storage facility). The aim of this work is to present a model developed to evaluate the dose rates in the irradiation room and the dose distribution in the irradiated products. In addition, the simulations could be used as a predictive dose-evaluation tool in the irradiation facility, helping to benchmark experiments in new, similar facilities. The MCNPX simulated results were compared and validated with radiometric measurements using Fricke and TLD dosimeters at several positions inside the irradiation room. (author)

  10. ZOCO V - a computer code for the calculation of time-dependent spatial pressure distribution in reactor containments

    International Nuclear Information System (INIS)

    Mansfeld, G.; Schally, P.

    1978-06-01

    ZOCO V is a computer code which calculates the time- and space-dependent pressure distribution in containments of water-cooled nuclear power reactors (both full-pressure containments and pressure suppression systems) following a loss-of-coolant accident caused by the rupture of a main coolant or steam pipe.

  11. X-Ray imager power source on distribution trailers

    International Nuclear Information System (INIS)

    Johns, B.R.

    1996-01-01

    This Acceptance for Beneficial Use documents the work completed on the addition of an X-ray cable reel to distribution trailer HO-64-3533 for core sampling equipment. Work and documentation remaining to be completed are identified.

  12. Study of Different Tissue Density Effects on the Dose Distribution of a 103Pd Brachytherapy Source Model MED3633

    Directory of Open Access Journals (Sweden)

    Ali Asghar Mowlavi

    2010-09-01

    Introduction: Clinical application of encapsulated radioactive brachytherapy sources has a major role in cancer treatment. In the present research, the effects of different tissue densities on the dose distribution of a ¹⁰³Pd brachytherapy source in a spherical phantom of 50 cm radius have been studied. Material and Methods: As is well known, the absorbed dose in tissue depends on its density, but this difference is not clear in measurements. Therefore, we applied the MCNP code to evaluate the effect of density on the dose distribution. ¹⁰³Pd brachytherapy sources are used to treat prostate, breast and other cancers. Results: The absorbed dose has been calculated and is presented around a source placed in the center of the phantom for different tissue densities. Also, we derived the anisotropy and radial dose functions and compared our Monte Carlo results with the experimental results of Rivard and Li et al. for F(1, θ) and g(r) in 1.040 g/cm³ tissue. Conclusion: The results of this study show that relative dose variations around the source center are very considerable at different densities, because of the presence of a photoabsorber (Au-Cu alloy) in the source core. The dose variation exceeds 80% at the point (Z = 2.4 mm, Y = 0 mm). Computed values of the anisotropy and radial dose functions are in good agreement with the experimental results of Rivard and Li et al.

  13. Development of Level-2 PSA Technology: A Development of the Database of the Parametric Source Term for Kori Unit 1 Using the MAAP4 Code

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Chang Soon; Mun, Ju Hyun; Yun, Jeong Ick; Cho, Young Hoo; Kim, Chong Uk [Seoul National University, Seoul (Korea, Republic of)

    1997-07-15

    To quantify the severe accident source term with the parametric model method, the uncertainty of the parameters should be analyzed. Generally, to analyze the uncertainties, the cumulative distribution functions (CDFs) of the parameters are derived. This report introduces a method for deriving the CDFs of the basic parameters FCOR, FVES and FDCH. The source term calculation tool is the MAAP code, version 4.0. In the MAAP code, there are model parameters to account for uncertain physical and/or chemical phenomena. In general, these parameters have not a point value but a range. In this paper, considering this point, the input values of the model parameters influencing each parameter are sampled using LHS (Latin Hypercube Sampling). Then, the calculation results are shown in cumulative distribution form. For a case study, the CDFs of FCOR, FVES and FDCH of KORI unit 1 are derived. The target scenarios for the calculation are those whose initiating events are a large LOCA, a small LOCA and a transient, respectively. It is found that the distributions of this study are consistent with those of NUREG-1150 and are proven to be adequate for assessing the uncertainties in the severe accident source term of KORI Unit 1. 15 refs., 27 tabs., 4 figs. (author)
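
    The sampling-to-CDF procedure described above can be outlined as follows; the parameter ranges and the stand-in response function are assumptions for illustration, not MAAP4 models or KORI-1 data:

        # Latin Hypercube Sampling of two uncertain inputs, a stand-in response,
        # and an empirical CDF; ranges and response are assumptions, not MAAP4.
        import numpy as np
        from scipy.stats import qmc

        sampler = qmc.LatinHypercube(d=2, seed=1)
        unit = sampler.random(n=100)                         # samples in [0, 1)^2
        low, high = [0.1, 0.5], [1.0, 2.0]                   # assumed input ranges
        params = qmc.scale(unit, low, high)

        fcor = params[:, 0] / (params[:, 0] + params[:, 1])  # stand-in for the response
        x = np.sort(fcor)
        cdf = np.arange(1, x.size + 1) / x.size              # empirical CDF of FCOR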

  14. Analysis of source term aspects in the experiment Phebus FPT1 with the MELCOR and CFX codes

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Fuertes, F. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)]. E-mail: francisco.martinfuertes@upm.es; Barbero, R. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Martin-Valdepenas, J.M. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Jimenez, M.A. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)

    2007-03-15

    Several aspects related to the source term in the Phebus FPT1 experiment have been analyzed with the help of the MELCOR 1.8.5 and CFX 5.7 codes. Integral aspects covering circuit thermalhydraulics, fission product and structural material release, and vapour and aerosol retention in the circuit and containment were studied with MELCOR, and the strong and weak points identified by comparison with the experimental results are stated. Then, sensitivity calculations dealing with chemical speciation upon release, vertical-line aerosol deposition and steam generator aerosol deposition were performed. Finally, detailed calculations concerning aerosol deposition in the steam generator tube are presented. They were obtained by means of an in-house code application named COCOA, as well as with the CFX computational fluid dynamics code, in which several models for aerosol deposition were implemented and tested; the models themselves are also discussed.

  15. Open-source tool for automatic import of coded surveying data to multiple vector layers in GIS environment

    Directory of Open Access Journals (Sweden)

    Eva Stopková

    2016-12-01

    This paper deals with a tool that enables import of coded data in a single text file to more than one vector layer (including attribute tables), together with automatic drawing of line and polygon objects and with optional conversion to CAD. The Python script v.in.survey is available as an add-on for the open-source software GRASS GIS (GRASS Development Team). The paper describes a case study based on surveying at the archaeological mission at Tell-el Retaba (Egypt). Advantages of the tool (e.g. significant optimization of surveying work) and its limits (demands on keeping conventions for the coding of point names) are discussed here as well. Possibilities of future development are suggested (e.g. generalization of point-name coding or more complex attribute table creation).
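
    The core of such a coded-import tool is grouping points by their code; here is a sketch assuming a simplified 'name code x y z' line format (the actual v.in.survey conventions are richer):

        # Group coded survey points into per-code layers; the 'name code x y z'
        # line format is a simplified assumption, not the v.in.survey convention.
        from collections import defaultdict

        def parse_coded_points(path):
            layers = defaultdict(list)
            with open(path) as f:
                for line in f:
                    name, code, x, y, z = line.split()
                    layers[code].append((name, float(x), float(y), float(z)))
            return layers   # each code becomes one candidate vector layer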

  16. BLT [Breach, Leach, and Transport]: A source term computer code for low-level waste shallow land burial

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1990-01-01

    This paper discusses the development of a source term model for low-level waste shallow land burial facilities and separates the problem into four individual compartments: water flow, corrosion and subsequent breaching of containers, leaching of the waste forms, and solute transport. For the first and last compartments, we adopted the existing codes FEMWATER and FEMWASTE, respectively. We wrote two new modules for the other two compartments in the form of two separate Fortran subroutines, BREACH and LEACH. They were incorporated into a modified version of the transport code FEMWASTE. The resulting code, which contains all three modules of container breaching, waste form leaching, and solute transport, was renamed BLT (for Breach, Leach, and Transport). This paper summarizes the overall program structure and logistics, and presents two examples from the results of verification and sensitivity tests. 6 refs., 7 figs., 1 tab

  17. Source convergence diagnostics using Boltzmann entropy criterion application to different OECD/NEA criticality benchmarks with the 3-D Monte Carlo code Tripoli-4

    International Nuclear Information System (INIS)

    Dumonteil, E.; Le Peillet, A.; Lee, Y. K.; Petit, O.; Jouanne, C.; Mazzolo, A.

    2006-01-01

    The measurement of the stationarity of Monte Carlo fission source distributions in k_eff calculations plays a central role in the ability to discriminate between fake and 'true' convergence (in the case of a high dominance ratio or of loosely coupled systems). Recent theoretical developments have been made in the study of source convergence diagnostics using Shannon entropy. We first recall those results, and then generalize them using the expression of the Boltzmann entropy, highlighting the gain in terms of the various physical problems that can be treated. Finally we present the results of several OECD/NEA benchmarks using the Tripoli-4 Monte Carlo code enhanced with this new criterion. (authors)
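
    The Shannon-entropy diagnostic mentioned above is simple to state: bin the fission source sites on a spatial mesh and track the entropy of the binned distribution cycle by cycle; a stationary trace suggests (but does not prove) convergence. A minimal sketch:

        # Shannon entropy of a binned fission-source distribution.
        import numpy as np

        def source_entropy(site_counts):
            p = site_counts / site_counts.sum()
            p = p[p > 0]                        # convention: 0 * log 0 = 0
            return -(p * np.log2(p)).sum()      # entropy in bits

        # Track this value over successive cycles; a flat trace is evidence
        # (not proof) that the source distribution has converged.
        print(source_entropy(np.array([120.0, 95.0, 80.0, 5.0])))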

  18. Volatile Organic Compounds: Characteristics, distribution and sources in urban schools

    Science.gov (United States)

    Mishra, Nitika; Bartsch, Jennifer; Ayoko, Godwin A.; Salthammer, Tunga; Morawska, Lidia

    2015-04-01

    Long-term exposure to organic pollutants, both inside and outside school buildings, may affect children's health and influence their learning performance. Since children spend a significant amount of time in school, air quality, especially in classrooms, plays a key role in determining the health risks associated with exposure at schools. Within this context, the present study investigated the ambient concentrations of Volatile Organic Compounds (VOCs) in 25 primary schools in Brisbane with the aims of quantifying the indoor and outdoor VOC concentrations, identifying VOC sources and their contributions, and, based on these, proposing mitigation measures to reduce VOC exposure in schools. One of the most important findings is the occurrence of indoor sources, indicated by an I/O ratio >1 in 19 schools. Principal Component Analysis with Varimax rotation was used to identify common sources of VOCs, and source contributions were calculated using an Absolute Principal Component Scores technique. The results showed that outdoors, 47% of VOCs were contributed by petrol vehicle exhaust, whereas indoors, cleaning products had the highest overall contribution at 41%, followed by air fresheners and art and craft activities. These findings point to the need for a range of basic precautions during the selection, use and storage of cleaning products and materials to reduce the risk from these sources.
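
    A sketch of the PCA-based apportionment workflow follows, using plain PCA (the study used Varimax rotation, omitted here) on randomly generated stand-in concentrations rather than the school data:

        # Plain PCA on stand-in concentrations (Varimax rotation omitted).
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        X = rng.lognormal(size=(100, 12))            # 100 samples x 12 VOC species, invented
        Z = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize each species

        pca = PCA(n_components=3)
        scores = pca.fit_transform(Z)                # component scores per sample
        loadings = pca.components_                   # species loadings suggest source types
        print(pca.explained_variance_ratio_)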

  19. A new open-source pin power reconstruction capability in DRAGON5 and DONJON5 neutronic codes

    Energy Technology Data Exchange (ETDEWEB)

    Chambon, R., E-mail: richard-pierre.chambon@polymtl.ca; Hébert, A., E-mail: alain.hebert@polymtl.ca

    2015-08-15

    In order to better optimize fuel energy efficiency in PWRs, the burnup distribution has to be known as accurately as possible, ideally in each pin. However, this level of detail is lost when core calculations are performed with homogenized cross-sections. The pin power reconstruction (PPR) method can be used to recover this level of detail as accurately as possible within a small additional computing time compared to classical core calculations. Such a de-homogenization technique for core calculations using arbitrarily homogenized fuel assembly geometries was presented originally by Fliscounakis et al. In our work, the same methodology was implemented in the open-source neutronic codes DRAGON5 and DONJON5. The new type of Selengut homogenization, called macro-calculation water gap, also proposed by Fliscounakis et al., was implemented. Some important details of the methodology were emphasized in order to obtain precise results. Validation tests were performed on 12 configurations of 3×3 clusters, where simulations in transport theory were compared with simulations in diffusion theory followed by pin-power reconstruction. The results show that the pin power reconstruction and the Selengut macro-calculation water gap methods were correctly implemented. The accuracy of the simulations depends on the SPH method and on the homogenization geometry choices. The results show that heterogeneous homogenization is highly recommended. SPH techniques were investigated with flux-volume and Selengut normalization, but the former leads to inaccurate results. Even though the new Selengut macro-calculation water gap method gives promising results regarding flux continuity at assembly interfaces, the classical Selengut approach is more reliable in terms of maximum and average errors over the whole range of configurations.

  20. OPT-TWO: Calculation code for two-dimensional MOX fuel models in the optimum concentration distribution

    International Nuclear Information System (INIS)

    Sato, Shohei; Okuno, Hiroshi; Sakai, Tomohiro

    2007-08-01

    OPT-TWO is a calculation code which calculates the optimum concentration distribution, i.e., the most conservative concentration distribution from the standpoint of nuclear criticality safety, of MOX (mixed uranium and plutonium oxide) fuels in a two-dimensional system. To achieve the optimum concentration distribution, we apply the principle of a flattened fuel importance distribution, with which the fuel system has the highest reactivity. Based on this principle, OPT-TWO iterates the following three calculation steps until the optimum concentration distribution with flattened fuel importance is achieved: (1) the forward and adjoint neutron fluxes and the neutron multiplication factor, with the TWOTRAN code, a two-dimensional neutron transport code based on the SN method; (2) the fuel importance; and (3) the quantity of fuel to transfer. In OPT-TWO, the components of MOX fuel are MOX powder, uranium dioxide powder, and an additive. This report describes the content of the calculation, the computational method, and the installation method of OPT-TWO, and also describes the application of OPT-TWO to criticality calculations. (author)

  1. Acetone in the atmosphere: Distribution, sources, and sinks

    Science.gov (United States)

    Singh, H. B.; O'Hara, D.; Herlth, D.; Sachse, W.; Blake, D. R.; Bradshaw, J. D.; Kanakidou, M.; Crutzen, P. J.

    1994-01-01

    Acetone (CH3COCH3) was found to be the dominant nonmethane organic species present in the atmosphere sampled primarily over eastern Canada (0-6 km, 35 deg-65 deg N) during ABLE3B (July to August 1990). A concentration range of 357 to 2310 ppt (= 10⁻¹² v/v) with a mean value of 1140 +/- 413 ppt was measured. Under extremely clean conditions, generally involving Arctic flows, lowest (background) mixing ratios of 550 +/- 100 ppt were present in much of the troposphere studied. Correlations between atmospheric mixing ratios of acetone and select species such as C2H2, CO, C3H8, C2Cl4 and isoprene provided important clues to its possible sources and to the causes of its atmospheric variability. Biomass burning as a source of acetone has been identified for the first time. By using atmospheric data and three-dimensional photochemical models, a global acetone source of 40-60 Tg (= 10¹² g)/yr is estimated to be present. Secondary formation from the atmospheric oxidation of precursor hydrocarbons (principally propane, isobutane, and isobutene) provides the single largest source (51%). The remainder is attributable to biomass burning (26%), direct biogenic emissions (21%), and primary anthropogenic emissions (3%). Atmospheric removal of acetone is estimated to be due to photolysis (64%), reaction with OH radicals (24%), and deposition (12%). Model calculations also suggest that acetone photolysis contributed significantly to PAN formation (100-200 ppt) in the middle and upper troposphere of the sampled region and may be important globally. While the source-sink equation appears to be roughly balanced, much more atmospheric and source data, especially from the southern hemisphere, are needed to reliably quantify the atmospheric budget of acetone.

  2. Study of relationship between radioactivity distribution, contamination burden and quality standard, accommodate energy of code river Yogyakarta

    International Nuclear Information System (INIS)

    Agus Taftazani and Muzakky

    2009-01-01

    The relationship between the distribution and contamination burden of gross β radioactivity and natural radionuclides in water and sediment samples from 11 observation stations along the Code river, and the quality standard and maximum capacity of the Code river, has been studied. Natural radionuclide identification and gross β radioactivity measurements of condensed water and of dry, homogeneous sediment powder (passed through a 100 mesh sieve) were carried out using a spectrometer and a GM counter. The radioactivity data were analyzed descriptively with histograms to show the spreading pattern of the data. The contamination burden, quality standard and maximum capacity data were analyzed descriptively with line diagrams to establish the relationship between the contamination burden, the quality standard, and the maximum capacity of the Code river. The observations of water and sediment at the 11 stations show that the natural emitter radionuclides ²¹⁰Pb, ²¹²Pb, ²¹⁴Pb, ²²⁶Ra, ²⁰⁸Tl, ²¹⁴Bi, ²²⁸Ac and ⁴⁰K were detected. The analysis leads to the conclusion that the average gross β activity increases from upstream to downstream in the Code river samples. The contamination burdens of the ²¹⁰Pb, ²¹²Pb, ²²⁶Ra and ²²⁸Ac activities were much smaller than the quality standard for river water according to regulation 02/Ka-BAPETEN/V-99 of the Nuclear Energy Regulatory Agency concerning radioactivity quality standards. This means that the Code river is still within an acceptable contamination burden for these four radionuclides. (author)

  3. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources

    International Nuclear Information System (INIS)

    Gaufridy de Dortan, F. de

    2006-01-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling, or even supra-configuration modelling, for mid-Z plasmas. But in some cases, configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow for a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on precise HULLAC energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated, and the radiative transfer equation is solved for wide inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data. It is therefore possible to use complex hydrodynamic files even on personal computers in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts satellite lines and must be included in the description of these sources to enhance their efficiency. (author)

  4. Recycling source terms for edge plasma fluid models and impact on convergence behaviour of the BRAAMS 'B2' code

    International Nuclear Information System (INIS)

    Maddison, G.P.; Reiter, D.

    1994-02-01

    Predictive simulations of tokamak edge plasmas require the most authentic description of neutral particle recycling sources, not merely the most expedient numerically. Employing a prototypical ITER divertor arrangement under conditions of high recycling, trial calculations with the 'B2' steady-state edge plasma transport code, plus varying approximations of recycling, reveal marked sensitivity of both the results and the convergence behaviour to the details of the sources incorporated. Comprehensive EIRENE Monte Carlo resolution of recycling is implemented by full and so-called 'shot' intermediate cycles between the plasma fluid and statistical neutral particle models. As generally for coupled differencing and stochastic procedures, though, overall convergence properties become more difficult to assess. A pragmatic criterion for the 'B2'/EIRENE code system is proposed to determine its success, proceeding from a stricter condition previously identified for one particular analytic approximation of recycling in 'B2'. Certain procedures are also inferred potentially to improve convergence further. (orig.)

  5. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Science.gov (United States)

    2010-01-01

    Table 4 to Part 1512—Relative Energy Distribution of Sources (16 CFR, Commercial Practices; Federal Hazardous Substances Act regulations, Requirements for Bicycles). Wavelength (nanometers) / relative energy: 380: 9.79; 390: 12.09; 400: 14.71; 410: 17.68; 420: 21…

  6. EchoSeed Model 6733 Iodine-125 brachytherapy source: Improved dosimetric characterization using the MCNP5 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S. [Center for Research in Medical Physics and Biomedical Engineering and Physics Unit, Radiotherapy Department, Shiraz University of Medical Sciences, Shiraz 71936-13311 (Iran, Islamic Republic of); Radiation Research Center and Medical Radiation Department, School of Engineering, Shiraz University, Shiraz 71936-13311 (Iran, Islamic Republic of); Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States)

    2012-08-15

    This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 ¹²⁵I seed (EchoSeed) with improved precision and accuracy, using a more up-to-date Monte Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions, together with a published ¹²⁵I spectrum, were used to perform the dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water in order to obtain the clinically applicable dosimetric parameters for this source model. Dose-rate constants in liquid water, derived from the MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGy h⁻¹ U⁻¹ (±1.73%) and 0.965 cGy h⁻¹ U⁻¹ (±1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within ±4%) than the MCNP4c2 results and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose-rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.

  7. Methodology and Toolset for Model Verification, Hardware/Software co-simulation, Performance Optimisation and Customisable Source-code generation

    DEFF Research Database (Denmark)

    Berger, Michael Stübert; Soler, José; Yu, Hao

    2013-01-01

    The MODUS project aims to provide a pragmatic and viable solution that will allow SMEs to substantially improve their positioning in the embedded-systems development market. The MODUS tool will provide a model verification and Hardware/Software co-simulation tool (TRIAL) and a performance optimisation and customisable source-code generation tool (TUNE). The concept is applied to the automated modelling and optimisation of embedded-systems development. The tool will enable model verification by guiding the selection of existing open-source model verification engines, based on the automated analysis

  8. Study of the source term of radiation of the CDTN GE-PET trace 8 cyclotron with the MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Benavente C, J. A.; Lacerda, M. A. S.; Fonseca, T. C. F.; Da Silva, T. A. [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Vega C, H. R., E-mail: jhonnybenavente@gmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)

    2015-10-15

    Full text: The knowledge of the neutron spectra in a PET cyclotron is important for the optimization of the radiation protection of workers and members of the public. The main objective of this work is to study the source term of radiation of the GE-PET trace 8 cyclotron of the Development Center of Nuclear Technology (CDTN/CNEN) using computer simulation by the Monte Carlo method. The MCNPX version 2.7 code was used to calculate the flux of neutrons produced by the interaction of the primary proton beam with the target body and other cyclotron components during ¹⁸F production. The estimate of the source term and the corresponding radiation field was performed for the bombardment of an H₂¹⁸O target with protons of 75 μA current and 16.5 MeV energy. The values of the simulated fluxes were compared with those reported by the accelerator manufacturer (GE Healthcare). Results showed that the fluxes estimated with the MCNPX code were about 70% lower than those reported by the manufacturer. The mean energies of the neutrons also differed from those reported by GE Healthcare. It is recommended to investigate other cross-section data and the use of the physical models of the code itself for a complete characterization of the source term of radiation. (Author)

  9. Supporting the Cybercrime Investigation Process: Effective Discrimination of Source Code Authors Based on Byte-Level Information

    Science.gov (United States)

    Frantzeskou, Georgia; Stamatatos, Efstathios; Gritzalis, Stefanos

    Source code authorship analysis is the particular field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity. This is usually based on other undisputed program samples from the same author. There are several cases where the application of such a method could be of major benefit, such as tracing the source of code left in the system after a cyber attack, authorship disputes, proof of authorship in court, etc. In this paper, we present our approach, which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure which is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification, since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the second with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language with no additional cost. The presented accuracy rates are much better than the best reported results for the same data sets.
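
    A sketch of byte-level n-gram profiling in the spirit of this method; the profile size and the similarity measure below are simplified assumptions rather than the exact published definitions:

        # Byte-level n-gram profiles and a simplified similarity (assumed
        # variant, not the exact published measure).
        from collections import Counter

        def profile(source_bytes, n=3, top=500):
            grams = Counter(source_bytes[i:i + n]
                            for i in range(len(source_bytes) - n + 1))
            return {g for g, _ in grams.most_common(top)}   # the author 'profile'

        def similarity(p1, p2):
            return len(p1 & p2) / max(len(p1 | p2), 1)      # shared-n-gram fraction

        a = profile(open(__file__, 'rb').read())
        print(similarity(a, a))                             # identical profiles -> 1.0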

  10. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnosis-related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
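
    A sketch of what an XML fragment of such a hierarchical classification might look like, with illustrative element names (not the CEN/TC 251 schema or the authors' actual markup):

        # Illustrative XML fragment for a hierarchical classification.
        import xml.etree.ElementTree as ET

        chapter = ET.Element('class', code='I', kind='chapter')
        block = ET.SubElement(chapter, 'class', code='A00-A09', kind='block')
        category = ET.SubElement(block, 'class', code='A00', kind='category')
        ET.SubElement(category, 'rubric', kind='preferred').text = 'Cholera'

        print(ET.tostring(chapter, encoding='unicode'))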

  11. Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

    Full Text Available Minimum energy (ME) coding combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple-access interference (MAI) as the number of users (receivers) grows. ME coding exploits redundant bits to save power over the RF link with On-Off Keying (OOK) modulation. The relations are presented and discussed for several levels of error expected in the employed channel, in terms of bit error rate and SNR as functions of the number of users (receivers).
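
    The essence of ME coding with On-Off Keying is that only the '1' chips cost transmit energy, so the most frequent source symbols should receive the lightest codewords. A toy sketch (codeword length and source statistics are illustrative):

        from collections import Counter

        def me_codebook(symbol_counts: Counter, n_bits: int) -> dict:
            """Map frequent symbols to low Hamming-weight codewords, since the
            transmit energy is roughly sum(frequency * codeword weight)."""
            words = sorted(range(2 ** n_bits), key=lambda w: bin(w).count("1"))
            ranked = [s for s, _ in symbol_counts.most_common()]
            return {s: format(w, f"0{n_bits}b") for s, w in zip(ranked, words)}

        book = me_codebook(Counter("aabbbbcc"), n_bits=4)  # most frequent 'b' -> '0000'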

  12. Numerical modeling of the Linac4 negative ion source extraction region by 3D PIC-MCC code ONIX

    CERN Document Server

    Mochalskyy, S; Minea, T; Lifschitz, AF; Schmitzer, C; Midttun, O; Steyaert, D

    2013-01-01

    At CERN, a high-performance negative ion (NI) source is required for the 160 MeV H- linear accelerator Linac4. The source is planned to produce 80 mA of H- with an emittance of 0.25 mm mrad N-RMS, which is technically and scientifically very challenging. The optimization of the NI source requires a deep understanding of the underlying physics of the production and extraction of the negative ions. The extraction mechanism is complex, involving a magnetic filter to cool down the electron temperature. The ONIX (Orsay Negative Ion eXtraction) code is used to address this problem. ONIX is a self-consistent 3D electrostatic code using the Particle-in-Cell Monte Carlo Collisions (PIC-MCC) approach. It was written to handle the complex boundary conditions between plasma, source walls, and beam formation at the extraction hole. Both the positive extraction potential (25 kV) and the magnetic field map are taken from the experimental set-up, in construction at CERN. This contrib...

  13. Active Fault Near-Source Zones Within and Bordering the State of California for the 1997 Uniform Building Code

    Science.gov (United States)

    Petersen, M.D.; Toppozada, Tousson R.; Cao, T.; Cramer, C.H.; Reichle, M.S.; Bryant, W.A.

    2000-01-01

    The fault sources in the Project 97 probabilistic seismic hazard maps for the state of California were used to construct maps for defining near-source seismic coefficients, Na and Nv, incorporated in the 1997 Uniform Building Code (ICBO 1997). The near-source factors are based on the distance from a known active fault that is classified as either Type A or Type B. To determine the near-source factor, four pieces of geologic information are required: (1) recognizing a fault and determining whether or not the fault has been active during the Holocene, (2) identifying the location of the fault at or beneath the ground surface, (3) estimating the slip rate of the fault, and (4) estimating the maximum earthquake magnitude for each fault segment. This paper describes the information used to produce the fault classifications and distances.

  14. Creating a database for evaluating the distribution of energy deposited at prostate using simulation in phantom with the Monte Carlo code EGSnrc

    International Nuclear Information System (INIS)

    Resende Filho, T.A.; Vieira, I.F.; Leal Neto, V.

    2009-01-01

    An exposure computational model (ECM) composed of a water-tank phantom and a point, monoenergetic photon source, coupled to a Monte Carlo code that simulates the interaction and deposition of the energy emitted by I-125, is a tool with many advantages for dosimetric evaluations in areas such as brachytherapy treatment planning. Using DOSXYZnrc, it was possible to construct a database that allows the end user to estimate in advance the spatial distribution of the dose in the prostate, an important tool in brachytherapy procedures. The results show the fractional energy deposited in the water phantom at the energies 0.028 MeV and 0.035 MeV, both relevant to this procedure, as well as the dose distribution in the range between 0.10334 and 0.53156 μGy. The mean error is less than 2%, the tolerance limit accepted in radiotherapy protocols. (author)

  15. Projection of Patient Condition Code Distributions Based on Mechanism of Injury

    National Research Council Canada - National Science Library

    Zouris, James M; Walker, G. J; Blood, Christopher G

    2003-01-01

    The Medical Readiness and Strategic Plan 1998-2004 requires that the military services develop a method for linking real world patient load data with modern patient condition codes to enable planners...

  16. Calculation of the power distribution in the fuel rods of the low power research reactor using the MCNP4C code

    International Nuclear Information System (INIS)

    Dawahra, S.; Khattab, K.

    2011-01-01

    Highlights: → The MCNP4C code was used to calculate the power distribution in 3-D geometry in the MNSR reactor. → The maximum individual-rod power, 105 W, was found in fuel ring number 2. → The minimum power, 79.9 W, was found in fuel ring number 9. → The total power in the fuel rods was 30.9 kW. - Abstract: The Monte Carlo method, using the MCNP4C code, was used in this paper to calculate the power distribution in 3-D geometry in the fuel rods of the Syrian Miniature Neutron Source Reactor (MNSR). To normalize the MCNP4C result to the steady-state nominal thermal power, an appropriate scaling factor was defined so that the power distribution could be calculated precisely. The maximum individual-rod power, 105 W, was found in fuel ring number 2; the minimum, 79.9 W, in fuel ring number 9. The total power in the fuel rods was 30.9 kW, in very good agreement with the nominal power of 30 kW reported in the reactor safety analysis report. Finally, the peak power factors, defined as the ratios of the maximum to the average and of the maximum to the minimum powers, were calculated to be 1.18 and 1.31, respectively.
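
    The normalization step amounts to scaling the per-rod Monte Carlo tallies so they sum to the nominal thermal power; the peaking factors then follow directly. A sketch with placeholder tallies (not the paper's data):

        import numpy as np

        tally = np.array([1.05e-3, 0.923e-3, 0.799e-3])  # energy deposition per source particle (toy)
        p_nominal = 30.9e3                               # steady-state thermal power, W

        scale = p_nominal / tally.sum()                  # scaling factor
        rod_power = scale * tally                        # absolute power per rod, W
        f_max_avg = rod_power.max() / rod_power.mean()   # maximum-to-average (1.18 in the paper)
        f_max_min = rod_power.max() / rod_power.min()    # maximum-to-minimum (1.31 in the paper)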

  17. Comparison of experimental pulse-height distributions in germanium detectors with integrated-tiger-series-code predictions

    International Nuclear Information System (INIS)

    Beutler, D.E.; Halbleib, J.A.; Knott, D.P.

    1989-01-01

    This paper reports pulse-height distributions in two different types of Ge detectors measured for a variety of medium-energy x-ray bremsstrahlung spectra. These measurements have been compared to predictions using the integrated tiger series (ITS) Monte Carlo electron/photon transport code. In general, the authors find excellent agreement between experiments and predictions using no free parameters. These results demonstrate that the ITS codes can predict the combined bremsstrahlung production and energy deposition with good precision (within measurement uncertainties). The one region of disagreement observed occurs for low-energy (<50 keV) photons using low-energy bremsstrahlung spectra. In this case the ITS codes appear to underestimate the produced and/or absorbed radiation by almost an order of magnitude

  18. Limiting precision in differential equation solvers. II Sources of trouble and starting a code

    International Nuclear Information System (INIS)

    Shampine, L.F.

    1978-01-01

    The reasons a class of codes for solving ordinary differential equations might want to use an extremely small step size are investigated. For this class the likelihood of precision difficulties is evaluated and remedies examined. The investigation suggests a way of automatically selecting an initial step size which should be reliably on scale.
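
    A sketch of one common on-scale heuristic (not Shampine's specific algorithm): compare the size of the solution with the size of its derivative, so the first step resolves the local time scale of the problem.

        import numpy as np

        def initial_step(f, t0, y0, rtol=1e-6, atol=1e-9):
            """Pick h0 so that an Euler step changes y by a small fraction
            of its tolerance-weighted size."""
            scale = atol + rtol * np.abs(y0)
            d0 = np.linalg.norm(y0 / scale)         # size of the solution
            d1 = np.linalg.norm(f(t0, y0) / scale)  # size of the derivative
            return 1e-6 if d1 <= 1e-15 else 0.01 * d0 / d1

        # A stiffer problem (y' = -50y) yields a smaller starting step than y' = -0.5y.
        h0 = initial_step(lambda t, y: -50.0 * y, 0.0, np.array([1.0]))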

  19. Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code

    Science.gov (United States)

    Taherkhani, Ahmad; Malmi, Lauri

    2013-01-01

    In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concept of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…

  20. Family of Quantum Sources for Improving Near Field Accuracy in Transducer Modeling by the Distributed Point Source Method

    Directory of Open Access Journals (Sweden)

    Dominique Placko

    2016-10-01

    Full Text Available The distributed point source method, or DPSM, developed in the last decade has been used for solving various engineering problems—such as elastic and electromagnetic wave propagation, electrostatic, and fluid flow problems. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing the point-source solutions or Green's functions. However, the DPSM solution can also be obtained by superimposing elemental solutions of volume sources having a source density called the equivalent source density (ESD). In earlier works mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources referred to as quantum sources. The proposed formulation with these quantum sources does not change the dimension of the global matrix to be inverted when compared with the classical point-source-based DPSM formulation. To assess the performance of this new formulation, the ultrasonic field generated by a circular planar transducer was compared with the classical DPSM formulation and the analytical solution. The results show a significant improvement in the near-field computation.

  1. Distribution and sources of 226Ra in groundwater of arid region

    DEFF Research Database (Denmark)

    Zheng, M. J.; Murad, A.; Zhou, X. D.

    2016-01-01

    As a part of characterizing radioactivity in groundwater of the eastern Arabian Peninsula, a first systematic evaluation of 226Ra activity in groundwater indicates a wide range (0.65-203.66 mBq L-1) with an average of 17.56 mBq L-1. Adsorption/desorption processes, groundwater residence time and uranium concentration are the main controlling factors of 226Ra distribution in groundwater of the different aquifers. Estimation of the 226Ra effective dose from water ingestion suggests a potential risk of drinking water from the carbonate aquifer.

  2. Distributed control system for the National Synchrotron Light Source

    International Nuclear Information System (INIS)

    Batchelor, K.; Culwick, B.B.; Goldstick, J.; Sheehan, J.; Smith, J.

    1979-01-01

    Until recently, accelerator and similar control systems have used modular interface hardware such as CAMAC or DATACON which translated digital computer commands transmitted over some data link into hardware device status and monitoring variables. Such modules possessed little more than local buffering capability in the processing of commands and data. The advent of the micro-processor has made available low cost small computers of significant computational capability. This paper describes how micro-computers including such micro-processors and associated memory, input/output devices and interrupt facilities have been incorporated into a distributed system for the control of the NSLS

  3. Space distribution of extragalactic sources - Cosmology versus evolution

    International Nuclear Information System (INIS)

    Cavaliere, A.; Maccacaro, T.

    1990-01-01

    Alternative cosmologies have been recurrently invoked to explain, in terms of global spacetime structure, the apparent large increase with redshift in the average luminosity of active galactic nuclei. These models interestingly seek to avoid the complexities of the canonical interpretation in terms of intrinsic population evolution in a Friedmann universe. However, a consistency problem for these cosmologies is pointed out, since they must also include other classes of extragalactic sources, such as clusters of galaxies and BL Lac objects, for which there is preliminary evidence of a different behavior. 40 refs

  4. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    Science.gov (United States)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first and a second original data set, each having M members. The pre-coding method produces a compression-efficiency-enhancing double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set, or a cross-delta calculation performed on two adjacent-delta data sets, derived from either (1) two adjacent spectral bands coming from two discrete sources or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also provided are a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
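
    A small sketch of the double-difference idea for case (1), two correlated data sets (function names are ours, not the patent's):

        import numpy as np

        def double_difference(a: np.ndarray, b: np.ndarray) -> np.ndarray:
            """Cross-delta between the two sets, then an adjacent-delta along it."""
            cross = b - a                       # removes correlation between the sets
            return np.diff(cross, prepend=0.0)  # removes correlation within the set

        def recover_b(a: np.ndarray, dd: np.ndarray) -> np.ndarray:
            """Inverse post-decoding: integrate the adjacent-delta, undo the cross-delta."""
            return a + np.cumsum(dd)

        a = np.array([10.0, 12.0, 15.0, 15.0])  # e.g. spectral band 1
        b = np.array([11.0, 14.0, 18.0, 17.0])  # e.g. adjacent spectral band 2
        assert np.allclose(recover_b(a, double_difference(a, b)), b)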

  5. Pre-Test Analysis of the MEGAPIE Spallation Source Target Cooling Loop Using the TRAC/AAA Code

    International Nuclear Information System (INIS)

    Bubelis, Evaldas; Coddington, Paul; Leung, Waihung

    2006-01-01

    A pilot project is being undertaken at the Paul Scherrer Institute in Switzerland to test the feasibility of installing a Lead-Bismuth Eutectic (LBE) spallation target in the SINQ facility. Efforts are coordinated under the MEGAPIE project, the main objectives of which are to design, build, operate and decommission a 1 MW spallation neutron source. The technology and experience of building and operating a high-power spallation target are of general interest in the design of an Accelerator Driven System (ADS), and in this context MEGAPIE is one of the key experiments. The target cooling is one of the important aspects of the target system design that needs to be studied in detail. Calculations were performed previously using the RELAP5/Mod 3.2.2 and ATHLET codes, but in order to verify the previous code results and to provide another capability to model LBE systems, a similar study of the MEGAPIE target cooling system has been conducted with the TRAC/AAA code. In this paper a comparison is presented for the steady-state results obtained using the above codes. Transients such as unregulated cooling of the target, loss of heat sink, trip of the main electro-magnetic pump of the LBE loop and unprotected proton beam trip were studied with TRAC/AAA and compared to those obtained earlier using RELAP5/Mod 3.2.2. This work extends the existing validation data-base of TRAC/AAA to heavy liquid metal systems and comprises the first part of the TRAC/AAA code validation study for LBE systems based on data from the MEGAPIE test facility and corresponding inter-code comparisons. (authors)

  6. Dark Energy Survey Year 1 Results: Redshift distributions of the weak lensing source galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Hoyle, B.; et al.

    2017-08-04

    We describe the derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z=0.2 and 1.3, and to produce initial estimates of the lensing-weighted redshift distributions $n^i_{PZ}(z)$ for bin i. Accurate determination of cosmological parameters depends critically on knowledge of $n^i$ but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts $n^i(z)=n^i_{PZ}(z-\Delta z^i)$ to correct the mean redshift of $n^i(z)$ for biases in $n^i_{\rm PZ}$. The $\Delta z^i$ are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the $\Delta z^i$ are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at $0.15 < z < 0.9$.

  7. Dark Energy Survey Year 1 Results: Redshift distributions of the weak lensing source galaxies

    Science.gov (United States)

    Hoyle, B.; Gruen, D.; Bernstein, G. M.; Rau, M. M.; De Vicente, J.; Hartley, W. G.; Gaztanaga, E.; DeRose, J.; Troxel, M. A.; Davis, C.; Alarcon, A.; MacCrann, N.; Prat, J.; Sánchez, C.; Sheldon, E.; Wechsler, R. H.; Asorey, J.; Becker, M. R.; Bonnett, C.; Carnero Rosell, A.; Carollo, D.; Carrasco Kind, M.; Castander, F. J.; Cawthon, R.; Chang, C.; Childress, M.; Davis, T. M.; Drlica-Wagner, A.; Gatti, M.; Glazebrook, K.; Gschwend, J.; Hinton, S. R.; Hoormann, J. K.; Kim, A. G.; King, A.; Kuehn, K.; Lewis, G.; Lidman, C.; Lin, H.; Macaulay, E.; Maia, M. A. G.; Martini, P.; Mudd, D.; Möller, A.; Nichol, R. C.; Ogando, R. L. C.; Rollins, R. P.; Roodman, A.; Ross, A. J.; Rozo, E.; Rykoff, E. S.; Samuroff, S.; Sevilla-Noarbe, I.; Sharp, R.; Sommer, N. E.; Tucker, B. E.; Uddin, S. A.; Varga, T. N.; Vielzeuf, P.; Yuan, F.; Zhang, B.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Busha, M. T.; Capozzi, D.; Carretero, J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Eifler, T. F.; Estrada, J.; Evrard, A. E.; Fernandez, E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Giannantonio, T.; Goldstein, D. A.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Jarvis, M.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Kirk, D.; Krause, E.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Miquel, R.; Nord, B.; O'Neill, C. R.; Plazas, A. A.; Romer, A. K.; Sako, M.; Sanchez, E.; Santiago, B.; Scarpine, V.; Schindler, R.; Schubnell, M.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, D. L.; Vikram, V.; Walker, A. R.; Weller, J.; Wester, W.; Wolf, R. C.; Yanny, B.; Zuntz, J.; DES Collaboration

    2018-04-01

    We describe the derivation and validation of redshift distribution estimates and their uncertainties for the populations of galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z ≈ 0.2 and ≈1.3, and to produce initial estimates of the lensing-weighted redshift distributions n^i_PZ(z) ∝ dn^i/dz for members of bin i. Accurate determination of cosmological parameters depends critically on knowledge of ni but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts n^i(z)=n^i_PZ(z-Δz^i) to correct the mean redshift of ni(z) for biases in n^i_PZ. The Δzi are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the Δzi of the three lowest redshift bins are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15 < z < 0.9. This paper details the BPZ and COSMOS procedures, and demonstrates that the cosmological inference is insensitive to details of the ni(z) beyond the choice of Δzi. The clustering and COSMOS validation methods produce consistent estimates of Δzi in the bins where both can be applied, with combined uncertainties of σ_{Δz^i} = 0.015, 0.013, 0.011, and 0.022 in the four bins. Repeating the photo-z procedure instead using the Directional Neighborhood Fitting (DNF) algorithm, or using the ni(z) estimated from the matched sample in COSMOS, yields no discernible difference in cosmological inferences.
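
    A short sketch of applying such a mean-redshift shift to a binned redshift distribution (toy Gaussian n_PZ and a hypothetical Δz, not DES pipeline values):

        import numpy as np

        def shift_nz(z, n_pz, dz):
            """Evaluate n(z) = n_PZ(z - dz) on the original grid and renormalize."""
            n = np.interp(z - dz, z, n_pz, left=0.0, right=0.0)
            return n / np.trapz(n, z)

        z = np.linspace(0.0, 2.0, 401)
        n_pz = np.exp(-0.5 * ((z - 0.63) / 0.12) ** 2)  # toy n_PZ(z) for one bin
        n_corrected = shift_nz(z, n_pz, dz=-0.02)       # hypothetical shift value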

  8. Transmission from theory to practice: Experiences using open-source code development and a virtual short course to increase the adoption of new theoretical approaches

    Science.gov (United States)

    Harman, C. J.

    2015-12-01

    Even amongst the academic community, new theoretical tools can remain underutilized due to the investment of time and resources required to understand and implement them. This surely limits the frequency with which new theory is rigorously tested against data by scientists outside the group that developed it, and limits the impact that new tools could have on the advancement of science. Reducing the barriers to adoption through online education and open-source code can bridge the gap between theory and data, forging new collaborations and advancing science. A pilot venture aimed at increasing the adoption of a new theory of time-variable transit time distributions was begun in July 2015 as a collaboration between Johns Hopkins University and The Consortium of Universities for the Advancement of Hydrologic Science (CUAHSI). There were four main components to the venture: a public online seminar covering the theory, an open-source code repository, a virtual short course designed to help participants apply the theory to their data, and an online forum to maintain discussion and build a community of users. 18 participants were selected for the non-public components based on their responses in an application, and were asked to fill out a course evaluation at the end of the short course, and again several months later. These evaluations, along with participation in the forum and on-going contact with the organizer, suggest strengths and weaknesses in this combination of components for helping participants adopt new tools.

  9. Radiation Shielding Information Center: a source of computer codes and data for fusion neutronics studies

    International Nuclear Information System (INIS)

    McGill, B.L.; Roussin, R.W.; Trubey, D.K.; Maskewitz, B.F.

    1980-01-01

    The Radiation Shielding Information Center (RSIC), established in 1962 to collect, package, analyze, and disseminate information, computer codes, and data in the area of radiation transport related to fission, is now being utilized to support fusion neutronics technology. The major activities include: (1) answering technical inquiries on radiation transport problems, (2) collecting, packaging, testing, and disseminating computing technology and data libraries, and (3) reviewing literature and operating a computer-based information retrieval system containing material pertinent to radiation transport analysis. The computer codes emphasize methods for solving the Boltzmann equation such as the discrete ordinates and Monte Carlo techniques, both of which are widely used in fusion neutronics. The data packages include multigroup coupled neutron-gamma-ray cross sections and kerma coefficients, other nuclear data, and radiation transport benchmark problem results

  10. kspectrum: an open-source code for high-resolution molecular absorption spectra production

    International Nuclear Information System (INIS)

    Eymet, V.; Coustet, C.; Piaud, B.

    2016-01-01

    We present kspectrum, a scientific code that produces high-resolution synthetic absorption spectra from public molecular-transition-parameter databases. This code was originally required by the atmospheric and astrophysics communities, and its evolution is now driven by new scientific projects among the user community. Since it was designed without any optimization specific to a particular application field, its use could also be extended to other domains. kspectrum produces spectral data that can subsequently be used either for high-resolution radiative transfer simulations, or for producing statistical spectral model parameters using additional tools. This is an open project that aims at providing an up-to-date tool that takes advantage of modern computational hardware and recent parallelization libraries. It is currently provided by Méso-Star (http://www.meso-star.com) under the CeCILL license, and benefits from regular updates and improvements. (paper)

  11. Calculation of dose distribution for 252Cf fission neutron source in tissue equivalent phantoms using Monte Carlo method

    International Nuclear Information System (INIS)

    Ji Gang; Guo Yong; Luo Yisheng; Zhang Wenzhong

    2001-01-01

    Objective: To provide useful parameters for neutron radiotherapy, the authors present the results of a Monte Carlo simulation study investigating the dosimetric characteristics of linear 252Cf fission neutron sources. Methods: A 252Cf fission source and tissue-equivalent phantoms were modeled, and the neutron and gamma doses were calculated using a Monte Carlo code. Results: The neutron and gamma doses were calculated at several positions for 252Cf in phantoms made of materials equivalent to water, blood, muscle, skin, bone and lung. Conclusion: The Monte Carlo results were compared with measured data and reference values. According to the calculation, using a water phantom to approximate local tissues such as muscle, blood and skin is reasonable for the calculation and measurement of dose distributions for 252Cf

  12. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System.

    Science.gov (United States)

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong

    2016-06-06

    We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is reduced to two dimensions. The method is verified with experiments using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source, even for long sensing ranges.
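
    A toy reconstruction under this geometry (two fibers along x at lateral offsets y1 and y2; the amplitude-weighting rule is our illustrative assumption, not the authors' algorithm):

        import numpy as np

        def locate_fire(s, t1, t2, y1=-0.5, y2=0.5):
            """s: positions along the fibers (m); t1, t2: temperature traces (C).
            The along-fiber coordinate comes from the hot-spot positions; the
            lateral coordinate from which of the two fibers reads hotter."""
            x = 0.5 * (s[np.argmax(t1)] + s[np.argmax(t2)])
            w1, w2 = t1.max(), t2.max()            # the closer fiber reads hotter
            y = (w1 * y1 + w2 * y2) / (w1 + w2)    # amplitude-weighted offset
            return x, y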

  13. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System

    Directory of Open Access Journals (Sweden)

    Miao Sun

    2016-06-01

    Full Text Available We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is reduced to two dimensions. The method is verified with experiments using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source, even for long sensing ranges.

  14. Verification of ANISN-F by calculating the neutron distribution from a Ra-Be source in water as well as by simple criticality calculations

    International Nuclear Information System (INIS)

    Etemad, M.A.

    1981-04-01

    The one-dimensional discrete ordinates code ANISN-F was used to calculate the thermal neutron flux distribution in water from a Ra-Be neutron source. The calculations were performed in order to investigate the different capabilities of the code and to verify the results by comparison with corresponding experimental data. Two different group cross-section libraries were used, and conclusions were drawn on the adequacy of these libraries for fixed-source calculations. Furthermore, criticality calculations were performed for an infinite homogeneous slab of multiplying material using different angular and spatial approximations. The results of these calculations were then compared to the corresponding results previously obtained at this department by a different method and a different code. (author)

  15. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
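
    The tapered Pareto distribution used for the tide-gauge amplitudes reduces to a survival function and a log-likelihood for the maximum likelihood fit (the threshold a_t and starting values are illustrative):

        import numpy as np

        def tapered_pareto_sf(a, beta, a_c, a_t=1.0):
            """P(A > a): power law of exponent beta, tapered at corner amplitude a_c."""
            return (a_t / a) ** beta * np.exp((a_t - a) / a_c)

        def neg_loglik(params, amplitudes, a_t=1.0):
            """Negative log-likelihood; the density is -d/da of the survival function."""
            beta, a_c = params
            a = np.asarray(amplitudes)
            f = (beta / a + 1.0 / a_c) * (a_t / a) ** beta * np.exp((a_t - a) / a_c)
            return -np.sum(np.log(f))

        # Fit with e.g. scipy.optimize.minimize(neg_loglik, x0=[1.0, 10.0], args=(data,)).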

  16. Nitrogen deposition to the United States: distribution, sources, and processes

    Directory of Open Access Journals (Sweden)

    L. Zhang

    2012-05-01

    Full Text Available We simulate nitrogen deposition over the US in 2006–2008 using the GEOS-Chem global chemical transport model at 1/2°×2/3° horizontal resolution over North America and adjacent oceans. US emissions of NOx and NH3 in the model are 6.7 and 2.9 Tg N a−1 respectively, including a 20% natural contribution for each. Ammonia emissions are a factor of 3 lower in winter than in summer, providing a good match to US network observations of NHx (≡NH3 gas + ammonium aerosol) and ammonium wet deposition fluxes. Model comparisons to observed deposition fluxes and surface air concentrations of oxidized nitrogen species (NOy) show overall good agreement but excessive wintertime HNO3 production over the US Midwest and Northeast. This suggests a model overestimate of N2O5 hydrolysis in aerosols; a possible factor is inhibition by aerosol nitrate. Model results indicate a total nitrogen deposition flux of 6.5 Tg N a−1 over the contiguous US, including 4.2 as NOy and 2.3 as NHx. Domestic anthropogenic, foreign anthropogenic, and natural sources contribute respectively 78%, 6%, and 16% of total nitrogen deposition over the contiguous US in the model. The domestic anthropogenic contribution generally exceeds 70% in the east and in populated areas of the west, and is typically 50–70% in remote areas of the west. Total nitrogen deposition in the model exceeds 10 kg N ha−1 a−1 over 35% of the contiguous US.

  17. Calculation of the effective dose from natural radioactivity sources in soil using MCNP code

    International Nuclear Information System (INIS)

    Krstic, D.; Nikezic, D.

    2008-01-01

    Full text: The effective dose delivered by photons emitted from natural radioactivity in soil was calculated in this report. Calculations were done for the most common natural radionuclides in soil: the 238U and 232Th series and 40K. An ORNL age-dependent phantom and the Monte Carlo transport code MCNP-4B were employed to calculate the energy deposited in all organs of the phantom. The effective dose was calculated according to ICRP 74 recommendations. Conversion coefficients of effective dose per air kerma were determined. The results obtained here were compared with those of other authors

  18. In-vessel source term analysis code TRACER version 2.3. User's manual

    International Nuclear Information System (INIS)

    Toyohara, Daisuke; Ohno, Shuji; Hamada, Hirotsugu; Miyahara, Shinya

    2005-01-01

    A computer code TRACER (Transport Phenomena of Radionuclides for Accident Consequence Evaluation of Reactor) version 2.3 has been developed to evaluate the species and quantities of fission products (FPs) released into the cover gas during a fuel pin failure accident in an LMFBR. TRACER version 2.3 includes the new or modified models shown below. a) Booth model: a new model for FP release from fuel. b) Modified model for FP transfer from fuel to bubbles or sodium coolant. c) Modified model for bubble dynamics in the coolant. Computational models, input data and output data of TRACER version 2.3 are described in this user's manual. (author)

  19. Sources and distribution of trace elements in Estonian peat

    Science.gov (United States)

    Orru, Hans; Orru, Mall

    2006-10-01

    This paper presents the results of the distribution of trace elements in Estonian mires. Sixty-four mires, representative of the different landscape units, were analyzed for the content of 16 trace elements (Cr, Mn, Ni, Cu, Zn, and Pb using AAS; Cd by GF-AAS; Hg by the cold vapour method; and V, Co, As, Sr, Mo, Th, and U by XRF) as well as other peat characteristics (peat type, degree of humification, pH and ash content). The results show that concentrations of trace elements in peat are generally low: V 3.8 ± 0.6, Cr 3.1 ± 0.2, Mn 35.1 ± 2.7, Co 0.50 ± 0.05, Ni 3.7 ± 0.2, Cu 4.4 ± 0.3, Zn 10.0 ± 0.7, As 2.4 ± 0.3, Sr 21.9 ± 0.9, Mo 1.2 ± 0.2, Cd 0.12 ± 0.01, Hg 0.05 ± 0.01, Pb 3.3 ± 0.2, Th 0.47 ± 0.05, U 1.3 ± 0.2 μg g−1, and S 0.25 ± 0.02%. Statistical analysis of this large database showed that Co has the highest positive correlations with many elements and with ash content; As, Ni, Mo, ash content and pH are also significantly correlated. The lowest abundance of most trace elements was recorded in mires fed only by precipitation (ombrotrophic), and the highest in mires fed by groundwater and springs (minerotrophic), which are situated in the flood plains of river valleys. Concentrations usually differ between the superficial, middle and bottom peat layers, but the significance decreases by mire type in the following order: transitional mires - raised bogs - fens. Differences among mire types are highest for the superficial layers but not significant for the basal peat layers. The use of peat with high concentrations of trace elements in agriculture, horticulture, as fuel, or for water purification may pose a risk to humans via the food chain, inhalation, drinking water, etc.

  20. Distribution and sources of particulate organic matter in the Indian monsoonal estuaries during monsoon

    Digital Repository Service at National Institute of Oceanography (India)

    Sarma, V.V.S.S.; Krishna, M.S.; Prasad, V.R.; Kumar, B.S.K.; Naidu, S.A.; Rao, G.D.; Viswanadham, R.; Sridevi, T.; Kumar, P.P.; Reddy, N.P.C.

    The distribution and sources of particulate organic carbon (POC) and nitrogen (PN) in 27 Indian estuaries were examined during the monsoon using the content and isotopic composition of carbon and nitrogen. Higher phytoplankton biomass was noticed...

  1. Increasing data distribution in BitTorrent networks by using network coding techniques

    DEFF Research Database (Denmark)

    Braun, Patrik János; Sipos, Marton A.; Ekler, Péter

    2015-01-01

    Abstract: Peer-to-peer networks are well known for their benefits when used for sharing data among multiple users. One of the most common protocols for shared data distribution is BitTorrent. Despite its popularity, it has some inefficiencies that affect the speed of the content distribution. In ...

  2. Parallelization of MCNP 4, a Monte Carlo neutron and photon transport code system, in highly parallel distributed memory type computer

    International Nuclear Information System (INIS)

    Masukawa, Fumihiro; Takano, Makoto; Naito, Yoshitaka; Yamazaki, Takao; Fujisaki, Masahide; Suzuki, Koichiro; Okuda, Motoi.

    1993-11-01

    In order to improve the accuracy and speed of shielding analyses, MCNP 4, a Monte Carlo neutron and photon transport code system, has been parallelized and its efficiency measured on the AP1000, a highly parallel distributed-memory computer. The code was analyzed statically and dynamically, and a suitable parallelization algorithm was determined for the shielding analysis functions of MCNP 4. This includes a strategy in which a new history is assigned dynamically to an idling processor element during execution. Furthermore, to avoid congestion in the communication processing, a batch concept, processing multiple histories as a unit, was introduced. By analyzing a sample cask problem with 2,000,000 histories on the AP1000 with 512 processor elements, a parallelization efficiency of 82% was achieved, and the calculation speed was estimated to be around 50 times that of a FACOM M-780. (author)
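
    The dynamic assignment of history batches to idle workers maps naturally onto a master-worker pattern. A sketch with a placeholder tally in place of the transport physics:

        import random
        from multiprocessing import Pool

        def run_batch(args):
            """Simulate one batch of histories; returns a partial tally (toy model)."""
            seed, batch = args
            rng = random.Random(seed)
            return sum(rng.random() for _ in range(batch))  # placeholder physics

        if __name__ == "__main__":
            histories, batch = 2_000_000, 10_000
            jobs = [(seed, batch) for seed in range(histories // batch)]
            with Pool() as pool:
                # imap_unordered hands the next batch to whichever worker idles first
                total = sum(pool.imap_unordered(run_batch, jobs))
            print(total / histories)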

  3. SMILEI: A collaborative, open-source, multi-purpose PIC code for the next generation of super-computers

    Science.gov (United States)

    Grech, Mickael; Derouillat, J.; Beck, A.; Chiaramello, M.; Grassi, A.; Niel, F.; Perez, F.; Vinci, T.; Fle, M.; Aunai, N.; Dargent, J.; Plotnikov, I.; Bouchard, G.; Savoini, P.; Riconda, C.

    2016-10-01

    Over the last decades, Particle-In-Cell (PIC) codes have been central tools for plasma simulations. Today, new trends in High-Performance Computing (HPC) are emerging, dramatically changing HPC-relevant software design and leaving some - if not most - legacy codes far from the level of performance expected on the new and future massively parallel supercomputers. SMILEI is a new open-source PIC code co-developed by plasma physicists and HPC specialists, and applied to a wide range of physics studies: from laser-plasma interaction to astrophysical plasmas. It benefits from an innovative parallelization strategy that relies on a super-domain decomposition allowing for enhanced cache use and efficient dynamic load balancing. Beyond these HPC-related developments, SMILEI also provides additional physics modules that handle binary collisions, field and collisional ionization, and radiation back-reaction. This poster presents the SMILEI project, its HPC capabilities, and some of the physics problems tackled with SMILEI.

  4. Simulation of droplet impact onto a deep pool for large Froude numbers in different open-source codes

    Science.gov (United States)

    Korchagova, V. N.; Kraposhin, M. V.; Marchevsky, I. K.; Smirnova, E. V.

    2017-11-01

    A droplet impact on a deep pool can induce macro-scale or micro-scale effects like a crown splash, a high-speed jet, formation of secondary droplets or thin liquid films, etc. The outcome depends on the diameter and velocity of the droplet, the liquid properties, external forces and other factors that a set of dimensionless criteria can account for. In the present research, we considered a droplet and pool consisting of the same viscous incompressible liquid; surface tension was taken into account, but gravity was neglected. We used two open-source codes (OpenFOAM and Gerris) for our computations and review their suitability for simulating the free-surface flows that may take place after a droplet impact on a pool. Both codes simulated several modes of droplet impact. We estimated the effect of the liquid properties through the Reynolds and Weber numbers. Numerical simulation enabled us to find the boundaries between different modes of droplet impact on a deep pool and to plot the corresponding mode maps. The ratio of liquid density to that of the surrounding gas induces several changes in the mode maps; increasing this density ratio suppresses the crown splash.
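
    The dimensionless groups that organize these mode maps are straightforward to compute; a sketch with indicative water properties:

        def impact_numbers(d, v, rho, mu, sigma, g=9.81):
            """Dimensionless groups controlling droplet impact on a pool."""
            re = rho * v * d / mu          # Reynolds: inertia vs. viscosity
            we = rho * v ** 2 * d / sigma  # Weber: inertia vs. surface tension
            fr = v ** 2 / (g * d)          # Froude: inertia vs. gravity
            return re, we, fr

        # A 2 mm water droplet at 3 m/s.
        print(impact_numbers(d=2e-3, v=3.0, rho=998.0, mu=1.0e-3, sigma=0.072))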

  5. A method for scientific code coupling in a distributed environment; Une methodologie pour le couplage de codes scientifiques en environnement distribue

    Energy Technology Data Exchange (ETDEWEB)

    Caremoli, C; Beaucourt, D; Chen, O; Nicolas, G; Peniguel, C; Rascle, P; Richard, N; Thai Van, D; Yessayan, A

    1994-12-01

    This guidebook deals with the coupling of large scientific codes. First, the context is introduced: large scientific codes devoted to a specific discipline are coming to maturity, while needs in terms of multidisciplinary studies keep growing. We then describe different kinds of code coupling, with an example: the 3D thermal-hydraulic code THYC coupled to the 3D neutronics code COCCINELLE. With this example we identify the problems to be solved in realizing a coupling. We present the different numerical methods usable for the resolution of the coupling terms. This leads to two kinds of coupling: with weak coupling we can use explicit methods, whereas strong coupling requires implicit methods. In both cases, we analyze the link with the way the codes are parallelized. For translating data from one code to another, we define the notion of a Standard Coupling Interface based on a general data structure. This general structure constitutes an intermediary between the codes, allowing a relative independence of the codes from a specific coupling. The proposed method for implementing a coupling leads to a simultaneous run of the different codes while they exchange data. Two kinds of data communication with message exchange are proposed: direct communication between codes using PVM (Parallel Virtual Machine), and indirect communication through a coupling tool. This second way, with a general code-coupling tool, is based on a coupling method, and we strongly recommend using it. The method rests on two principles: reusability, meaning few modifications to existing codes, and the definition of a code usable for coupling, which separates the design of a couplable code from the realization of a specific coupling. This coupling tool, available from the beginning of 1994, is described in general terms. (authors). figs., tabs.

  6. Correlated Sources in Distributed Networks--Data Transmission, Common Information Characterization and Inferencing

    Science.gov (United States)

    Liu, Wei

    2011-01-01

    Correlation is often present among observations in a distributed system. This thesis deals with various design issues when correlated data are observed at distributed terminals, including: communicating correlated sources over interference channels, characterizing the common information among dependent random variables, and testing the presence of…

  7. Y-Source Boost DC/DC Converter for Distributed Generation

    DEFF Research Database (Denmark)

    Siwakoti, Yam P.; Loh, Poh Chiang; Blaabjerg, Frede

    2015-01-01

    This paper introduces a versatile Y-source boost dc/dc converter intended for distributed power generation, where high gain is often demanded. The proposed converter uses a Y-source impedance network realized with a tightly coupled three-winding inductor for high voltage boosting that is presently...

  8. Measurement-device-independent quantum key distribution with correlated source-light-intensity errors

    Science.gov (United States)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2018-04-01

    We present an analysis of measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that our method can greatly improve the key rate compared with prior results, especially under large intensity fluctuations and channel attenuation.

  9. Bug-Fixing and Code-Writing: The Private Provision of Open Source Software

    DEFF Research Database (Denmark)

    Bitzer, Jürgen; Schröder, Philipp

    2002-01-01

    Open source software (OSS) is a public good. A self-interested individual would consider providing such software, if the benefits he gained from having it justified the cost of programming. Nevertheless each agent is tempted to free ride and wait for others to develop the software instead...

  10. SETMDC: Preprocessor for CHECKR, FIZCON, INTER, etc. ENDF Utility source codes

    International Nuclear Information System (INIS)

    Dunford, Charles L.

    2002-01-01

    Description of program or function: SETMDC-6.13 is a utility program that converts the source decks of the following set of programs to different computers: CHECKR-6.13; FIZCON-6.13; GETMAT-6.13; INTER-6.13; LISTEF-6; PLOTEF-6; PSYCHE-6; STANEF-6.13

  11. ON CODE REFACTORING OF THE DIALOG SUBSYSTEM OF CDSS PLATFORM FOR THE OPEN-SOURCE MIS OPENMRS

    Directory of Open Access Journals (Sweden)

    A. V. Semenets

    2016-08-01

    The open-source MIS OpenMRS developer tools and software API are reviewed. The results of code refactoring of the dialog subsystem of the CDSS platform, which is implemented as a module for the open-source MIS OpenMRS, are presented. The structure of the information model of the CDSS dialog subsystem database was updated according to the MIS OpenMRS requirements. The Model-View-Controller (MVC)-based approach to the CDSS dialog subsystem architecture was re-implemented in Java using the Spring and Hibernate frameworks. The MIS OpenMRS Encounter portlet form for the CDSS dialog subsystem integration was developed as an extension. The administrative module of the CDSS platform was recreated. The data exchange formats and methods for interaction between the OpenMRS CDSS dialog subsystem module and the DecisionTree GAE service were re-implemented with the help of AJAX via the jQuery library.

  12. Assessment of the signal intensity distribution pattern within the unruptured cerebral aneurysms using color-coded 3D MR angiography

    International Nuclear Information System (INIS)

    Satoh, Toru; Omi, Megumi; Ohsako, Chika

    2005-01-01

    To evaluate the interaction between the MR signal intensity distribution pattern and bleb formation/deformation of the aneurysmal dome, fifty cases of unruptured cerebral aneurysms were investigated with color-coded 3D MR angiography. Patterns were categorized into central-type, neck-type and peripheral-type according to the distribution of low-, moderate- and high-signal-intensity areas. Imaging analysis revealed a significant relationship (P<0.02) between peripheral-type aneurysms and bleb formation and deformation of the dome, compared with the central and neck types. Additionally, the peripheral-type signal intensity distribution pattern was seen in aneurysms with relatively large domes and lateral-type growth, including internal carotid aneurysms. Prospective analysis of intraaneurysmal flow patterns with color-coded 3D MR angiography may provide patient-specific assessment of intraaneurysmal flow status in relation to morphological change of the corresponding aneurysmal dome in the management of unruptured cerebral aneurysms. (author)

  13. Blind cooperative diversity using distributed space-time coding in block fading channels

    KAUST Repository

    Tourki, Kamel; Alouini, Mohamed-Slim; Deneire, Luc

    2010-01-01

    Mobile users with single antennas can still take advantage of spatial diversity through cooperative space-time encoded transmission. In this paper, we consider a scheme in which a relay chooses to cooperate only if its source-relay channel

  14. The study on neutron and photon distribution of AP1000 reactor by MCNP code

    International Nuclear Information System (INIS)

    Chen Defeng; Shen Mingqi

    2014-01-01

    The core and reactor structure of the AP1000 were modeled with the MCNP calculation program, which is based on the Monte Carlo method. The neutron and photon distributions of the AP1000 reactor core were calculated under critical conditions. The results show that the AP1000 reactor neutron and photon distributions are in accordance with the critical design of a PWR. (authors)

  15. CRISTE - a subroutine for the transient axial temperature distribution in a PWR reactor channel

    International Nuclear Information System (INIS)

    Silva Neto, A.J. da; Roberty, N.C.; Carmo, E.G.D. do.

    1983-12-01

    The subroutine CRISTE was developed to calculate the temperature distribution in a PWR coolant during transients. The Crank-Nicolson approximation was used for the temporal discretization, and a semi-analytical spatial solution was obtained. The cladding temperature was simulated by a routine adapted from the steady-state distribution and was used in an iterative method following the CRISTE subroutine. (E.G.) [pt

  16. An alternative technique for simulating volumetric cylindrical sources in the Morse code utilization

    International Nuclear Information System (INIS)

    Vieira, W.J.; Mendonca, A.G.

    1985-01-01

    In the solution of deep-penetration problems using the Monte Carlo method, calculation techniques and strategies are used in order to increase the particle population in the regions of interest. A common procedure is the coupling of two-dimensional (r,z) discrete-ordinates calculations, transformed into source data, with three-dimensional Monte Carlo calculations. An alternative technique for this procedure is presented. The alternative proved effective when applied to a sample problem. (F.E.) [pt
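
    Whatever the coupling scheme, a volumetric cylindrical source ultimately requires sampling starting positions uniformly over a cylinder; the radial coordinate must be drawn as sqrt(U) because the area element grows linearly with radius. A generic sketch (not the MORSE input processing itself):

        import numpy as np

        def sample_cylinder(n, r_max, z_min, z_max, rng=np.random.default_rng(1)):
            """Uniform source positions inside a cylinder."""
            r = r_max * np.sqrt(rng.uniform(size=n))      # sqrt keeps density uniform in r
            theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
            z = rng.uniform(z_min, z_max, size=n)
            return np.column_stack((r * np.cos(theta), r * np.sin(theta), z))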

  17. On-line generation of the core monitoring power distribution in SCOMS coupled with a core design code

    International Nuclear Information System (INIS)

    Lee, K. B.; Kim, K. K.; In, W. K.; Ji, S. K.; Jang, M. H.

    2002-01-01

    The paper describes the methodology and the main program module for the power distribution calculation of SCOMS (SMART COre Monitoring System), together with simulation results for the SMART core using the developed SCOMS. The planar radial peaking factor (Fxy) is relatively high in the SMART core because the control banks are inserted into the core during normal operation. If the conventional core monitoring method were adopted for SMART, the highly skewed planar radial peaking factor Fxy would yield excessive conservatism and reduce the operating margin. In addition, the core monitoring error would grow, further degrading the operating margin, because it is impossible at the design stage to precalculate the core monitoring constants for all control bank configurations while taking the operation history into account. To remove these drawbacks of the conventional power distribution calculation methodology, a new methodology to calculate the three-dimensional power distribution was developed. Core monitoring constants are calculated with the core design code (MASTER), which is on-line coupled with SCOMS. The three-dimensional (3D) power distribution and the various peaking factors are calculated in real time using the in-core detector signals and the core monitoring constants. The developed methodology was applied to the SMART core and various core states were simulated. Based on the simulation results, the three-dimensional peaking factor used to calculate the linear power density and the pseudo hot-pin axial power distribution used to calculate the departure from nucleate boiling ratio are more conservative than those of the best-estimate core design code, and SCOMS with the developed methodology secures more operating margin than the conventional methodology

  18. Voltage management of distribution networks with high penetration of distributed photovoltaic generation sources

    Science.gov (United States)

    Alyami, Saeed

    Installation of photovoltaic (PV) units can pose great challenges to existing electrical systems. Issues such as voltage rise, protection coordination, islanding detection, harmonics, and increased or changed short-circuit levels need to be carefully addressed before we can see wide adoption of this environmentally friendly technology. Voltage rise, or overvoltage, issues are of particular importance to address when deploying more PV systems on distribution networks. This dissertation proposes a comprehensive solution to deal with voltage violations in distribution networks, from controlling PV power outputs and the electricity consumption of smart appliances in real time to optimal placement of PVs at the planning stage. The dissertation is composed of three parts: the literature review, the work that has already been done, and the future research tasks. An overview of renewable energy generation and its challenges is given in Chapter 1, where the overall literature survey, motivation and scope of the study are also outlined; detailed literature reviews are given in the remaining chapters. The overvoltage and undervoltage phenomena in typical distribution networks with integrated PV are further explained in Chapter 2, along with possible approaches for voltage quality control and a discussion of the importance of load management for PHEVs and appliances and its benefits to electric utilities and end users. A new real power capping method is presented in Chapter 3 to prevent overvoltage by adaptively setting the power caps for PV inverters in real time. The proposed method can maintain voltage profiles below a pre-set upper limit while maximizing the PV generation and fairly distributing the real power curtailments among all the PV systems in the network. As a result, each of the PV systems in the network has equal opportunity to generate electricity and shares the responsibility of voltage
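
    A hypothetical single pass of such an adaptive capping loop (the control gain and fairness rule here are our assumptions, not the dissertation's controller):

        def update_caps(caps, p_avail, v_max, v_limit=1.05, step=0.02):
            """When the highest feeder voltage (p.u.) exceeds the limit, reduce every
            inverter's cap by the same fraction of its available power, so all PV
            systems share curtailment equally; otherwise relax toward full output."""
            sign = -1.0 if v_max > v_limit else 1.0
            return [min(pa, max(0.0, c + sign * step * pa))
                    for c, pa in zip(caps, p_avail)]

        caps = [5.0, 3.0, 4.0]                                  # kW caps per PV system
        caps = update_caps(caps, [5.0, 3.0, 4.0], v_max=1.07)   # overvoltage: cut back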

  19. Flows and Stratification of an Enclosure Containing Both Localised and Vertically Distributed Sources of Buoyancy

    Science.gov (United States)

    Partridge, Jamie; Linden, Paul

    2013-11-01

    We examine the flows and stratification established in a naturally ventilated enclosure containing both a localised and a vertically distributed source of buoyancy. The enclosure is ventilated through upper and lower openings which connect the space to an external ambient. Small-scale laboratory experiments were carried out with water as the working medium and buoyancy driven directly by temperature differences. A point-source plume gave localised heating, while the distributed source was driven by a controllable heater mat located in the side wall of the enclosure. The transient temperatures, as well as the steady-state temperature profiles, were recorded and are reported here. The temperature profiles inside the enclosure were found to depend on the effective opening area A*, a combination of the upper and lower openings, and on the ratio of buoyancy fluxes from the distributed and localised sources, Ψ = Bw/Bp. Industrial CASE award with ARUP.

  20. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local universe

    DEFF Research Database (Denmark)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene

    2017-01-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe....... Assuming that the distribution of the neutrino sources follows that of matter we look for correlations between `warm' spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance...... (including that of IceCube-Gen2) we demonstrate that sources with local density exceeding $10^{-6} \\, \\text{Mpc}^{-3}$ and neutrino luminosity $L_{\

  1. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ - supplementary report

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, Jr, D E; Pleasant, J C; Killough, G G

    1980-05-01

    The purpose of this report is to describe revisions in the SFACTOR computer code and to provide useful documentation for that program. The SFACTOR computer code has been developed to implement current methodologies for computing the average dose equivalent rate S(X ← Y) to specified target organs in man due to 1 μCi of a given radionuclide uniformly distributed in designated source organs. The SFACTOR methodology is largely based upon that of Snyder; however, it has been expanded to include components of S from alpha and spontaneous fission decay, in addition to electron and photon radiations. With this methodology, S-factors can be computed for any radionuclide for which decay data are available. The tabulations in Appendix II provide a reference compilation of S-factors for several dosimetrically important radionuclides which are not available elsewhere in the literature. These S-factors are calculated for an adult with characteristics similar to those of the International Commission on Radiological Protection's Reference Man. Corrections to tabulations from Dunning are presented in Appendix III, based upon the methods described in Section 2.3. 10 refs.

  2. The Integration of Renewable Energy Sources into Electric Power Distribution Systems, Vol. II Utility Case Assessments

    Energy Technology Data Exchange (ETDEWEB)

    Zaininger, H.W.

    1994-01-01

    Electric utility distribution system impacts associated with the integration of renewable energy sources such as photovoltaics (PV) and wind turbines (WT) are considered in this project. The impacts are expected to vary from site to site according to the following characteristics: the local solar insolation and/or wind characteristics, the renewable energy source penetration level, whether battery or other energy storage systems are applied, and local utility distribution design standards and planning practices. Small, distributed renewable energy sources are connected to the utility distribution system like other, similar kW- and MW-scale equipment and loads. Residential applications are expected to be connected to single-phase 120/240-V secondaries. Larger kW-scale applications may be connected to three-phase secondaries, and larger hundred-kW and MW-scale applications, such as MW-scale windfarms or PV plants, may be connected to electric utility primary systems via customer-owned primary and secondary collection systems. In any case, the installation of small, distributed renewable energy sources is expected to have a significant impact on local utility distribution primary and secondary system economics. Small, distributed renewable energy sources installed on utility distribution systems will also produce non-site-specific utility generation system benefits such as energy and capacity displacement benefits, in addition to the local site-specific distribution system benefits. Although generation system benefits are not site-specific, they are utility-specific, and they vary significantly among utilities in different regions. In addition, transmission system benefits, environmental benefits and other benefits may apply. These benefits also vary significantly among utilities and regions. Seven utility case studies considering PV, WT, and battery storage were conducted to identify a range of potential renewable energy source distribution system applications.

  3. Advanced Neutron Source Dynamic Model (ANSDM) code description and user guide

    International Nuclear Information System (INIS)

    March-Leuba, J.

    1995-08-01

    A mathematical model is designed that simulates the dynamic behavior of the Advanced Neutron Source (ANS) reactor. Its main objective is to model important characteristics of the ANS systems as they are being designed, updated, and employed; its primary design goal is to aid in the development of safety and control features. The model was also found to aid in making design decisions for thermal-hydraulic systems during the simulations. Model components, empirical correlations, and model parameters are discussed; sample procedures are also given. Modifications are cited, and significant development and application efforts are noted, focusing on the examination of instrumentation required during and after accidents to ensure adequate monitoring during transient conditions.

  4. Basic design of the HANARO cold neutron source using MCNP code

    International Nuclear Information System (INIS)

    Yu, Yeong Jin; Lee, Kye Hong; Kim, Young Jin; Hwang, Dong Gil

    2005-01-01

    The design of the Cold Neutron Source (CNS) for the HANARO research reactor is in progress. The CNS produces neutrons in the low energy range, below 5 meV, using liquid hydrogen at around 21.6 K as the moderator. The primary goal of the CNS design is to maximize the cold neutron flux at wavelengths of around 2-12 Å and to minimize the nuclear heat load. In this paper, the basic design of the HANARO CNS is described.

  5. Transluminal color-coded three-dimensional magnetic resonance angiography for visualization of signal intensity distribution patterns within an unruptured cerebral aneurysm: preliminary assessment with anterior communicating artery aneurysms

    International Nuclear Information System (INIS)

    Satoh, T.; Ekino, C.; Ohsako, C.

    2004-01-01

    The natural history of unruptured cerebral aneurysms is not known; also unknown is the potential for growth and rupture of any individual aneurysm. The authors have developed transluminal color-coded three-dimensional magnetic resonance angiography (MRA), obtained with a time-of-flight sequence, to investigate the interaction between intra-aneurysmal signal intensity distribution patterns and the configuration of unruptured cerebral aneurysms. Transluminal color-coded images were reconstructed from the volume data of source magnetic resonance angiography by using a parallel volume-rendering algorithm with a transluminal imaging technique. By selecting a numerical threshold range from a signal intensity opacity chart of the three-dimensional volume-rendering dataset, several areas of signal intensity were depicted, assigned different colors, and visualized transparently through the walls of the parent arteries and the aneurysm. Patterns of signal intensity distribution were analyzed in three surgically treated cases of unruptured anterior communicating artery aneurysm and compared with the actual configurations observed at microneurosurgery. A small difference in the marginal features of the aneurysms was observed; nevertheless, the transluminal color-coded images visualized the complex signal intensity distribution within an aneurysm in conjunction with the aneurysmal geometry. Transluminal color-coded three-dimensional magnetic resonance angiography can thus provide numerical analysis of the interaction between spatial signal intensity distribution patterns and aneurysmal configurations, and may offer an alternative, practical method to investigate the patient-specific natural history of individual unruptured cerebral aneurysms. (orig.)

  6. Confusion-limited extragalactic source survey at 4.755 GHz. I. Source list and areal distributions

    International Nuclear Information System (INIS)

    Ledden, J.E.; Broderick, J.J.; Condon, J.J.; Brown, R.L.

    1980-01-01

    A confusion-limited 4.755-GHz survey covering 0.00956 sr between right ascensions 07h05m and 18h near declination +35° has been made with the NRAO 91-m telescope. The survey found 237 sources and is complete above 15 mJy. Source counts between 15 and 100 mJy were obtained directly. The P(D) distribution was used to determine the number counts between 0.5 and 13.2 mJy, to search for anisotropy in the density of faint extragalactic sources, and to set a 99%-confidence upper limit of 1.83 mK on the rms temperature fluctuation of the 2.7-K cosmic microwave background on angular scales smaller than 7.3 arcmin. The discrete-source density, normalized to the static Euclidean slope, falls off sufficiently rapidly below 100 mJy that no new population of faint flat-spectrum sources is required to explain the 4.755-GHz source counts.

  7. Personalized reminiscence therapy M-health application for patients living with dementia: Innovating using open source code repository.

    Science.gov (United States)

    Zhang, Melvyn W B; Ho, Roger C M

    2017-01-01

    Dementia is an illness that brings marked disability among elderly individuals. At times, patients living with dementia also experience non-cognitive symptoms, including hallucinations, delusional beliefs, emotional lability, sexualized behaviours and aggression. According to the National Institute for Health and Care Excellence (NICE) guidelines, non-pharmacological techniques are typically the first-line option prior to the consideration of adjuvant pharmacological options. Reminiscence and music therapy are thus viable options. Lazar et al. [3] previously performed a systematic review of the utilization of technology to deliver reminiscence-based therapy to individuals living with dementia and highlighted that technology does have benefits in the delivery of reminiscence therapy. However, to date, there has been a paucity of M-health innovations in this area. In addition, most current innovations are not personalized for each person living with dementia. Prior research has highlighted the utility of open source repositories in bioinformatics studies. The authors explain how they tapped into and made use of an open source code repository in the development of a personalized M-health reminiscence therapy innovation for patients living with dementia. The availability of open source code repositories has changed the way healthcare professionals and developers develop smartphone applications today. Conventionally, a long iterative process is needed in the development of a native application, mainly because of the need for native programming and coding, especially if the application needs interactive features or features that can be personalized. Such repositories enable the rapid and cost-effective development of applications. Moreover, developers are also able to innovate further, as less time is spent in the iterative process.

  8. Self characterization of a coded aperture array for neutron source imaging

    Energy Technology Data Exchange (ETDEWEB)

    Volegov, P. L., E-mail: volegov@lanl.gov; Danly, C. R.; Guler, N.; Merrill, F. E.; Wilde, C. H. [Los Alamos National Laboratory, Los Alamos, New Mexico 87544 (United States); Fittinghoff, D. N. [Livermore National Laboratory, Livermore, California 94550 (United States)

    2014-12-15

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (∼100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  9. Suggested Grid Code Modifications to Ensure Wide-Scale Adoption of Photovoltaic Energy in Distributed Power Generation Systems

    DEFF Research Database (Denmark)

    Yang, Yongheng; Enjeti, Prasad; Blaabjerg, Frede

    2013-01-01

    Current grid standards seem to largely require low power (e.g. several kilowatts) single-phase photovoltaic (PV) systems to operate at unity power factor with maximum power point tracking, and to disconnect from the grid under grid faults. However, in case of a wide-scale penetration of single-phase PV systems in the distributed grid, the disconnection under grid faults can contribute to: a) voltage flickers, b) power outages, and c) system instability. In this paper, grid code modifications are explored for wide-scale adoption of PV systems in the distribution grid. More recently, Italy and Japan have undertaken a major review of standards for PV power conversion systems connected to low voltage networks. In view of this, the importance of low voltage ride-through for single-phase PV power systems under grid faults, along with reactive power injection, is studied in this paper. Three...

  10. Calculation of the radial and axial flux and power distribution for a CANDU 6 reactor with both the MCNP6 and Serpent codes

    International Nuclear Information System (INIS)

    Hussein, M.S.; Bonin, H.W.; Lewis, B.J.

    2014-01-01

    The most recent versions of the Monte Carlo-based probabilistic transport code MCNP6 and the continuous energy reactor physics burnup calculation code Serpent allow for a 3-D geometry calculation accounting for the detailed geometry without unit-cell homogenization. These two codes are used to calculate the axial and radial flux and power distributions for a CANDU6 GENTILLY-2 nuclear reactor core with 37-element fuel bundles. The multiplication factor, actual flux distribution and power density distribution were calculated by using a tally combination for MCNP6 and detector analysis for Serpent. Excellent agreement was found in the calculated flux and power distribution. The Serpent code is most efficient in terms of the computational time. (author)

  11. Calculation of the radial and axial flux and power distribution for a CANDU 6 reactor with both the MCNP6 and Serpent codes

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, M.S.; Bonin, H.W., E-mail: mohamed.hussein@rmc.ca, E-mail: bonin-h@rmc.ca [Royal Military College of Canada, Dept. of Chemistry and Chemical Engineering, Kingston, ON (Canada); Lewis, B.J., E-mail: Brent.Lewis@uoit.ca [Univ. of Ontario Inst. of Tech., Faculty of Energy Systems and Nuclear Science, Oshawa, ON (Canada)

    2014-07-01

    The most recent versions of the Monte Carlo-based probabilistic transport code MCNP6 and the continuous energy reactor physics burnup calculation code Serpent allow for a 3-D geometry calculation accounting for the detailed geometry without unit-cell homogenization. These two codes are used to calculate the axial and radial flux and power distributions for a CANDU6 GENTILLY-2 nuclear reactor core with 37-element fuel bundles. The multiplication factor, actual flux distribution and power density distribution were calculated by using a tally combination for MCNP6 and detector analysis for Serpent. Excellent agreement was found in the calculated flux and power distribution. The Serpent code is most efficient in terms of the computational time. (author)

  12. Characterization of 60Co dose distribution using BEAMnrc Monte Carlo code

    International Nuclear Information System (INIS)

    Abuissa, M. I. M.

    2012-12-01

    In this study, the BEAMnrc Monte Carlo code, based on EGSnrc, was used to model and simulate the 60Co machine at the Radioisotope Centre of Khartoum (RICK). Two field sizes (5 cm x 5 cm and 35 cm x 35 cm) were studied to characterize the 60Co machine and to investigate the effect of increasing the source-to-skin distance (SSD) on its properties, e.g. the beam profile and percentage depth dose (PDD). For the narrow field size, only a small change was observed in the curves representing the beam profile and percentage depth dose when the distance was increased by 5 cm; for the wide field size, the difference in the curves was relatively clear. The results were compared with other previous studies and clear consistency was observed. (Author)

  13. Variable code gamma ray imaging system

    International Nuclear Information System (INIS)

    Macovski, A.; Rosenfeld, D.

    1979-01-01

    A gamma-ray source distribution in the body is imaged onto a detector using an array of apertures. The transmission of each aperture is modulated using a code such that the individual views of the source through each aperture can be decoded and separated. The codes are chosen to maximize the signal-to-noise ratio for each source distribution. These codes determine the photon collection efficiency of the aperture array. Planar arrays are used for volumetric reconstructions and circular arrays for cross-sectional reconstructions. 14 claims
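
    To make the code-multiplexing idea concrete, here is a toy one-dimensional decode in Python: each aperture's view of the source is modulated over time by one of a set of mutually orthogonal codes, the detector records only the coded sums, and correlating against each code separates the views. The ±1 Hadamard codes are chosen for algebraic clarity (a physical mask transmits 0 or 1, adding a DC term that must be measured and subtracted); neither the codes nor the geometry are those of the patented system.

        import numpy as np

        H = np.array([[1,  1,  1,  1],
                      [1, -1,  1, -1],
                      [1,  1, -1, -1],
                      [1, -1, -1,  1]])       # 4 orthogonal codes, H @ H.T = 4*I

        views = np.random.default_rng(0).random((4, 16))  # unknown per-aperture views
        frames = H.T @ views                  # detector frames: coded sums only
        recovered = (H @ frames) / 4          # correlation decode separates views
        assert np.allclose(recovered, views)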

  14. Studies on the supposition of liquid source for irradiation and its dose distribution, (1)

    International Nuclear Information System (INIS)

    Yoshimura, Seiji; Nishida, Tsuneo

    1977-01-01

    Radioisotopes have recently been used and applied in various fields, and applications of irradiation effects are expected to attract particular attention in the future. To date, irradiation sources have been solid materials sealed into capsules of various kinds. We propose instead the use of a liquid radioisotope as the irradiation source, because it offers advantages over solid sources in freedom of shape and ease of adjusting the attenuation. In these experiments we measured the dose distribution of a columnar liquid source. We expect these results to be put to practical use. (auth.)

  15. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    Science.gov (United States)

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

    During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundred km for large landslides). The recorded signals depend on the landslide seismic source and on the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus used to obtain information on landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible because topography poorly affects wave propagation at these long periods and the landslide seismic source can be approximated as a point source. In the near field and at higher frequencies (> 1 Hz), the spatial extent of the source has to be taken into account, and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  16. Multimedia Cross-Platform Content Distribution for Mobile Peer-to-Peer Networks using Network Coding

    DEFF Research Database (Denmark)

    Pedersen, Morten Videbæk; Heide, Janus; Vingelmann, Peter

    2010-01-01

    communication. In this paper we introduce a mobile application that runs on Symbian as well as iPhone/iPod devices and is able to exchange multimedia content in a point-to-multipoint fashion. The mobile application, coined PictureViewer, can convey pictures from one source device to many neighboring devices...

  17. CELLDOSE: A Monte Carlo code to assess electron dose distribution - S values for 131I in spheres of various sizes

    International Nuclear Information System (INIS)

    Champion, C.; Zanotti-Fregonara, P.; Hindie, E; Hindie, E.

    2008-01-01

    Monte Carlo simulation can be particularly suitable for modeling the microscopic distribution of energy received by normal tissues or cancer cells and for evaluating the relative merits of different radiopharmaceuticals. We used a new code, CELLDOSE, to assess the electron dose for isolated spheres with radii varying from 2,500 μm down to 0.05 μm, in which 131I is homogeneously distributed. Methods: All electron emissions of 131I were considered, including the whole 131I β- spectrum, 108 internal conversion electrons, and 21 Auger electrons. The Monte Carlo track-structure code used follows all electrons down to an energy threshold E-cutoff = 7.4 eV. Results: Calculated S values were in good agreement with published analytic methods, lying in between reported results for all experimental points. Our S values were also close to other published data using a Monte Carlo code. Contrary to the latter published results, our results show that the dose distribution inside spheres is not homogeneous, with the dose at the outermost layer being approximately half that at the center. The fraction of electron energy retained within the spheres decreased with decreasing radius (r): 87.1% for r = 2,500 μm, 8.73% for r = 50 μm, and 1.18% for r = 5 μm. Thus, a radioiodine concentration that delivers a dose of 100 Gy to a micrometastasis of 2,500 μm radius would deliver 10 Gy in a cluster of 50 μm and only 1.4 Gy in an isolated cell. The specific contribution from Auger electrons varied from 0.25% for the largest sphere up to 76.8% for the smallest sphere. Conclusion: The dose to a tumor cell will depend on its position in a metastasis. For the treatment of very small metastases, 131I may not be the isotope of choice. When trying to kill isolated cells or a small cluster of cells with 131I, it is important to get the iodine as close as possible to the nucleus to benefit from the enhancement factor of the Auger electrons. The Monte Carlo code CELLDOSE can be used to assess the electron dose map deposited

  18. Delaunay Tetrahedralization of the Heart Based on Integration of Open Source Codes

    International Nuclear Information System (INIS)

    Pavarino, E; Neves, L A; Machado, J M; Momente, J C; Zafalon, G F D; Pinto, A R; Valêncio, C R; Godoy, M F de; Shiyou, Y; Nascimento, M Z do

    2014-01-01

    The Finite Element Method (FEM) is a numerical solution method applied in different areas, such as the simulations used in studies to improve cardiac ablation procedures. For this purpose, the meshes should have the same size and histological features as the structures of interest. Some methods and tools used to generate tetrahedral meshes are limited mainly by their conditions of use. In this paper, the integration of open source software is presented as an alternative for solid modeling and automatic mesh generation. To demonstrate its efficiency, cardiac structures were considered as a first application context: atriums, ventricles, valves, arteries and pericardium. The proposed method can obtain refined meshes in acceptable time and with the quality required for simulations using FEM

  19. A calculation of dose distribution around 32P spherical sources and its clinical application

    International Nuclear Information System (INIS)

    Ohara, Ken; Tanaka, Yoshiaki; Nishizawa, Kunihide; Maekoshi, Hisashi

    1977-01-01

    In order to avoid radiation hazards in the radiation therapy of craniopharyngioma using 32P, it is helpful to prepare a detailed dose distribution in the tissue in the vicinity of the source. Valley's method is used for the calculations. A problem with the method is pointed out and the method itself is refined numerically: the region of ξ where an approximate polynomial is available is extended, and the optimum degree of the polynomial is determined to be 9. The usefulness of the polynomial is examined by comparison with Berger's scaled absorbed dose distribution F(ξ) and Valley's result. The dose and dose-rate distributions around uniformly distributed spherical sources are computed from termwise integration of the degree-9 polynomial over the range of ξ from 0 to 1.7. The dose distributions from the spherical surface to a point 0.5 cm outside the source are given for source radii of 0.5, 0.6, 0.7, 1.0, and 1.5 cm. The therapeutic dose for a craniopharyngioma with a spherically shaped cyst, and the absorbed dose to normal tissue (oculomotor nerve), are obtained from these dose-rate distributions. (auth.)

  20. Assessment of pin-by-pin fission rate distribution within MOX/UO2 fuel assembly using MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Louis, Heba Kareem; Amin, Esmat [Nuclear and Radiological Regulation Authority (NRRA), Cairo (Egypt). Safety Engineering Dept.

    2016-03-15

    The aim of the present paper is to assess calculations of pin-by-pin group-integrated fission rates within MOX/UO2 fuel assemblies using the Monte Carlo code MCNPX 2.7c with two sets of the latest available nuclear data libraries used for calculating MOX-fueled systems. The data used in this paper are based on the benchmark by the NEA Nuclear Science Committee (NSC). The k∞ and absorption/fission reaction rates per isotope, keff, and pin-by-pin group-integrated fission rates on a 1/8 fraction of the geometry are determined. To assess the overall pin-by-pin fission rate distribution, collective percent error measures were investigated. The AVG, MRE and RMS error measures were all less than 1%. The present results are compared with those of other participants using other Monte Carlo codes and with the CEA results that were taken as the reference in the benchmark. The results with ENDF/B-VI.6 are close to the results received by MVP (JENDL3.2) and SCALE 4.2 (JEF2.2). The results with ENDF/B-VII.1 give higher values of k∞, reflecting the changes in the newer evaluations. In almost all results presented here, the MCNP results calculated with ENDF/B-VII.1 should be considered more reliable than those obtained by using other Monte Carlo codes and nuclear data libraries. The present calculations may be considered a reference for evaluating the numerical schemes in production code systems, as well as the global performance including cross-section data reduction methods, as the calculations used continuous energy and no geometrical approximations.

  1. The Impact of Source Distribution on Scalar Transport over Forested Hills

    Science.gov (United States)

    Ross, Andrew N.; Harman, Ian N.

    2015-08-01

    Numerical simulations of neutral flow over a two-dimensional, isolated, forested ridge are conducted to study the effects of scalar source distribution on scalar concentrations and fluxes over forested hills. Three different constant-flux sources are considered that span a range of idealized but ecologically important source distributions: a source at the ground, one uniformly distributed through the canopy, and one decaying with depth in the canopy. A fourth source type, where the in-canopy source depends on both the wind speed and the difference in concentration between the canopy and a reference concentration on the leaf, designed to mimic deposition, is also considered. The simulations show that the topographically-induced perturbations to the scalar concentration and fluxes are quantitatively dependent on the source distribution. The net impact is a balance of different processes affecting both advection and turbulent mixing, and can be significant even for moderate topography. Sources that have significant input in the deep canopy or at the ground exhibit a larger magnitude advection and turbulent flux-divergence terms in the canopy. The flows have identical velocity fields and so the differences are entirely due to the different tracer concentration fields resulting from the different source distributions. These in-canopy differences lead to larger spatial variations in above-canopy scalar fluxes for sources near the ground compared to cases where the source is predominantly located near the canopy top. Sensitivity tests show that the most significant impacts are often seen near to or slightly downstream of the flow separation or reattachment points within the canopy flow. The qualitative similarities to previous studies using periodic hills suggest that important processes occurring over isolated and periodic hills are not fundamentally different. The work has important implications for the interpretation of flux measurements over forests, even in

  2. The electron-dose distribution surrounding an 192Ir wire brachytherapy source investigated using EGS4 simulations and GafChromic film

    International Nuclear Information System (INIS)

    Cheung, Y.C.; Yu, P.K.N.; Young, E.C.M.; Wong, T.P.Y.

    1997-01-01

    The steep dose gradient around 192Ir brachytherapy wire implants is predicted by the EGS4 (PRESTA version) Monte Carlo simulation. When considering radiation-absorbing regions close to the wire source, the accurate dose distribution cannot be calculated by the GE Target II Sun Sparc treatment-planning system. Experiments using GafChromic™ film were performed to prove the validity of the EGS4 user code when calculating the dose close to the wire source in a low energy range. (Author)

  3. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    Science.gov (United States)

    Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.

    2017-01-01

    Photon count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into: (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so-far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as on the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and the capabilities of the 1pPDF method.

  4. Introducing a distributed unstructured mesh into gyrokinetic particle-in-cell code, XGC

    Science.gov (United States)

    Yoon, Eisung; Shephard, Mark; Seol, E. Seegyoung; Kalyanaraman, Kaushik

    2017-10-01

    XGC has shown good scalability on large leadership supercomputers. The current production version uses a copy of the entire unstructured finite element mesh on every MPI rank. Besides being an obvious scalability issue if mesh sizes are to be dramatically increased, the current approach is also not optimal with respect to the data locality of particles and mesh information. To address these issues we have initiated the development of a distributed-mesh PIC method. This approach directly addresses the base scalability issue with respect to mesh size and, through the use of a mesh-entity-centric view of the particle-mesh relationship, provides opportunities to address the data locality needs of many-core and GPU-supported heterogeneous systems. The parallel mesh PIC capabilities are being built on the Parallel Unstructured Mesh Infrastructure (PUMI). The presentation will first give an overview of the form of mesh distribution used and indicate the structures and functions used to support the mesh, the particles and their interaction. Attention will then focus on the node-level optimizations being carried out to ensure performant operation of all PIC operations on the distributed mesh. Partnership for Edge Physics Simulation (EPSI) Grant No. DE-SC0008449 and Center for Extended Magnetohydrodynamic Modeling (CEMM) Grant No. DE-SC0006618.
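
    A minimal sketch of the mesh-entity-centric bookkeeping described above: particles are binned under the mesh element that contains them, so a rank needs only its own mesh part and re-bins (or migrates) particles after every push. The push and locate callbacks and the dict-of-lists layout are illustrative assumptions, not PUMI or XGC APIs.

        from collections import defaultdict

        def advance(particles_by_elem, push, locate):
            """One PIC step with element-centric particle storage."""
            new_bins = defaultdict(list)
            for elem, parts in particles_by_elem.items():
                for p in parts:
                    p = push(p)                   # update position and velocity
                    # adjacency search starting from the previous element;
                    # particles landing in an element owned by another rank
                    # would be migrated to that rank here
                    new_bins[locate(p, hint=elem)].append(p)
            return new_bins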

  5. Blind cooperative diversity using distributed space-time coding in block fading channels

    KAUST Repository

    Tourki, Kamel

    2010-08-01

    Mobile users with single antennas can still take advantage of spatial diversity through cooperative space-time encoded transmission. In this paper, we consider a scheme in which a relay chooses to cooperate only if its source-relay channel is of acceptable quality, and we evaluate the usefulness of relaying when the source acts blindly and ignores the decisions of the relays as to whether they may cooperate or not. In our study, we consider regenerative relays in which the decisions to cooperate are based on a signal-to-noise ratio (SNR) threshold, and we consider the impact of possibly erroneously detected and transmitted data at the relays. We derive the end-to-end bit-error rate (BER) expression and its approximation for binary phase-shift keying modulation and look at two power allocation strategies between the source and the relays in order to minimize the end-to-end BER at the destination for high SNR. Selected performance results show that the simulation-based results coincide well with our analytical results. © 2010 IEEE.

  7. The Journey of a Source Line: How your Code is Translated into a Controlled Flow of Electrons

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    In this series we help you understand the bits and pieces that make your code command the underlying hardware. A multitude of layers translate and optimize source code, written in compiled and interpreted programming languages such as C++, Python or Java, to machine language. We explain the role and behavior of the layers in question in a typical usage scenario. While our main focus is on compilers and interpreters, we also talk about other facilities - such as the operating system, instruction sets and instruction decoders. Biographie: Andrzej Nowak runs TIK Services, a technology and innovation consultancy based in Geneva, Switzerland. In the recent past, he co-founded and sold an award-winning Fintech start-up focused on peer-to-peer lending. Earlier, Andrzej worked at Intel and in the CERN openlab. At openlab, he managed a lab collaborating with Intel and was part of the Chief Technology Office, which set up next-generation technology projects for CERN and the openlab partners.

  8. Numeral series hidden in the distribution of atomic mass of amino acids to codon domains in the genetic code.

    Science.gov (United States)

    Wohlin, Åsa

    2015-03-21

    The distribution of codons in the nearly universal genetic code is a long-discussed issue. At the atomic level, the numeral series 2x² (x = 5-0) lies behind electron shells and orbitals. Numeral series appear in formulas for the spectral lines of hydrogen. The question here was whether some similar scheme could be found in the genetic code. A table of 24 codons was constructed (synonyms counted as one) for 20 amino acids, four of which have two different codons. An atomic mass analysis was performed, built on common isotopes. It was found that a numeral series 5 to 0 with exponent 2/3, times 10², revealed detailed congruency with codon-grouped amino acid side-chains, simultaneously with the division into atom kinds, further with the main 3rd-base groups, backbone chains, and with codon-grouped amino acids in relation to their origin from glycolysis or the citrate cycle. Hence, it is proposed that this series may, in a dynamic way, have guided the selection of amino acids into codon domains. Series with simpler exponents also showed noteworthy correlations with the atomic mass distribution on the main codon domains; especially the 2x²-series times a factor 16 appeared as a conceivable underlying level, both for the atomic mass and the charge distribution. Furthermore, it was found that atomic mass transformations between numeral systems, possibly interpretable as dimension degree steps, connected the atomic mass of codon bases with codon-grouped amino acids and with the exponent 2/3-series in several astonishing ways. Thus, it is suggested that they may be part of a deeper reference system. Copyright © 2015 The Author. Published by Elsevier Ltd. All rights reserved.
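
    For reference, the numeral series the abstract starts from is the familiar shell-capacity rule, reproduced by a one-liner:

        # 2x^2 for x = 5..0 gives the maximum electron counts of atomic shells
        print([2 * x**2 for x in range(5, -1, -1)])   # [50, 32, 18, 8, 2, 0]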

  9. Z-Source-Inverter-Based Flexible Distributed Generation System Solution for Grid Power Quality Improvement

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Vilathgamuwa, D. M.; Loh, Poh Chiang

    2009-01-01

    Distributed generation (DG) systems are usually connected to the grid using power electronic converters. Power delivered from such DG sources depends on factors like energy availability and load demand. The converters used in power conversion do not operate at their full capacity all the time... As a single-stage buck-boost inverter, the recently proposed Z-source inverter (ZSI) is a good candidate for future DG systems. This paper presents a controller design for a ZSI-based DG system to improve the power quality of distribution systems. The proposed control method is tested with simulation results obtained using...

  10. MCNP(TM) Release 6.1.1 beta: Creating and Testing the Code Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Cox, Lawrence J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Casswell, Laura [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-06-12

    This report documents the preparations for and testing of the production release of MCNP6™ 1.1 beta through RSICC at ORNL. It addresses tests on supported operating systems (Linux, MacOSX, Windows) with the supported compilers (Intel, Portland Group and gfortran). Verification and Validation test results are documented elsewhere. This report does not address in detail the overall packaging of the distribution. Specifically, it does not address the nuclear and atomic data collection, the other included software packages (MCNP5, MCNPX and MCNP6) and the collection of reference documents.

  11. Review of the status of validation of the computer codes used in the severe accident source term reassessment study (BMI-2104)

    International Nuclear Information System (INIS)

    Kress, T.S.

    1985-04-01

    The determination of severe accident source terms must, by necessity it seems, rely heavily on the use of complex computer codes. Source term acceptability therefore rests on the assessed validity of such codes. Consequently, one element of NRC's recent efforts to reassess LWR severe accident source terms is a review of the status of validation of the computer codes used in the reassessment. The results of this review are the subject of this document. The separate review documents compiled in this report were used as a resource, along with the results of the BMI-2104 study by BCL and the QUEST study by SNL, to arrive at a more-or-less independent appraisal of the status of source term modeling at this time.

  12. Calculation of the secondary gamma radiation by the Monte Carlo method at displaced sampling from distributed sources

    International Nuclear Information System (INIS)

    Petrov, Eh.E.; Fadeev, I.A.

    1979-01-01

    A possibility to use displaced sampling from a bulk gamma source in calculating secondary gamma fields by the Monte Carlo method is discussed. The proposed algorithm is based on the concept of conjugate functions together with a dispersion-minimization technique. For simplicity, a plane source is considered. The algorithm has been implemented on the M-220 computer. The differential gamma current and flux spectra in 21-cm-thick lead have been calculated. The source of secondary gamma quanta was assumed to be distributed, constant and isotropic, emitting 4 MeV gamma quanta at a rate of 10⁹ quanta/(cm³·s). The calculations demonstrated that the last 7 cm of lead are responsible for the whole gamma spectral pattern. The spectra practically coincide with those calculated by the ROZ computer code. Thus the proposed algorithm can be effectively used in calculations of secondary gamma radiation transport, reducing the computation time by a factor of 2-4.

  13. Monte Carlo simulation of scatter in non-uniform symmetrical attenuating media for point and distributed sources

    International Nuclear Information System (INIS)

    Henry, L.J.; Rosenthal, M.S.

    1992-01-01

    We report results of scatter simulations for both point and distributed sources of 99mTc in symmetrical non-uniform attenuating media. The simulations utilized Monte Carlo techniques and were tested against experimental phantoms. Both point and ring sources were used inside a 10.5-cm-radius acrylic phantom. Attenuating media consisted of combinations of water, ground beef (to simulate muscle mass), air and bone meal (to simulate bone mass). We estimated/measured energy spectra, detector efficiencies and peak height ratios for all cases. In all cases, the simulated spectra agree with the experimentally measured spectra within 2 SD. Detector efficiencies and peak height ratios are also in agreement. The Monte Carlo code is able to properly model the non-uniform attenuating media used in this project. With verification of the simulations, it is possible to perform initial evaluation studies of scatter correction algorithms by evaluating their mechanisms of action on simulated spectra where the magnitude and sources of scatter are known. (author)

  14. TRAN.1 - a code for transient analysis of temperature distribution in a nuclear fuel channel

    International Nuclear Information System (INIS)

    Bukhari, K.M.

    1990-09-01

    A computer program has been written in FORTRAN that solves the time-dependent energy conservation equations in a nuclear fuel channel. As output, the program gives the temperature distribution in the fuel, cladding and coolant as a function of space and time. Stability criteria have also been developed. A set of finite-difference equations for the steady-state temperature distribution has also been incorporated in the program. A number of simplifications have been made in this version: TRAN.1 uses constant thermodynamic properties and a constant heat transfer coefficient at the fuel-cladding gap, neglects phase change and pressure loss in the coolant, and ignores changes in properties due to changes in burnup, etc. These effects are now in the process of being included in the program. The current version should therefore be taken as a preliminary model of a fuel channel, and this report should be considered a status report on the program. (orig./A.B.)
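
    To illustrate the kind of scheme TRAN.1 implements, the sketch below (Python) solves one-dimensional transient conduction in a fuel slab with an explicit finite-difference method, choosing the time step from the usual explicit stability criterion (Fourier number alpha*dt/dx**2 <= 1/2). All property values and boundary conditions are illustrative placeholders, not TRAN.1's actual fuel/cladding/coolant model.

        import numpy as np

        k, rho, cp = 3.0, 10500.0, 300.0   # W/m-K, kg/m3, J/kg-K (UO2-like)
        q_vol = 3.0e8                      # volumetric heat source, W/m3
        L, n = 5.0e-3, 51                  # slab half-thickness (m), nodes
        dx = L / (n - 1)
        alpha = k / (rho * cp)
        dt = 0.4 * dx**2 / alpha           # satisfies alpha*dt/dx**2 <= 0.5

        T = np.full(n, 600.0)              # initial temperature, K
        for _ in range(2000):
            Tn = T.copy()
            T[1:-1] = (Tn[1:-1]
                       + alpha * dt / dx**2 * (Tn[2:] - 2*Tn[1:-1] + Tn[:-2])
                       + q_vol * dt / (rho * cp))
            T[0] = T[1]                    # symmetry at the centreline
            T[-1] = 600.0                  # fixed coolant-side temperature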

  15. Characterization of a Distributed Plasma Ionization Source (DPIS) for Ion Mobility Spectrometry and Mass Spectrometry

    International Nuclear Information System (INIS)

    Waltman, Melanie J.; Dwivedi, Prabha; Hill, Herbert; Blanchard, William C.; Ewing, Robert G.

    2008-01-01

    A recently developed atmospheric pressure ionization source, the distributed plasma ionization source (DPIS), was characterized and compared to commonly used atmospheric pressure ionization sources with both mass spectrometry and ion mobility spectrometry. The source consists of two electrodes of different sizes separated by a thin dielectric. Application of a high RF voltage across the electrodes generates a plasma in air, yielding both positive and negative ions depending on the polarity of the applied potential. These reactant ions subsequently ionize the analyte vapors. The reactant ions generated are similar to those created in a conventional point-to-plane corona discharge ion source. The positive reactant ions generated by the source were mass-identified as solvated protons of general formula (H2O)nH+, with (H2O)2H+ as the most abundant reactant ion. The negative reactant ions produced were mass-identified primarily as CO3-, NO3-, NO2-, O3- and O2- with various relative intensities. The predominant ion and the relative ion ratios varied depending upon the source construction and supporting gas flow rates. A few compounds, including drugs, explosives and environmental pollutants, were selected to evaluate the new ionization source. The source was operated continuously for several months; although deterioration was observed visually, the source continued to produce ions at a rate similar to that of the initial conditions. The results indicate that the DPIS may have a longer operating life than a conventional corona discharge.

  16. Distributed chemical computing using ChemStar: an open source java remote method invocation architecture applied to large scale molecular data from PubChem.

    Science.gov (United States)

    Karthikeyan, M; Krishnan, S; Pandey, Anil Kumar; Bender, Andreas; Tropsha, Alexander

    2008-04-01

    We present the application of a Java remote method invocation (RMI) based open source architecture to distributed chemical computing. This architecture was previously employed for distributed data harvesting of chemical information from the Internet via the Google application programming interface (API; ChemXtreme). Due to its open source character and its flexibility, the underlying server/client framework can be quickly adapted to virtually every computational task that can be parallelized. Here, we present the server/client communication framework as well as an application to distributed computing of chemical properties on a large scale (currently the size of PubChem; about 18 million compounds), using both the Marvin toolkit and the open source JOELib package. As an application, the agreement of log P and TPSA between the packages was compared for this set of compounds. Outliers were found to be mostly non-druglike compounds, and differences could usually be explained by differences in the underlying algorithms. ChemStar is the first open source distributed chemical computing environment built on Java RMI, and it is easily adaptable to user demands due to its "plug-in architecture". The complete source codes as well as calculated properties along with links to PubChem resources are available on the Internet via a graphical user interface at http://moltable.ncl.res.in/chemstar/.

  17. Power Law Distributions in the Experiment for Adjustment of the Ion Source of the NBI System

    International Nuclear Information System (INIS)

    Han Xiaopu; Hu Chundong

    2005-01-01

    The empirical adjustment process in an experiment on the ion source of the neutral beam injector system for the HT-7 Tokamak is reported in this paper. For data obtained under the same conditions, when the arc current intensities of all shots are arranged in decaying rank order, the distributions of arc current intensity follow power laws, and the distribution obtained under the condition with the cryo-pump corresponds to a double Pareto distribution. Using a similar analysis, the distributions of arc duration are close to power laws as well. These power law distributions arise rather naturally instead of being the result of purposeful seeking.
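
    The rank-ordering analysis described above takes only a few lines of Python; the shot data here are synthetic placeholders, and a least-squares line on the log-log rank plot is only a rough estimator of the power-law exponent.

        import numpy as np

        rng = np.random.default_rng(1)
        arc_currents = rng.pareto(2.0, 500) + 1.0     # placeholder shot data
        ranked = np.sort(arc_currents)[::-1]          # decay rank: largest first
        rank = np.arange(1, ranked.size + 1)

        slope, _ = np.polyfit(np.log(rank), np.log(ranked), 1)
        print(f"rank-plot power-law exponent: {slope:.2f}")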

  18. Evaluation of the dose distribution for prostate implants using various 125I and 103Pd sources

    International Nuclear Information System (INIS)

    Meigooni, Ali S.; Luerman, Christine M.; Sowards, Keith T.

    2009-01-01

    Recently, several different models of 125I and 103Pd brachytherapy sources have been introduced in order to meet the increasing demand for prostate seed implants. These sources have different internal structures; hence, their TG-43 dosimetric parameters are not the same. In this study, the effects of the dosimetric differences among the sources on their clinical applications were evaluated. The quantitative and qualitative evaluations were performed by comparisons of dose distributions and dose volume histograms of prostate implants calculated for various designs of 125I and 103Pd sources. These comparisons were made for an identical implant scheme with the same number of seeds for each source. The results were compared with the Amersham model 6711 seed for 125I and the Theragenics model 200 seed for 103Pd using the same implant scheme.

  19. Angular and mass resolved energy distribution measurements with a gallium liquid metal ion source

    International Nuclear Information System (INIS)

    Marriott, Philip

    1987-06-01

    Ionisation and energy broadening mechanisms relevant to liquid metal ion sources are discussed. A review of experimental results giving a picture of source operation, and a discussion of the emission mechanisms thought to occur for the ionic species and droplets emitted, is presented. Further work is suggested by this review, and an analysis system for angular and mass resolved energy distribution measurements of liquid metal ion source beams has been constructed. The energy analyser has been calibrated, and a series of measurements, both on and off the beam axis, of 69Ga+, Ga++ and Ga2+ ions emitted at various currents from a gallium source has been performed. A comparison is made between these results and published work where possible, and the results are discussed with the aim of determining the emission and energy spread mechanisms operating in the gallium liquid metal ion source. (author)

  20. A k-distribution-based radiation code and its computational optimization for an atmospheric general circulation model

    International Nuclear Information System (INIS)

    Sekiguchi, Miho; Nakajima, Teruyuki

    2008-01-01

    The gas absorption process scheme in the broadband radiative transfer code 'mstrn8', which is used to calculate atmospheric radiative transfer efficiently in a general circulation model, is improved. Three major improvements are made. The first is an update of the database of line absorption parameters and the continuum absorption model. The second is a change to the definition of the selection rule for gas absorption used to choose which absorption bands to include. The last is an upgrade of the optimization method used to decrease the number of quadrature points used for numerical integration in the correlated k-distribution approach, thereby realizing higher computational efficiency without losing accuracy. The new radiation package, termed 'mstrnX', computes radiation fluxes and heating rates with errors of less than 0.6 W/m² and 0.3 K/day, respectively, through the troposphere and the lower stratosphere for all standard AFGL atmospheres. A serious cold bias problem of an atmospheric general circulation model using the ancestor code 'mstrn8' is almost solved by the upgrade to 'mstrnX'.
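
    The computational gain of the correlated k-distribution approach can be illustrated with a toy band calculation: re-sorting the absorption coefficients within a band into a smooth cumulative distribution g(k) lets a handful of quadrature points reproduce a many-point line-by-line band mean. The line data below are random placeholders, and the equal-weight binning is a far cruder quadrature than the optimized one used in 'mstrnX'.

        import numpy as np

        rng = np.random.default_rng(0)
        k_nu = rng.lognormal(mean=-2.0, sigma=2.0, size=20000)  # fake k(nu)
        u = 1.0                                                 # absorber amount

        exact = np.exp(-k_nu * u).mean()        # "line-by-line" band mean

        k_sorted = np.sort(k_nu)                # sorting defines g(k)
        n_quad = 8                              # a few g-space quadrature points
        edges = np.linspace(0, k_nu.size, n_quad + 1).astype(int)
        k_rep = np.array([k_sorted[a:b].mean()
                          for a, b in zip(edges[:-1], edges[1:])])
        approx = np.exp(-k_rep * u).mean()      # equal-weight quadrature

        print(f"line-by-line {exact:.4f} vs {n_quad}-point k-distribution {approx:.4f}")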

  1. Calculation of the power distribution in the fuel rods of the low power research reactor using the MCNP4C code

    International Nuclear Information System (INIS)

    Dawahra, S.; Khattab, K.

    2012-01-01

    The Monte Carlo method, using the MCNP4C code, was used in this paper to calculate the power distribution in 3-D geometry in the fuel rods of the Syrian Miniature Neutron Source Reactor (MNSR). To normalize the MCNP4C results to the steady-state nominal thermal power, an appropriate scaling factor was defined so that the power distribution could be calculated precisely. The maximum individual rod power, 105 W, was found in fuel ring number 2, and the minimum, 79.9 W, in fuel ring number 9. The total power in the fuel rods was 30.9 kW, which agrees very well with the nominal power of 30 kW reported in the reactor safety analysis report. Finally, the peak power factors, defined as the ratios of the maximum to the average and of the maximum to the minimum rod power, were calculated to be 1.18 and 1.31, respectively. (author)
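
    A common form of the scaling factor mentioned above converts per-source-particle tallies into absolute power: at power P, the number of source neutrons per second is S = P*nu_bar/(k_eff*E_f). The sketch below applies it with illustrative numbers; the tally value is hypothetical, not taken from the MNSR study.

        P = 30.0e3                      # nominal thermal power, W
        E_f = 200.0e6 * 1.602e-19       # ~200 MeV per fission, in joules
        nu_bar = 2.44                   # neutrons per thermal U-235 fission
        k_eff = 1.0                     # critical, steady state

        S = P * nu_bar / (k_eff * E_f)  # source neutrons emitted per second
        tally = 4.6e-14                 # hypothetical MCNP result: joules
                                        # deposited in one rod per source neutron
        print(f"rod power = {tally * S:.0f} W")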

  2. Specific model for a gas distribution analysis in the containment at Almaraz NPP using GOTHIC computer code

    International Nuclear Information System (INIS)

    García González, M.; García Jiménez, P.; Martínez Domínguez, F.

    2016-01-01

    To carry out an analysis of the distribution of gases within the containment building at the CN Almaraz site, a simulation model built with the thermohydraulic GOTHIC [1] code has been used, assessed with a gas control system based on passive autocatalytic recombiners (PARs). The model is used to test the effectiveness of the gas control systems to be used at the Almaraz Nuclear Power Plant, Units I&II (Caceres, Spain, 1,035 MW and 1,044 MW), and must confirm the location and number of the recombiners proposed for installation. It is an essential function of the gas control system to avoid any formation of explosive atmospheres by reducing and limiting the concentration of combustible gases during an accident, thus maintaining the integrity of the containment. The model considers severe accident scenarios with specific conditions that produce the most onerous generation of combustible gases.

  3. RAGRAF: a computer code for calculating temperature distributions in multi-pin fuel assemblies in a stagnant gas atmosphere

    International Nuclear Information System (INIS)

    Eastham, A.

    1979-02-01

    A method of calculating the temperature distribution in a cross-section of a multi-pin nuclear reactor fuel assembly has been computerised. It utilises the thermal radiation interchange between individual fuel pins in either a square- or triangular-pitched lattice. A stagnant gas atmosphere within the fuel assembly is assumed, which inhibits natural convection but permits thermal conduction between adjacent fuel pins. No restriction is placed upon the shape of wrapper used, but its temperature must always be uniform. RAGRAF has great flexibility because of the many options it provides. Although essentially a transient code, steady state solutions may be readily identified from successive temperature prints. An enclosure for the assembly wrapper is available, to be included or discarded at will during transient calculations. Outside the limit of the assembly wrapper, any type or combination of heat transfer modes may be included. Transient variations in boundary temperature may be included if required. (author)

  4. Light source distribution and scattering phase function influence light transport in diffuse multi-layered media

    Science.gov (United States)

    Vaudelle, Fabrice; L'Huillier, Jean-Pierre; Askoura, Mohamed Lamine

    2017-06-01

    Red and near-infrared light is often used as a diagnostic and imaging probe for highly scattering media such as biological tissues, fruits and vegetables. Part of the diffusively reflected light gives interesting information related to the tissue subsurface, whereas light recorded at further distances may probe deeper into the interrogated turbid tissues. However, modelling diffusive events occurring at short source-detector distances requires consideration of both the distribution of the light sources and the scattering phase functions. In this report, a modified Monte Carlo model is used to compute light transport in curved and multi-layered tissue samples which are covered with a thin and highly diffusing tissue layer. Different light source distributions (ballistic, diffuse or Lambertian) are tested with specific scattering phase functions (modified or unmodified Henyey-Greenstein, Gegenbauer and Mie) to compute the amount of backscattered and transmitted light in apple and human skin structures. Comparisons between simulation results and experiments carried out with a multispectral imaging setup confirm the soundness of the theoretical strategy and may explain the role of the skin on light transport in whole and half-cut apples. Other computational results show that a Lambertian source distribution combined with a Henyey-Greenstein phase function provides a higher photon density in the stratum corneum than in the upper dermis layer. Furthermore, it is also shown that the scattering phase function may affect the shape and the magnitude of the Bidirectional Reflectance Distribution Function (BRDF) exhibited at the skin surface.
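
    One concrete ingredient named above is the Henyey-Greenstein phase function. A Monte Carlo photon-transport kernel samples scattering angles from it by inverting its cumulative distribution; the sketch below shows that standard sampling step only (unmodified Henyey-Greenstein, not the authors' full model).

        import random

        def sample_hg_cos_theta(g, rng=random.random):
            """Inverse-CDF sample of cos(theta) for anisotropy factor g."""
            xi = rng()
            if abs(g) < 1e-6:                    # isotropic limit
                return 2.0 * xi - 1.0
            frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
            return (1.0 + g * g - frac * frac) / (2.0 * g)

        # Biological tissue is strongly forward scattering (g of roughly
        # 0.8-0.95), so most sampled cosines cluster near +1.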

  5. The dislocation distribution function near a crack tip generated by external sources

    International Nuclear Information System (INIS)

    Lung, C.W.; Deng, K.M.

    1988-06-01

    The dislocation distribution function near a crack tip generated by external sources is calculated. It is similar in shape to the curves calculated for the crack-tip emission case, but the quantitative difference is quite large. The image force enlarges the negative dislocation zone but does not change the form of the curve. (author). 10 refs, 3 figs

  6. Distribution of hadron intranuclear cascade for large distance from a source

    International Nuclear Information System (INIS)

    Bibin, V.L.; Kazarnovskij, M.V.; Serezhnikov, S.V.

    1985-01-01

    An analytical solution of the problem of three-component hadron cascade development at large distances from a source is obtained under a series of simplifying assumptions. It makes it possible to understand the physical mechanisms of the process studied and to obtain approximate asymptotic expressions for the hadron distribution functions.

  7. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    Science.gov (United States)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed, within an integrated GIS modeling environment, a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas, and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provides a physically based method that gives realistic results for watersheds with VSA hydrology.
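
    For reference, the traditional SCS-CN relation that the distributed CN-VSA method re-applies is short enough to state directly; the following is a plain transcription of the textbook formulas (English units, depths in inches), not the authors' GIS implementation.

        def scs_runoff(P, CN, ia_ratio=0.2):
            """Runoff depth Q (in) from rainfall P (in) and curve number CN."""
            S = 1000.0 / CN - 10.0      # potential maximum retention
            Ia = ia_ratio * S           # initial abstraction
            if P <= Ia:
                return 0.0
            return (P - Ia) ** 2 / (P - Ia + S)

        # Example: scs_runoff(3.0, 75) gives about 0.96 inches of runoff.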

  8. The Space-, Time-, and Energy-distribution of Neutrons from a Pulsed Plane Source

    Energy Technology Data Exchange (ETDEWEB)

    Claesson, Arne

    1962-05-15

    The space-, time- and energy-distribution of neutrons from a pulsed, plane, high-energy source in an infinite medium is determined in a diffusion approximation. For simplicity, the moderator is first assumed to be hydrogen gas, but it is also shown that the method can be used for a moderator of arbitrary mass.

  9. A New Method for the 2D DOA Estimation of Coherently Distributed Sources

    Directory of Open Access Journals (Sweden)

    Liang Zhou

    2014-03-01

    The purpose of this paper is to develop a new technique for estimating the two-dimensional (2D) direction-of-arrivals (DOAs) of coherently distributed (CD) sources, which can effectively estimate the central azimuth and central elevation of CD sources at a lower computational cost. Using a special L-shaped array, a new approach for parametric estimation of CD sources is proposed. The proposed method is based on two rotational invariance relations under a small angular approximation, and estimates the two rotational matrices which depict these relations using a propagator technique. The central DOA estimates are then obtained by utilizing the primary diagonal elements of the two rotational matrices. Simulation results indicate that the proposed method exhibits good performance under small angular spread and can be applied to multisource scenarios where different sources may have different angular distribution shapes. Without any peak-finding search or eigendecomposition of the high-dimensional sample covariance matrix, the proposed method has a significantly reduced computational cost compared with existing methods, and is thus beneficial to real-time processing and engineering realization. In addition, our approach is also a robust estimator which does not depend on the angular distribution shape of the CD sources.

  10. Establishment of a Practical Approach for Characterizing the Source of Particulates in Water Distribution Systems

    Directory of Open Access Journals (Sweden)

    Seon-Ha Chae

    2016-02-01

    Water quality complaints related to particulate matter and discolored water can be troublesome for water utilities in terms of follow-up investigations and implementation of appropriate actions, because particulate matter can enter from a variety of sources; moreover, physicochemical processes can affect the water quality during the purification and transportation processes. The origin of particulates can be attributed to sources such as background organic/inorganic materials from water sources, water treatment plants, deteriorated water distribution pipelines, and rehabilitation activities in the water distribution systems. In this study, a practical method is proposed for tracing particulate sources. The method entails collecting information related to hydraulic, water quality, and structural conditions, employing a network flow-path model, and establishing a database of physicochemical properties for tubercles and slimes. The proposed method was implemented within two city water distribution systems located in Korea. These applications were conducted to demonstrate the practical applicability of the method for providing solutions to customer complaints. The results of the field studies indicated that the proposed method is feasible for investigating the sources of particulates and for preparing appropriate action plans for complaints related to particulate matter.

  11. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    Energy Technology Data Exchange (ETDEWEB)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene, E-mail: mertsch@nbi.ku.dk, E-mail: mohamed.rameez@nbi.ku.dk, E-mail: tamborra@nbi.ku.dk [Niels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2017-03-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10⁻⁶ Mpc⁻³ and neutrino luminosity L_ν ≲ 10⁴² erg s⁻¹ (10⁴¹ erg s⁻¹) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.

  12. Modelling and distribution of neutron flux in a radium-beryllium source (226Ra-Be)

    Science.gov (United States)

    Didi, Abdessamad; Dadouch, Ahmed; Jai, Otman

    2017-09-01

    The Monte Carlo N-Particle code (MCNP-6) was used to analyze the thermal, epithermal and fast neutron fluxes of a 3-millicurie radium-beryllium source, in order to determine many materials qualitatively and quantitatively by the method of neutron activation analysis. The radium-beryllium neutron source is established for practical work and research in the nuclear field. The main objective of this work was to obtain the flux profile of radium-beryllium irradiation; this theoretical study permits discussion of the optimal irradiation design and performance, to enhance the facility's research and education in nuclear physics.
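
    Flux profiles like these feed directly into the standard activation equation of NAA. The sketch below evaluates the induced activity of an irradiated sample; all inputs are placeholders, not data from the paper.

        import math

        BARN = 1e-24  # cm^2

        def induced_activity(phi, sigma_b, n_atoms, half_life_s, t_irr_s):
            """A = phi * sigma * N * (1 - exp(-lambda * t_irr)), in becquerels."""
            lam = math.log(2.0) / half_life_s
            return phi * sigma_b * BARN * n_atoms * (1.0 - math.exp(-lam * t_irr_s))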

  13. FDTD verification of deep-set brain tumor hyperthermia using a spherical microwave source distribution

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, D. [20th Intelligence Squadron, Offutt AFB, NE (United States); Rappaport, C.M. [Northeastern Univ., Boston, MA (United States). Center for Electromagnetics Research; Terzuoli, A.J. Jr. [Air Force Inst. of Tech., Dayton, OH (United States). Graduate School of Engineering

    1996-10-01

    Although the use of noninvasive microwave hyperthermia to treat cancer is problematic in many human body structures, careful selection of the source electric field distribution around the entire surface of the head can generate a tightly focused global power density maximum at the deepest point within the brain. An analytic prediction of the optimum volume field distribution in a layered concentric head model, based on summing spherical harmonic modes, is derived and presented. This ideal distribution is then verified using a three-dimensional finite difference time domain (FDTD) simulation with a discretized, MRI-based head model excited by the spherical source. The numerical computation gives a dissipated power pattern very similar to the analytic prediction. This study demonstrates that microwave hyperthermia can theoretically be a feasible cancer treatment modality for tumors in the head, providing a well-resolved hot-spot at depth without overheating any other healthy tissue.
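
    To illustrate the numerical scheme named in the abstract, here is a bare-bones one-dimensional Yee/FDTD loop in normalized units with a soft Gaussian source; the study itself uses a full three-dimensional, MRI-based head model, which this sketch does not attempt to reproduce.

        import numpy as np

        nz, nt = 200, 500
        ez = np.zeros(nz)        # electric field on integer grid points
        hy = np.zeros(nz - 1)    # magnetic field on half-grid points

        for n in range(nt):
            hy += np.diff(ez)                    # H update (normalized units)
            ez[1:-1] += np.diff(hy)              # E update; ends held at zero
            ez[nz // 2] += np.exp(-((n - 30.0) / 10.0) ** 2)  # soft source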

  14. Use of Monte Carlo Methods in the modeling of the dose/kerma distribution of natural radioactive sources: First studies

    Energy Technology Data Exchange (ETDEWEB)

    Bezerra, Luis R.A.; Vieira, Jose W.; Amaral, Romilton dos S.; Santos Junior, Jose A. dos; Silva, Arykerne N.C. da; Silva, Alberto A. da; Damascena, Kennedy F.; Santos Junior, Otavio P.; Medeiros, Nilson V.S.; Santos, Josineide M.N. dos, E-mail: jaraujo@ufpe.br, E-mail: romilton@ufpe.br, E-mail: kennedy.eng.ambiental@gmail.com, E-mail: nvsmedeiros@gmail.com, E-mail: josineide.santos@ufpe.br, E-mail: arykerne.silva@ufpe.br, E-mail: luis.rodrigo@vitoria.ifpe.edu.br, E-mail: otavio.santos@vitoria.ifpe.edu.br, E-mail: jose.wilson@recife.ifpe.edu.br, E-mail: alberto.silva@barreiros.ifpe.edu.br, E-mail: jose.wilson59@uol.com.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco (IFPE), PE (Brazil); Universidade de Pernambuco (UPE), Recife, PE (Brazil)

    2017-11-01

    One of the means of exposure to which the world population is subjected daily is natural radiation, which covers exposure to sources of cosmic origin and terrestrial origin, the latter accounting for about 84.1% of all exposure due to natural radiation. Some research groups have been estimating the dose distribution in the radiosensitive organs and tissues of people exposed to gamma radiation using Computational Exposure Models (MCEs). An MCE is composed, fundamentally, of an anthropomorphic simulator (phantom), a Monte Carlo code and a radioactive source algorithm. The Computational Dosimetry and Embedded Systems group (DCSE), together with the Radioecology group (RAE), has been developing a variety of MCEs to simulate exposure to natural environmental gamma radiation. Such models estimate the dose distribution absorbed by the organs and tissues radiosensitive to ionizing radiation from a flat portion of the ground, in which photons emerge from within a circle of radius r, reaching a person in an orthostatic position centered on the circumference. In this work we investigated the exposure of an individual to a radioactive cloud of gamma emission from potassium-40, which emits a characteristic photon of energy 1461 keV. The number of histories was optimized to obtain dose/kerma values in air with low dispersion and viable computational time on the available PCs, statistically validating the results. To do so, the MCE MSTA, composed of the MASH (Male Adult meSH) phantom in an orthostatic position coupled to the EGSnrc code, was adapted with the planar source algorithm. (author)

  15. Assessment of gamma irradiation heating and damage in miniature neutron source reactor vessel using computational methods and SRIM - TRIM code

    International Nuclear Information System (INIS)

    Appiah-Ofori, F. F.

    2014-07-01

    The effects of gamma radiation heating and irradiation damage in the reactor vessel of Ghana Research Reactor 1 (GHARR-1), a Miniature Neutron Source Reactor, were assessed using an implicit control volume finite difference numerical computation and validated by the SRIM - TRIM code. It was assumed that 5.0 MeV gamma rays from the reactor core generate heat, which interacts with and is absorbed completely by the interior surface of the MNSR vessel, affecting its performance through the induced displacement damage. This displacement damage is the result of lattice defects being created, which impair the vessel through the formation of point defect clusters such as vacancies and interstitials; these can result in dislocation loops and networks, voids and bubbles, causing changes in the layers of the thickness of the vessel. The microscopic defects produced in the vessel due to γ-radiation damage are referred to as radiation damage, while the modifications these defects produce in the macroscopic properties of the vessel are known as radiation effects. These radiation damage effects are of major concern for materials used in nuclear energy production. In this study, the overall objective was to assess the effects of gamma radiation heating and damage in the GHARR-1 MNSR vessel by a well-developed mathematical model, with analytical and numerical solutions, simulating the radiation damage in the vessel. The SRIM - TRIM code was used as a computational tool to determine the displacements per atom (dpa) associated with radiation damage, while the implicit control volume finite difference method was used to determine the temperature profile within the vessel due to γ-radiation heating. The methodology adopted in assessing γ-radiation heating in the vessel involved development of the one-dimensional steady-state Fourier heat conduction equation with volumetric heat generation, using both an analytical and an implicit control volume finite difference approach to determine the maximum temperature and
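
    The conduction part of such an analysis reduces to the steady one-dimensional Fourier equation with volumetric heating. A condensed finite-difference version is sketched below, with all values illustrative and far simpler than the control-volume formulation described in the thesis.

        import numpy as np

        def temperature_profile(n, thickness, k, q_vol, t_left, t_right):
            """Solve k*T'' + q''' = 0 with fixed surface temperatures."""
            dx = thickness / (n - 1)
            A = np.zeros((n, n))
            b = np.full(n, -q_vol * dx * dx / k)   # interior source term
            A[0, 0] = A[-1, -1] = 1.0
            b[0], b[-1] = t_left, t_right
            for i in range(1, n - 1):
                A[i, i - 1] = A[i, i + 1] = 1.0
                A[i, i] = -2.0
            return np.linalg.solve(A, b)

        # The analytic peak for a symmetrically cooled slab,
        # T_max = T_s + q''' * L**2 / (8 * k), is a handy check.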

  16. A new open-source code for spherically symmetric stellar collapse to neutron stars and black holes

    International Nuclear Information System (INIS)

    O'Connor, Evan; Ott, Christian D

    2010-01-01

    We present the new open-source spherically symmetric general-relativistic (GR) hydrodynamics code GR1D. It is based on the Eulerian formulation of GR hydrodynamics (GRHD) put forth by Romero-Ibanez-Gourgoulhon and employs radial-gauge, polar-slicing coordinates in which the 3+1 equations simplify substantially. We discretize the GRHD equations with a finite-volume scheme, employing piecewise-parabolic reconstruction and an approximate Riemann solver. GR1D is intended for the simulation of stellar collapse to neutron stars and black holes and will also serve as a testbed for modeling technology to be incorporated in multi-D GR codes. Its GRHD part is coupled to various finite-temperature microphysical equations of state in tabulated form that we make available with GR1D. An approximate deleptonization scheme for the collapse phase and a neutrino-leakage/heating scheme for the postbounce epoch are included and described. We also derive the equations for effective rotation in 1D and implement them in GR1D. We present an array of standard test calculations and also show how simple analytic equations of state in combination with presupernova models from stellar evolutionary calculations can be used to study qualitative aspects of black hole formation in failing rotating core-collapse supernovae. In addition, we present a simulation with microphysical equations of state and neutrino leakage/heating of a failing core-collapse supernova and black hole formation in a presupernova model of a 40 M⊙ zero-age main-sequence star. We find good agreement on the time of black hole formation (within 20%) and last stable protoneutron star mass (within 10%) with predictions from simulations with full Boltzmann neutrino radiation hydrodynamics.

  17. A new open-source code for spherically symmetric stellar collapse to neutron stars and black holes

    Energy Technology Data Exchange (ETDEWEB)

    O'Connor, Evan; Ott, Christian D, E-mail: evanoc@tapir.caltech.ed, E-mail: cott@tapir.caltech.ed [TAPIR, Mail Code 350-17, California Institute of Technology, Pasadena, CA 91125 (United States)

    2010-06-07

    We present the new open-source spherically symmetric general-relativistic (GR) hydrodynamics code GR1D. It is based on the Eulerian formulation of GR hydrodynamics (GRHD) put forth by Romero-Ibanez-Gourgoulhon and employs radial-gauge, polar-slicing coordinates in which the 3+1 equations simplify substantially. We discretize the GRHD equations with a finite-volume scheme, employing piecewise-parabolic reconstruction and an approximate Riemann solver. GR1D is intended for the simulation of stellar collapse to neutron stars and black holes and will also serve as a testbed for modeling technology to be incorporated in multi-D GR codes. Its GRHD part is coupled to various finite-temperature microphysical equations of state in tabulated form that we make available with GR1D. An approximate deleptonization scheme for the collapse phase and a neutrino-leakage/heating scheme for the postbounce epoch are included and described. We also derive the equations for effective rotation in 1D and implement them in GR1D. We present an array of standard test calculations and also show how simple analytic equations of state in combination with presupernova models from stellar evolutionary calculations can be used to study qualitative aspects of black hole formation in failing rotating core-collapse supernovae. In addition, we present a simulation with microphysical equations of state and neutrino leakage/heating of a failing core-collapse supernova and black hole formation in a presupernova model of a 40 M⊙ zero-age main-sequence star. We find good agreement on the time of black hole formation (within 20%) and last stable protoneutron star mass (within 10%) with predictions from simulations with full Boltzmann neutrino radiation hydrodynamics.

  18. CONHOR. Code system for determination of power distribution and burnup for the HOR reactor. Version 1.0.. User's manual

    International Nuclear Information System (INIS)

    Serov, I.V.; Hoogenboom, J.E.

    1993-07-01

    The main calculational tool is the CITATION code. CITATION is used for both static and burnup calculations. The pointwise flux density and power distributions obtained from these calculations are used to obtain the values of the desired quantities at the beginning of a burnup cycle. To obtain the most trustworthy values of the desired quantities, CONHOR employs experimental information together with the CITATION-calculated flux distributions. Axially averaged foil activation rates are obtained from both the CITATION pointwise flux density distributions and the measured foil activity counts. These two sets of activation rates are called the distributions of auxiliary quantities and are compared with each other in order to derive corrections to the U-235 number densities in the fuel-containing elements. The methodical corrections to the calculated auxiliary quantities are obtained on this basis as well; they are used to obtain the methodical corrections to the desired quantities. The corrected desired quantities are the recommended ones. The correction procedure requires knowledge of the sensitivity coefficients of the average foil activation rates with respect to the U-235 number densities (throughout this manual, U-235 is also denoted, especially in the input-output description sections, as a BUrning-COrrected material, or 'BuCo' material). These sensitivity coefficients are calculated by the CONHOR SENS module. CITATION is employed to perform the calculations with perturbed values of the U-235 number densities. Burnup calculations can be performed based on either corrected or uncorrected U-235 number densities. Throughout this manual, XXXX means a 4-symbol identification of the burnup cycle to be studied; XX-1 and XX+1 mean, correspondingly, the previous and the following cycles. (orig./HP)

  19. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Junjie Ma

    2018-02-01

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.

  20. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks.

    Science.gov (United States)

    Ma, Junjie; Meng, Fansheng; Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-02-16

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.
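
    The generic particle swarm core that Dual-PSO applies twice (once per node on the spectra, once across nodes for the source position) looks as follows; this is a plain textbook PSO, not the authors' exact algorithm.

        import random

        def pso(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, lo=-1.0, hi=1.0):
            """Minimize f over [lo, hi]^dim with a basic particle swarm."""
            pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
            vel = [[0.0] * dim for _ in range(n)]
            best = [p[:] for p in pos]                 # per-particle best
            gbest = min(best, key=f)[:]                # swarm-wide best
            for _ in range(iters):
                for i in range(n):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * r1 * (best[i][d] - pos[i][d])
                                     + c2 * r2 * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
                    if f(pos[i]) < f(best[i]):
                        best[i] = pos[i][:]
                gbest = min(best + [gbest], key=f)[:]
            return gbest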

  1. Radial dose distribution of 192Ir and 137Cs seed sources

    International Nuclear Information System (INIS)

    Thomason, C.; Higgins, P.

    1989-01-01

    The radial dose distributions in water around ¹⁹²Ir seed sources with both platinum and stainless steel encapsulation have been measured using LiF thermoluminescent dosimeters (TLDs) at distances of 1 to 12 cm along the perpendicular bisector of the source, to determine the effect of source encapsulation. Similar measurements have also been made around a ¹³⁷Cs seed source of comparable dimensions. The data were fit to a third-order polynomial to obtain an empirical equation for the radial dose factor, which can then be used in dosimetry. The coefficients of this equation for each of the three sources are given. The radial dose factor of the stainless steel encapsulated ¹⁹²Ir source and that of the platinum encapsulated ¹⁹²Ir source agree to within 2%. The radial dose distributions measured here for ¹⁹²Ir with either type of encapsulation and for ¹³⁷Cs are indistinguishable from those of other authors when the uncertainties involved are considered. For clinical dosimetry based on isotropic point or line source models, any of these equations may be used without significantly affecting accuracy.
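
    The empirical fit described above is a routine polynomial regression. A sketch with NumPy follows; the distances and dose factors below are placeholders, not the published TLD data.

        import numpy as np

        r = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])         # cm
        rdf = np.array([1.00, 0.99, 0.97, 0.94, 0.90, 0.85, 0.80])  # illustrative

        coeffs = np.polyfit(r, rdf, 3)          # third-order fit: a3..a0
        radial_dose_factor = np.poly1d(coeffs)
        print(radial_dose_factor(5.0))          # interpolated value at 5 cm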

  2. LIGHT CURVES OF CORE-COLLAPSE SUPERNOVAE WITH SUBSTANTIAL MASS LOSS USING THE NEW OPEN-SOURCE SUPERNOVA EXPLOSION CODE (SNEC)

    International Nuclear Information System (INIS)

    Morozova, Viktoriya; Renzo, Mathieu; Ott, Christian D.; Clausen, Drew; Couch, Sean M.; Ellis, Justin; Roberts, Luke F.; Piro, Anthony L.

    2015-01-01

    We present the SuperNova Explosion Code (SNEC), an open-source Lagrangian code for the hydrodynamics and equilibrium-diffusion radiation transport in the expanding envelopes of supernovae. Given a model of a progenitor star, an explosion energy, and an amount and distribution of radioactive nickel, SNEC generates the bolometric light curve, as well as the light curves in different broad bands assuming blackbody emission. As a first application of SNEC, we consider the explosions of a grid of 15 M⊙ (at zero-age main sequence, ZAMS) stars whose hydrogen envelopes are stripped to different extents and at different points in their evolution. The resulting light curves exhibit plateaus with durations of ∼20–100 days if ≳1.5–2 M⊙ of hydrogen-rich material is left and no plateau if less hydrogen-rich material is left. If these shorter plateau lengths are not seen for SNe IIP in nature, it suggests that, at least for ZAMS masses ≲20 M⊙, hydrogen mass loss occurs as an all or nothing process. This perhaps points to the important role binary interactions play in generating the observed mass-stripped supernovae (i.e., Type Ib/c events). These light curves are also unlike what is typically seen for SNe IIL, arguing that simply varying the amount of mass loss cannot explain these events. The most stripped models begin to show double-peaked light curves similar to what is often seen for SNe IIb, confirming previous work that these supernovae can come from progenitors that have a small amount of hydrogen and a radius of ∼500 R⊙.

  3. LIGHT CURVES OF CORE-COLLAPSE SUPERNOVAE WITH SUBSTANTIAL MASS LOSS USING THE NEW OPEN-SOURCE SUPERNOVA EXPLOSION CODE (SNEC)

    Energy Technology Data Exchange (ETDEWEB)

    Morozova, Viktoriya; Renzo, Mathieu; Ott, Christian D.; Clausen, Drew; Couch, Sean M.; Ellis, Justin; Roberts, Luke F. [TAPIR, Walter Burke Institute for Theoretical Physics, MC 350-17, California Institute of Technology, Pasadena, CA 91125 (United States); Piro, Anthony L., E-mail: morozvs@tapir.caltech.edu [Carnegie Observatories, 813 Santa Barbara Street, Pasadena, CA 91101 (United States)

    2015-11-20

    We present the SuperNova Explosion Code (SNEC), an open-source Lagrangian code for the hydrodynamics and equilibrium-diffusion radiation transport in the expanding envelopes of supernovae. Given a model of a progenitor star, an explosion energy, and an amount and distribution of radioactive nickel, SNEC generates the bolometric light curve, as well as the light curves in different broad bands assuming blackbody emission. As a first application of SNEC, we consider the explosions of a grid of 15 M⊙ (at zero-age main sequence, ZAMS) stars whose hydrogen envelopes are stripped to different extents and at different points in their evolution. The resulting light curves exhibit plateaus with durations of ∼20–100 days if ≳1.5–2 M⊙ of hydrogen-rich material is left and no plateau if less hydrogen-rich material is left. If these shorter plateau lengths are not seen for SNe IIP in nature, it suggests that, at least for ZAMS masses ≲20 M⊙, hydrogen mass loss occurs as an all or nothing process. This perhaps points to the important role binary interactions play in generating the observed mass-stripped supernovae (i.e., Type Ib/c events). These light curves are also unlike what is typically seen for SNe IIL, arguing that simply varying the amount of mass loss cannot explain these events. The most stripped models begin to show double-peaked light curves similar to what is often seen for SNe IIb, confirming previous work that these supernovae can come from progenitors that have a small amount of hydrogen and a radius of ∼500 R⊙.

  4. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    Science.gov (United States)

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-03-12

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple-input multiple-output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimation of signal parameters via rotational invariance techniques (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for ID sources. By constructing the signal subspace, two rotational invariance relationships are constructed. We then extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions for azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm imposes a significantly lower computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangular arrays (URAs), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.
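
    For context, the point-source, one-dimensional ESPRIT that the paper generalizes can be written in a few lines; this is the standard textbook version for a uniform linear array, not the proposed 2D distributed-source algorithm.

        import numpy as np

        def esprit_doa(X, k, d_over_lambda=0.5):
            """Estimate k DOAs (radians) from a sensors-by-snapshots matrix X."""
            R = X @ X.conj().T / X.shape[1]            # sample covariance
            _, vecs = np.linalg.eigh(R)
            Es = vecs[:, -k:]                          # signal subspace
            psi = np.linalg.pinv(Es[:-1]) @ Es[1:]     # rotational invariance
            phases = np.angle(np.linalg.eigvals(psi))
            return np.arcsin(phases / (2.0 * np.pi * d_over_lambda))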

  5. Documentation of Source Code.

    Science.gov (United States)

    1988-05-12

    the "load IC" menu option. A prompt will appear in the typescript window requesting the name of the knowledge base to be loaded. Enter...highlighted and then a prompt will appear in the typescript window. The prompt will be requesting the name of the file containing the message to be read in...the file name, the system will begin reading in the message. The listified message is echoed back in the typescript window. After that, the screen

  6. A practical two-way system of quantum key distribution with untrusted source

    International Nuclear Information System (INIS)

    Chen Ming-Juan; Liu Xiang

    2011-01-01

    The most severe problem of a two-way 'plug-and-play' (p and p) quantum key distribution system is that the source can be controlled by the eavesdropper. This kind of source is defined as an "untrusted source". This paper discusses the effects of the fluctuation of internal transmittance on the final key generation rate and the transmission distance. The security of the standard BB84 protocol, the one-decoy state protocol, and the weak+vacuum decoy state protocol, with untrusted sources and fluctuating internal transmittance, is studied. It is shown that the one-decoy state protocol is sensitive to the statistical fluctuation, while the weak+vacuum decoy state protocol is only slightly affected by it. It is also shown that both the maximum secure transmission distance and the final key generation rate are reduced when the fluctuation of Alice's laboratory transmittance is considered. (general)

  7. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources

    Directory of Open Access Journals (Sweden)

    Xiang Gao

    2016-07-01

    This paper addresses the problem of mapping the odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To address both of the above challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and combines sensors' data at different positions. Initially, a multi-sensor integration method, together with the path of airflow, was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source location simulation and a real experiment are presented.

  8. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key rate model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement-breaking channel, an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for the key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)

  9. The Galactic Distribution of Massive Star Formation from the Red MSX Source Survey

    Science.gov (United States)

    Figura, Charles C.; Urquhart, J. S.

    2013-01-01

    Massive stars inject enormous amounts of energy into their environments in the form of UV radiation and molecular outflows, creating HII regions and enriching the local chemistry. These effects provide feedback mechanisms that aid in regulating star formation in the region and may trigger the formation of subsequent generations of stars. Understanding the mechanics of massive star formation thus presents an important key to understanding this process and its role in shaping the dynamics of galactic structure. The Red MSX Source (RMS) survey is a multi-wavelength investigation of ~1200 massive young stellar objects (MYSOs) and ultra-compact HII (UCHII) regions identified from a sample of colour-selected sources from the Midcourse Space Experiment (MSX) point source catalog and the Two Micron All Sky Survey. We present a study of over 900 MYSOs and UCHII regions investigated by the RMS survey. We review the methods used to determine distances, and investigate the radial galactocentric distribution of these sources in the context of the observed structure of the Galaxy. The distribution of MYSOs and UCHII regions is found to be spatially correlated with the spiral arms and the galactic bar. We examine the radial distribution of MYSOs and UCHII regions, find variations in the star formation rate between the inner and outer Galaxy, and discuss the implications for star formation throughout the galactic disc.

  10. ZOCO VI - a computer code to calculate the time- and space-dependent pressure distribution in full pressure containments of water-cooled reactors

    International Nuclear Information System (INIS)

    Mansfeld, G.

    1974-12-01

    ZOCO VI is a computer code to investigate the time- and space-dependent pressure distribution in full pressure containments of water-cooled nuclear power reactors following a loss-of-coolant accident caused by the rupture of a main coolant or steam line. ZOCO VI is an improved version of the computer code ZOCO V with an enlarged description of condensation events. (orig.) [de]

  11. Implementation of inter-unit analysis for C and C++ languages in a source-based static code analyzer

    Directory of Open Access Journals (Sweden)

    A. V. Sidorin

    2015-01-01

    The proliferation of automated testing capabilities gives rise to a need for thorough testing of large software systems, including system inter-component interfaces. The objective of this research is to build a method for inter-procedural, inter-unit analysis which allows us to analyse large and complex software systems, including multi-architecture projects (like Android OS), and to support projects with complex build systems. Since the selected Clang Static Analyzer uses source code directly as input data, we need to develop a special technique to enable inter-unit analysis for such an analyzer. This problem is of a special nature because of C and C++ language features that assume and encourage the separate compilation of project files. We describe the build and analysis system that was implemented around Clang Static Analyzer to enable inter-unit analysis, and consider problems related to the support of complex projects. We also consider the task of merging abstract syntax trees of translation units and its related problems, such as handling conflicting definitions and supporting complex build systems and complex projects, including multi-architecture projects, with examples. We consider both issues related to language design and human-related mistakes (which may be intentional). We describe some heuristics that were used in this work to make the merging process faster. The developed system was tested using Android OS as input to show that it is applicable even for such complicated projects. This system does not depend on the inter-procedural analysis method and allows arbitrary changes of its algorithm.
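
    As a toy illustration of the merge bookkeeping discussed above (real AST merging in Clang is far more involved), consider combining per-translation-unit definition tables while flagging conflicting definitions:

        def merge_units(units):
            """units: list of {symbol: definition} dicts, one per translation unit."""
            merged, conflicts = {}, {}
            for table in units:
                for symbol, definition in table.items():
                    if symbol in merged and merged[symbol] != definition:
                        conflicts.setdefault(symbol, {merged[symbol]}).add(definition)
                    else:
                        merged.setdefault(symbol, definition)
            return merged, conflicts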

  12. An efficient central DOA tracking algorithm for multiple incoherently distributed sources

    Science.gov (United States)

    Hassen, Sonia Ben; Samet, Abdelaziz

    2015-12-01

    In this paper, we develop a new tracking method for the direction-of-arrival (DOA) parameters of multiple incoherently distributed (ID) sources. The new approach is based on a simple covariance fitting optimization technique exploiting the central and non-central moments of the source angular power densities to estimate the central DOAs. The current estimates are treated as measurements provided to a Kalman filter that models the dynamic property of directional changes for the moving sources. The covariance-fitting-based algorithm and Kalman filtering theory are then combined to formulate an adaptive tracking algorithm. Our algorithm is compared to the fast approximated power iteration-total least squares-estimation of signal parameters via rotational invariance techniques (FAPI-TLS-ESPRIT) algorithm, which uses the TLS-ESPRIT method together with subspace updating via the FAPI algorithm. It will be shown that the proposed algorithm offers excellent DOA tracking performance and outperforms the FAPI-TLS-ESPRIT method, especially at low signal-to-noise ratio (SNR) values. Moreover, the performance of both methods improves as the SNR values increase, and this improvement is more prominent for the FAPI-TLS-ESPRIT method. However, their performance degrades when the number of sources increases. It will also be shown that our method depends on the form of the angular distribution function when tracking the central DOAs. Finally, it will be shown that the more widely the sources are spaced, the more accurately the proposed method can track the DOAs.
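
    The filtering stage can be as simple as a constant-velocity Kalman filter over the per-snapshot central-DOA measurements; the sketch below shows that generic smoother only, with the covariance-fitting front end (and all noise levels) left as placeholders.

        import numpy as np

        def track_doa(z, dt=1.0, q=1e-4, r=1e-2):
            """Smooth a sequence z of noisy central-DOA measurements."""
            F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [angle, angular rate]
            H = np.array([[1.0, 0.0]])
            Q, R = q * np.eye(2), np.array([[r]])
            x, P = np.array([z[0], 0.0]), np.eye(2)
            estimates = []
            for zk in z:
                x, P = F @ x, F @ P @ F.T + Q                  # predict
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
                x = x + K @ (np.atleast_1d(zk) - H @ x)        # update
                P = (np.eye(2) - K @ H) @ P
                estimates.append(x[0])
            return estimates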

  13. The effect of temperature and the control rod position on the spatial neutron flux distribution in the Syrian Miniature Neutron Source Reactor

    International Nuclear Information System (INIS)

    Khattab, K.; Omar, H.; Ghazi, N.

    2007-01-01

    The effects of water and fuel temperature increases and of changes in the control rod position on the spatial neutron flux distribution in the Syrian Miniature Neutron Source Reactor (MNSR) are discussed. The cross sections of all the reactor components at different temperatures are generated using the WIMSD4 code. These group constants are then used in the CITATION code to calculate the spatial neutron flux distribution using four energy groups. This work shows that the increase in water and fuel temperature during the reactor's daily operating time does not affect the spatial neutron flux distribution in the reactor. Changing the control rod position likewise does not affect the spatial neutron flux distribution except in the region around the control rod. This stability of the spatial neutron flux distribution, especially in the inner and outer irradiation sites, makes the MNSR a good tool for the neutron activation analysis (NAA) technique and for the production of radioisotopes with medium or short half-lives during the reactor's daily operating time. (author)

  14. Validation of the coupling of mesh models to GEANT4 Monte Carlo code for simulation of internal sources of photons

    International Nuclear Information System (INIS)

    Caribe, Paulo Rauli Rafeson Vasconcelos; Cassola, Vagner Ferreira; Kramer, Richard; Khoury, Helen Jamil

    2013-01-01

    The use of three-dimensional models described by polygonal meshes in numerical dosimetry enables more accurate modeling of complex objects than the use of simple solids. The objectives of this work were to validate the coupling of mesh models to the Monte Carlo code GEANT4 and to evaluate the influence of the number of vertices in the simulations used to obtain absorbed fractions of energy (AFEs). Validation of the coupling was performed for internal sources of photons with energies between 10 keV and 1 MeV, for spherical geometries described by GEANT4 and for three-dimensional models with different numbers of vertices and triangular or quadrilateral faces modeled using the Blender program. As a result, it was found that there were no significant differences between the AFEs for objects described by mesh models and objects described using GEANT4 solid volumes. Provided that the shape and the volume are maintained, decreasing the number of vertices used to describe an object does not significantly influence the dosimetric data, but it significantly decreases the time required to perform the dosimetric calculations, especially for energies below 100 keV.

  15. Post-quantum attacks on key distribution schemes in the presence of weakly stochastic sources

    International Nuclear Information System (INIS)

    Al–Safi, S W; Wilmott, C M

    2015-01-01

    It has been established that the security of quantum key distribution protocols can be severely compromised were one to permit an eavesdropper to possess even a very limited knowledge of the random sources used between the communicating parties. While such knowledge should always be expected in realistic experimental conditions, the result itself opened a new line of research to fully account for real-world weak randomness threats to quantum cryptography. Here we expand on this novel idea by describing a key distribution scheme that is provably secure against general attacks by a post-quantum adversary. We then discuss possible security consequences for such schemes under the assumption of weak randomness. (paper)

  16. Study and Analysis of an Intelligent Microgrid Energy Management Solution with Distributed Energy Sources

    Directory of Open Access Journals (Sweden)

    Swaminathan Ganesan

    2017-09-01

    In this paper, a robust energy management solution which facilitates the optimum and economic control of energy flows throughout a microgrid network is proposed. Renewable energy sources, whose penetration is increasing, are highly intermittent in nature; the proposed solution nevertheless demonstrates highly efficient energy management. This study enables precise management of power flows by forecasting renewable energy generation, estimating the availability of energy in the storage batteries, and invoking the appropriate mode of operation, based on the load demand, to achieve efficient and economic operation. The predefined modes of operation are derived from an expert rule set and schedule the load and the distributed energy sources along with the utility grid.
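
    A skeletal version of such rule-based mode selection is shown below; the thresholds, mode names and signature are invented for illustration and are not taken from the paper.

        def choose_mode(pv_forecast_kw, load_kw, soc, soc_min=0.2, soc_max=0.9):
            """Pick an operating mode from forecast generation, load and battery SOC."""
            surplus = pv_forecast_kw - load_kw
            if surplus >= 0.0:
                return "charge_battery" if soc < soc_max else "export_to_grid"
            if soc > soc_min:
                return "discharge_battery"
            return "import_from_grid"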

  17. Calculating method for confinement time and charge distribution of ions in electron cyclotron resonance sources

    International Nuclear Information System (INIS)

    Dougar-Jabon, V.D.; Umnov, A.M.; Kutner, V.B.

    1996-01-01

    It is common knowledge that the electrostatic potential well in the core plasma of electron cyclotron resonance sources exerts strict control over the generation of ions in high charge states. This work is aimed at finding the dependence of the lifetime of ions on their charge states in the core region and at elaborating a numerical model of ion charge dispersion, not only for core plasmas but for extracted beams as well. The calculated data are in good agreement with the experimental results on the charge distributions and current magnitudes of beams extracted from the 14 GHz DECRIS source. copyright 1996 American Institute of Physics

  18. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  19. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    Science.gov (United States)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities, of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic emissions (e.g., Duren & Miller, 2012) and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales, from diurnal to weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary on time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes and hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. However, no single combination of orbit

  20. The P1-approximation for the Distribution of Neutrons from a Pulsed Source in Hydrogen

    International Nuclear Information System (INIS)

    Claesson, A.

    1963-12-01

    The asymptotic distribution of neutrons from a pulsed, high-energy source in an infinite moderator was obtained earlier in a 'diffusion' approximation. In that paper the cross section was assumed to be constant over the whole energy region and the time derivative of the first moment was disregarded. Here, an analytic expression is first obtained for the density in a P1-approximation. The result, however, is very complicated, and it is shown that an asymptotic solution can be found in a simpler way. By taking into account the low hydrogen scattering cross section at the source energy, it follows that the space dependence of the distribution is weaker than that obtained earlier. The importance of retaining the time derivative of the first moment is further demonstrated in a perturbation approximation.