WorldWideScience

Sample records for source coding dsc

  1. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    Science.gov (United States)

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose the Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of the classical DSC by employing the decoding delay concept, which enables the use of the maximally correlated portion of sensor samples during the event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications with a massive number of sensors, towards the realization of the Internet of Sensing Things (IoST).
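
    A minimal sketch of the syndrome-based Slepian-Wolf step that DSC schemes of this kind build on (illustrative only, not the authors' D-DSC implementation): assuming the Hamming(7,4) code and samples that differ in at most one bit per block, a cluster member sends only the 3-bit syndrome of its 7-bit sample, and the sink recovers the sample from the clusterhead's correlated reading.

```python
# Syndrome-based DSC sketch: a cluster member compresses a 7-bit block x
# into a 3-bit syndrome; the sink recovers x from the syndrome plus the
# clusterhead's correlated reading y (x and y differ in at most one bit).
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is j in binary.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def encode(x):
    """Cluster member: send only the 3-bit syndrome of its sample."""
    return H @ x % 2

def decode(s, y):
    """Sink: recover x from the syndrome s and side information y."""
    s_e = (s + H @ y) % 2            # syndrome of the pattern e = x XOR y
    e = np.zeros(7, dtype=int)
    if s_e.any():                    # nonzero syndrome -> one bit differs
        pos = int(''.join(map(str, s_e)), 2) - 1   # column index of H
        e[pos] = 1
    return (y + e) % 2

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 7)                # member's sample
y = x.copy(); y[rng.integers(7)] ^= 1    # clusterhead's correlated sample
assert np.array_equal(decode(encode(x), y), x)   # 7 bits carried by 3
```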

  2. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  3. The development of the code package PERMAK-3D//SC-1

    International Nuclear Information System (INIS)

    Bolobov, P. A.; Oleksuk, D. A.

    2011-01-01

    Code package PERMAK-3D//SC-1 was developed for performing pin-by-pin coupled neutronic and thermal hydraulic calculation of a core fragment of seven fuel assemblies and was designed on the basis of the 3D multigroup pin-by-pin code PERMAK-3D and the 3D (subchannel) thermal hydraulic code SC-1. The code package predicts axial and radial pin-by-pin power distribution and coolant parameters in the simulated region (enthalpies, velocities, void fractions, boiling and DNBR margins). The report describes some new steps in code package development. Some PERMAK-3D//SC-1 outcomes of WWER calculations are presented in the report. (Authors)

  4. Development and validation of an open source quantification tool for DSC-MRI studies.

    Science.gov (United States)

    Gordaliza, P M; Mateos-Pérez, J M; Montesinos, P; Guzmán-de-Villoria, J A; Desco, M; Vaquero, J J

    2015-03-01

    This work presents the development of an open source tool for the quantification of dynamic susceptibility-weighted contrast-enhanced (DSC) perfusion studies. The development of this tool is motivated by the lack of open source tools implemented on open platforms that allow external developers to implement their own quantification methods easily and without having to pay for a development license. This quantification tool was developed as a plugin for the ImageJ image analysis platform using the Java programming language. A modular approach was used in the implementation of the components, in such a way that new methods can be added without breaking any of the existing functionality. For the validation process, images from seven patients with brain tumors were acquired and quantified with the presented tool and with a widely used clinical software package, and the resulting perfusion parameters were compared. Perfusion parameters and the corresponding parametric images were obtained. When no gamma-fitting is used, an excellent agreement with the tool used as a gold standard was obtained (R^2 > 0.8, and values are within the 95% CI limits in Bland-Altman plots). An open source tool that performs quantification of perfusion studies using magnetic resonance imaging has been developed and validated against a clinical software package. It works as an ImageJ plugin and the source code has been published with an open source license. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits significant potential in many application fields, up to now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.

  6. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  7. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side, offering to shift processing steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  8. A New Energy-Efficient Data Transmission Scheme Based on DSC and Virtual MIMO for Wireless Sensor Network

    OpenAIRE

    Li, Na; Zhang, Liwen; Li, Bing

    2015-01-01

    Energy efficiency is one of the primary performance parameters in a wireless sensor network (WSN). To improve the energy efficiency of WSNs, we introduce distributed source coding (DSC) and virtual multiple-input multiple-output (MIMO) into the wireless sensor network and then propose a new data transmission scheme called DSC-MIMO. DSC-MIMO compresses the source data using distributed source coding before transmitting, which is different from the existing communication schemes. Data compression c...

  9. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes) for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.

  10. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low-density parity-check accumulate (LDPCA) codes.

  11. Joint source-channel coding using variable length codes

    NARCIS (Netherlands)

    Balakirsky, V.B.

    2001-01-01

    We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (m_k, t_k), where m_k is a message generated by the source and t_k is a time instant

  12. Transmission imaging with a coded source

    International Nuclear Information System (INIS)

    Stoner, W.W.; Sage, J.P.; Braun, M.; Wilson, D.T.; Barrett, H.H.

    1976-01-01

    The conventional approach to transmission imaging is to use a rotating anode x-ray tube, which provides the small, brilliant x-ray source needed to cast sharp images of acceptable intensity. Stationary anode sources, although inherently less brilliant, are more compatible with the use of large area anodes, and so they can be made more powerful than rotating anode sources. Spatial modulation of the source distribution provides a way to introduce detailed structure in the transmission images cast by large area sources, and this permits the recovery of high resolution images, in spite of the source diameter. The spatial modulation is deliberately chosen to optimize recovery of image structure; the modulation pattern is therefore called a ''code.'' A variety of codes may be used; the essential mathematical property is that the code possess a sharply peaked autocorrelation function, because this property permits the decoding of the raw image cast by the coded source. Random point arrays, non-redundant point arrays, and the Fresnel zone pattern are examples of suitable codes. This paper is restricted to the case of the Fresnel zone pattern code, which has the unique additional property of generating raw images analogous to Fresnel holograms. Because the spatial frequencies of these raw images are extremely coarse compared with actual holograms, a photoreduction step onto a holographic plate is necessary before the decoded image may be displayed with the aid of coherent illumination.

  13. Present state of the SOURCES computer code

    International Nuclear Information System (INIS)

    Shores, Erik F.

    2002-01-01

    In various stages of development for over two decades, the SOURCES computer code continues to calculate neutron production rates and spectra from four types of problems: homogeneous media, two-region interfaces, three-region interfaces and that of a monoenergetic alpha particle beam incident on a slab of target material. Graduate work at the University of Missouri - Rolla, in addition to user feedback from a tutorial course, provided the impetus for a variety of code improvements. Recently upgraded to version 4B, initial modifications to SOURCES focused on updates to the 'tape5' decay data library. Shortly thereafter, efforts focused on development of a graphical user interface for the code. This paper documents the Los Alamos SOURCES Tape1 Creator and Library Link (LASTCALL) and describes additional library modifications in more detail. Minor improvements and planned enhancements are discussed.

  14. Image authentication using distributed source coding.

    Science.gov (United States)

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.
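
    The Slepian-Wolf coding of the projection is the paper's key ingredient and is not reproduced here; the toy sketch below only illustrates the underlying idea that a quantized pseudo-random projection tolerates mild global adjustments while exposing localized tampering (all names, sizes, and the quantization step are our own assumptions).

```python
# Illustrative projection check: legitimate brightness/contrast changes
# barely move the quantized projection, a tampered block moves it a lot.
import numpy as np

def auth_data(img, k=64, q=16, seed=7):
    """Quantized pseudo-random projection of an image (seed is shared)."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((k, img.size))
    return np.round(P @ (img.ravel() / 255.0) / q)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, (32, 32)).astype(float)
legit = np.clip(original * 1.05 + 3, 0, 255)       # mild contrast shift
tampered = original.copy(); tampered[8:16, 8:16] = 255

ref = auth_data(original)
for name, img in [("legitimate", legit), ("tampered", tampered)]:
    dist = np.mean(np.abs(auth_data(img) - ref))
    print(name, "distance:", round(float(dist), 3))  # tampered is larger
```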

  15. Measuring Modularity in Open Source Code Bases

    Directory of Open Access Journals (Sweden)

    Roberto Milev

    2009-03-01

    Modularity of an open source software code base has been associated with growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different size, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.
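
    The article's own metric and toolkit are not reproduced here; as a generic stand-in, the sketch below computes a simple modularity ratio (the share of dependencies that stay within a top-level module) over a made-up dependency list.

```python
# Toy modularity ratio: intra-module dependencies over all dependencies.
deps = [("net.http", "net.io"), ("net.http", "util.log"),
        ("net.io", "util.buf"), ("util.log", "util.buf"),
        ("app.main", "net.http"), ("app.main", "util.log")]

def top(module):
    """Top-level module is the first dotted path segment."""
    return module.split(".")[0]

intra = sum(top(a) == top(b) for a, b in deps)
print(f"modularity ratio: {intra}/{len(deps)} = {intra / len(deps):.2f}")
```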

  16. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, which is implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the requirement of correct DSC decoding, which makes the proposed algorithm achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced for the proposed algorithm to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.
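
    As a rough illustration of the side-information step named above, the sketch below predicts one band from two previously decoded bands with a linear regression, leaving only a low-power residual for the DSC decoder to correct (synthetic data; not the authors' code).

```python
# Linear-regression side information: predict band 2 from bands 0 and 1.
import numpy as np

rng = np.random.default_rng(0)
base = rng.normal(size=(64, 64))                  # shared scene structure
bands = [2.0 * base + 0.1 * rng.normal(size=base.shape),
         1.5 * base + 0.1 * rng.normal(size=base.shape),
         1.2 * base + 0.1 * rng.normal(size=base.shape)]

# Regress band 2 on bands 0 and 1 (plus an intercept) in least squares.
A = np.column_stack([bands[0].ravel(), bands[1].ravel(),
                     np.ones(base.size)])
coef, *_ = np.linalg.lstsq(A, bands[2].ravel(), rcond=None)
side_info = (A @ coef).reshape(base.shape)

residual = bands[2] - side_info
print("band power:    ", float(np.var(bands[2])))
print("residual power:", float(np.var(residual)))   # much smaller
```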

  17. Code Forking, Governance, and Sustainability in Open Source Software

    OpenAIRE

    Juho Lindman; Linus Nyman

    2013-01-01

    The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibilit...

  18. Oil Analysis by Fast DSC

    NARCIS (Netherlands)

    Wetten, I.A.; Herwaarden, A.W.; Splinter, R.; Ruth, van S.M.

    2014-01-01

    Thermal analysis of Olive and Sunflower Oil is done by Fast DSC to evaluate its potential to replace DSC for adulteration detection. DSC measurements take hours, Fast DSC minutes. Peak temperatures of the crystallisation peak in cooling for different Olive and Sunflower Oils are both comparable to

  19. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
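
    A toy instance of the idea, assuming the (7,4) Hamming code as the underlying error-correcting code (our choice for illustration): a sparse 7-digit source block is treated as an error pattern, only its 3-digit syndrome is stored, and decompression picks the minimum-weight coset leader.

```python
# Syndrome-source-coding sketch: any 7-bit block of weight <= 1 is
# recovered exactly from its 3-bit syndrome.
import numpy as np

H = np.array([[0, 0, 0, 1, 1, 1, 1],      # parity-check matrix of the
              [0, 1, 1, 0, 0, 1, 1],      # (7,4) Hamming code; column j
              [1, 0, 1, 0, 1, 0, 1]])     # is j in binary

def compress(block):
    return H @ block % 2                   # 7 source digits -> 3 digits

def decompress(syndrome):
    block = np.zeros(7, dtype=int)
    if syndrome.any():                     # coset leader of weight 1
        block[int(''.join(map(str, syndrome)), 2) - 1] = 1
    return block

for block in (np.zeros(7, dtype=int),
              np.eye(7, dtype=int)[3]):    # e.g. 0001000
    assert np.array_equal(decompress(compress(block)), block)
print("7-digit sparse blocks stored as 3-digit syndromes")
```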

  20. On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk

    2013-01-01

    quality is developed. A linear coding structure designed to gracefully encapsulate layered source coding provides both low complexity of the utilised linear coding while enabling robust erasure correction in the form of fountain coding capabilities. The proposed linear coding structure advocates efficient...

  1. Research on Primary Shielding Calculation Source Generation Codes

    Science.gov (United States)

    Zheng, Zheng; Mei, Qiliang; Li, Hui; Shangguan, Danhua; Zhang, Guangchun

    2017-09-01

    Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDFs) for the source particle sample code of the J Monte Carlo Transport (JMCT) code, and a source particle sample code is developed to sample source particle directions, types, coordinates, energies and weights from the CDFs. A source generation code is developed to transform three-dimensional (3D) power distributions in xyz geometry to source distributions in r-θ-z geometry for the J Discrete Ordinate Transport (JSNT) code. Validations on the PSC models of the Qinshan No.1 nuclear power plant (NPP) and the CAP1400 and CAP1700 reactors are performed. Numerical results show that the theoretical model and the codes are both correct.
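
    The CDF-based sampling named above can be illustrated with plain inverse-transform sampling (the power distribution below is made up; this is not the JMCT interface).

```python
# Build a CDF from a discretized power distribution and draw source
# particle zones by inverse-transform sampling.
import numpy as np

rng = np.random.default_rng(0)
power = np.array([0.8, 1.2, 1.9, 2.6, 1.4, 0.6])  # relative power per zone
cdf = np.cumsum(power) / power.sum()

u = rng.random(100_000)
zones = np.searchsorted(cdf, u)          # zone index for each particle
freq = np.bincount(zones, minlength=power.size) / u.size
print(np.round(freq, 3))                 # ~ power / power.sum()
```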

  2. The Visual Code Navigator : An Interactive Toolset for Source Code Investigation

    NARCIS (Netherlands)

    Lommerse, Gerard; Nossin, Freek; Voinea, Lucian; Telea, Alexandru

    2005-01-01

    We present the Visual Code Navigator, a set of three interrelated visual tools that we developed for exploring large source code software projects from three different perspectives, or views: The syntactic view shows the syntactic constructs in the source code. The symbol view shows the objects a

  3. Source Code Stylometry Improvements in Python

    Science.gov (United States)

    2017-12-14

    Just as a person can be identified via their handwriting or an author identified by their style or prose, programmers can be identified by their code. Provided a labelled training set of code samples (example in Fig. 1), the techniques used in stylometry can identify the author of a piece of code or even...

  4. Bit rates in audio source coding

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.

    1992-01-01

    The goal is to introduce and solve the audio coding optimization problem. Psychoacoustic results such as masking and excitation pattern models are combined with results from rate distortion theory to formulate the audio coding optimization problem. The solution of the audio optimization problem is a

  5. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between source symbols and the side information.

  6. Data processing with microcode designed with source coding

    Science.gov (United States)

    McCoy, James A; Morrison, Steven E

    2013-05-07

    Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.

  7. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  8. Iterative List Decoding of Concatenated Source-Channel Codes

    Directory of Open Access Journals (Sweden)

    Hedayat Ahmadreza

    2005-01-01

    Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect will get magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable-length codes for joint source-channel decoding. In this paper, we improve the performance of residual redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLCs is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.

  9. The Astrophysics Source Code Library by the numbers

    Science.gov (United States)

    Allen, Alice; Teuben, Peter; Berriman, G. Bruce; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert; Ryan, PW; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Wallin, John; Warmels, Rein

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) was founded in 1999 by Robert Nemiroff and John Wallin. ASCL editors seek both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and add entries for the found codes to the library. Software authors can submit their codes to the ASCL as well. This ensures a comprehensive listing covering a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL is indexed by both NASA’s Astrophysics Data System (ADS) and Web of Science, making software used in research more discoverable. This presentation covers the growth in the ASCL’s number of entries, the number of citations to its entries, and in which journals those citations appear. It also discusses what changes have been made to the ASCL recently, and what its plans are for the future.

  10. Code Forking, Governance, and Sustainability in Open Source Software

    Directory of Open Access Journals (Sweden)

    Juho Lindman

    2013-01-01

    The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility of forking code, affects the governance and sustainability of open source initiatives on three distinct levels: software, community, and ecosystem. On the software level, the right to fork makes planned obsolescence, versioning, vendor lock-in, end-of-support issues, and similar initiatives all but impossible to implement. On the community level, forking impacts both sustainability and governance through the power it grants the community to safeguard against unfavourable actions by corporations or project leaders. On the business-ecosystem level forking can serve as a catalyst for innovation while simultaneously promoting better quality software through natural selection. Thus, forking helps keep open source initiatives relevant and presents opportunities for the development and commercialization of current and abandoned programs.

  11. Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence...

  12. Blahut-Arimoto algorithm and code design for action-dependent source coding problems

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar

    2013-01-01

    The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function.
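
    For reference, the classical Blahut-Arimoto iteration that the paper's algorithm generalizes can be sketched as follows; this computes a point on the ordinary rate-distortion curve R(D), without the action-dependent extension (standard algorithm, toy inputs of our choosing).

```python
# Classical Blahut-Arimoto: alternate between the optimal test channel
# for a fixed output marginal and the marginal induced by that channel.
import numpy as np

def blahut_arimoto(p_x, d, beta, n_iter=200):
    """One R(D) point for source p_x, distortion matrix d, slope beta."""
    q = np.full(d.shape[1], 1.0 / d.shape[1])      # output marginal
    for _ in range(n_iter):
        w = q * np.exp(-beta * d)                  # test channel p(xhat|x)
        w /= w.sum(axis=1, keepdims=True)
        q = p_x @ w                                # induced output marginal
    D = float(p_x @ (w * d).sum(axis=1))
    R = float(np.sum(p_x[:, None] * w * np.log2(w / q[None, :])))
    return R, D

p_x = np.array([0.5, 0.5])                         # binary uniform source
d = 1.0 - np.eye(2)                                # Hamming distortion
R, D = blahut_arimoto(p_x, d, beta=3.0)
print(f"R = {R:.3f} bits at D = {D:.3f}")          # matches 1 - H(D)
```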

  13. Distributed coding of multiview sparse sources with joint recovery

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren

    2016-01-01

    In support of applications involving multiview sources in distributed object recognition using lightweight cameras, we propose a new method for the distributed coding of sparse sources as visual descriptor histograms extracted from multiview images. The problem is challenging due to the computati...... transform (SIFT) descriptors extracted from multiview images shows that our method leads to bit-rate saving of up to 43% compared to the state-of-the-art distributed compressed sensing method with independent encoding of the sources....

  14. Development of in-vessel source term analysis code, TRACER

    International Nuclear Information System (INIS)

    Miyagi, K.; Miyahara, S.

    1996-01-01

    Analyses of radionuclide transport in fuel failure accidents (generally referred to as source terms) are considered to be important, especially in severe accident evaluation. The TRACER code has been developed to realistically predict the time-dependent behavior of FPs and aerosols within the primary cooling system for a wide range of fuel failure events. This paper presents the model description, results of a validation study, the recent model advancement status of the code, and results of check-out calculations under reactor conditions. (author)

  15. Forensic characterization of HDPE pipes by DSC.

    Science.gov (United States)

    Sajwan, Madhuri; Aggarwal, Saroj; Singh, R B

    2008-03-05

    The melting behavior of 28 high density polyethylene (HDPE) pipe samples manufactured and supplied by 13 different manufacturers in India was examined by differential scanning calorimetry (DSC) to find out whether this parameter could be used to differentiate between HDPE pipe samples which are chemically the same but manufactured by different manufacturers. The results indicate that the melting temperature may serve as a useful criterion for differentiating HDPE (i) pipe samples from different sources and (ii) samples of different diameter from the same source.

  16. Java Source Code Analysis for API Migration to Embedded Systems

    Energy Technology Data Exchange (ETDEWEB)

    Winter, Victor [Univ. of Nebraska, Omaha, NE (United States); McCoy, James A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guerrero, Jonathan [Univ. of Nebraska, Omaha, NE (United States); Reinke, Carl Werner [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perry, James Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java’s APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.

  17. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing the traditional microphone arrays. This allows for flexibility of placement and high spatial diversity. However, it comes with the price of several challenges, including the limited power and bandwidth resources for wireless transmission of audio recordings. In such a setup, we study the problem of source coding for the compression of the audio recordings before the transmission in order to reduce the power consumption and/or transmission bandwidth by reduction in the transmission rates. Source coding for wireless microphones in reverberant environments has several special characteristics which make it more challenging in comparison with regular audio coding. The signals which are acquired by the microphones...

  18. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    Science.gov (United States)

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists among some students' homework. It is not easy for teachers to judge whether source code has been plagiarized or not. Traditional detection algorithms cannot fit this…

  19. Automating RPM Creation from a Source Code Repository

    Science.gov (United States)

    2012-02-01

    [Report excerpt: RPM .spec build fragments (%pre, %prep, %setup, %build sections invoking ./autogen.sh, ./configure --with-db=/apps/db --with-libpq=/apps/postgres, and make, with install steps under $RPM_BUILD_ROOT), illustrating automated RPM creation from a source code repository.]

  20. Source Coding in Networks with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2016-01-01

    results to a joint source coding and denoising problem. We consider a network with a centralized topology and a given weighted sum-rate constraint, where the received signals at the center are to be fused to maximize the output SNR while enforcing no linear distortion. We show that one can design...

  1. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    Directory of Open Access Journals (Sweden)

    Jin Li

    2014-01-01

    Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually work on board satellites where resources, such as power, memory, and processing capacity, are limited. For multispectral images, the compression algorithms based on 3D transforms (like 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the DSC strategy of Slepian-Wolf (SW) based on QC-LDPC in a deeply coupled way to remove the residual redundancy between the adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than the traditional compression approaches.

  2. Multispectral image compression based on DSC combined with CCSDS-IDC.

    Science.gov (United States)

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually work on board satellites where resources, such as power, memory, and processing capacity, are limited. For multispectral images, the compression algorithms based on 3D transforms (like 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the DSC strategy of Slepian-Wolf (SW) based on QC-LDPC in a deeply coupled way to remove the residual redundancy between the adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than the traditional compression approaches.
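
    A sketch of the per-band front end described in the two records above: a one-level Haar DWT followed by splitting coefficient magnitudes into bit planes. The actual BPE and the QC-LDPC Slepian-Wolf stage are not reproduced; the transform arrangement below is our simplification.

```python
# One-level 2-D Haar transform per band, then bit-plane extraction.
import numpy as np

def haar2d(img):
    """One-level separable Haar transform (orthonormal)."""
    a = (img[0::2] + img[1::2]) / np.sqrt(2)   # vertical low-pass
    d = (img[0::2] - img[1::2]) / np.sqrt(2)   # vertical high-pass
    rows = np.vstack([a, d])
    a = (rows[:, 0::2] + rows[:, 1::2]) / np.sqrt(2)
    d = (rows[:, 0::2] - rows[:, 1::2]) / np.sqrt(2)
    return np.hstack([a, d])

rng = np.random.default_rng(0)
band = rng.integers(0, 4096, (8, 8)).astype(float)   # one 12-bit band
coeffs = np.round(np.abs(haar2d(band))).astype(int)

# Emit bit planes from most to least significant.
for plane in range(int(coeffs.max()).bit_length() - 1, -1, -1):
    bits = (coeffs >> plane) & 1
    print(f"plane {plane}: {bits.sum()} ones")
```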

  3. Coded aperture imaging of alpha source spatial distribution

    International Nuclear Information System (INIS)

    Talebitaher, Alireza; Shutler, Paul M.E.; Springham, Stuart V.; Rawat, Rajdeep S.; Lee, Paul

    2012-01-01

    The Coded Aperture Imaging (CAI) technique has been applied with CR-39 nuclear track detectors to image alpha particle source spatial distributions. The experimental setup comprised: a 226Ra source of alpha particles, a laser-machined CAI mask, and CR-39 detectors, arranged inside a vacuum enclosure. Three different alpha particle source shapes were synthesized by using a linear translator to move the 226Ra source within the vacuum enclosure. The coded mask pattern used is based on a Singer Cyclic Difference Set, with 400 pixels and 57 open square holes (representing ρ = 1/7 = 14.3% open fraction). After etching of the CR-39 detectors, the area, circularity, mean optical density and positions of all candidate tracks were measured by an automated scanning system. Appropriate criteria were used to select alpha particle tracks, and a decoding algorithm applied to the (x, y) data produced the decoded image of the source. Signal to Noise Ratio (SNR) values obtained for alpha particle CAI images were found to be substantially better than those for corresponding pinhole images, although the CAI-SNR values were below the predictions of theoretical formulae. Monte Carlo simulations of CAI and pinhole imaging were performed in order to validate the theoretical SNR formulae and also our CAI decoding algorithm. There was found to be good agreement between the theoretical formulae and SNR values obtained from simulations. Possible reasons for the lower SNR obtained for the experimental CAI study are discussed.
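
    The decoding principle can be shown deterministically with a small cyclic difference set standing in for the Singer set used in the experiment (the (13, 4, 1) planar difference set {0, 1, 3, 9} mod 13 is our choice): periodic cross-correlation of the detector counts with the mask, followed by an exact baseline correction, recovers the source.

```python
# CAI decoding with a cyclic difference-set mask, in one dimension.
import numpy as np

n, k, lam = 13, 4, 1
A = np.zeros(n); A[[0, 1, 3, 9]] = 1        # mask: open holes at the set

source = np.zeros(n)
source[[2, 7]] = [1.0, 0.5]                 # two alpha-emitting spots

# Each source point casts a cyclically shifted copy of the mask.
detector = sum(s * np.roll(A, j) for j, s in enumerate(source) if s)

# Periodic cross-correlation with the mask...
corr = np.array([np.dot(detector, np.roll(A, j)) for j in range(n)])
# ...equals lam * (total flux) + (k - lam) * source, so invert exactly:
total = detector.sum() / k                  # each unit deposits k counts
decoded = (corr - lam * total) / (k - lam)
print(np.round(decoded, 6))                 # 1.0 at pixel 2, 0.5 at 7
```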

  4. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  5. The Astrophysics Source Code Library: Supporting software publication and citation

    Science.gov (United States)

    Allen, Alice; Teuben, Peter

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net), established in 1999, is a free online registry for source codes used in research that has appeared in, or been submitted to, peer-reviewed publications. The ASCL is indexed by the SAO/NASA Astrophysics Data System (ADS) and Web of Science and is citable by using the unique ascl ID assigned to each code. In addition to registering codes, the ASCL can house archive files for download and assign them DOIs. The ASCL advocates for software citation on par with article citation, participates in multidisciplinary events such as Force11, OpenCon, and the annual Workshop on Sustainable Software for Science, works with journal publishers, and organizes Special Sessions and Birds of a Feather meetings at national and international conferences such as Astronomical Data Analysis Software and Systems (ADASS), European Week of Astronomy and Space Science, and AAS meetings. In this presentation, I will discuss some of the challenges of gathering credit for publishing software and ideas and efforts from other disciplines that may be useful to astronomy.

  6. Source Code Vulnerabilities in IoT Software Systems

    Directory of Open Access Journals (Sweden)

    Saleh Mohamed Alnaeli

    2017-08-01

    An empirical study that examines the usage of known vulnerable statements in software systems developed in C/C++ and used for IoT is presented. The study is conducted on 18 open source systems comprising millions of lines of code and containing thousands of files. Static analysis methods are applied to each system to determine the number of unsafe commands (e.g., strcpy, strcmp, and strlen) that are well-known among research communities to cause potential risks and security concerns, thereby decreasing a system’s robustness and quality. These unsafe statements are banned by many companies (e.g., Microsoft). The use of these commands should be avoided from the start when writing code and should be removed from legacy code over time, as recommended by new C/C++ language standards. Each system is analyzed and the distribution of the known unsafe commands is presented. Historical trends in the usage of the unsafe commands of 7 of the systems are presented to show how the studied systems evolved over time with respect to the vulnerable code. The results show that the most prevalent unsafe command used for most systems is memcpy, followed by strlen. These results can be used to help train software developers on secure coding practices so that they can write higher quality software systems.
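
    A rough sketch of this kind of static scan (a real tool would parse the code rather than pattern-match; the search path and the function list below are our own choices):

```python
# Count occurrences of known-unsafe C calls in a source tree.
import re
from collections import Counter
from pathlib import Path

UNSAFE = ("strcpy", "strcat", "sprintf", "gets", "strcmp", "strlen",
          "memcpy")
CALL = re.compile(r"\b(" + "|".join(UNSAFE) + r")\s*\(")

def scan(root):
    counts = Counter()
    for path in Path(root).rglob("*.[ch]"):      # .c and .h files
        counts.update(CALL.findall(path.read_text(errors="ignore")))
    return counts

if __name__ == "__main__":
    for func, n in scan(".").most_common():
        print(f"{func:10s} {n}")
```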

  7. Verification test calculations for the Source Term Code Package

    International Nuclear Information System (INIS)

    Denning, R.S.; Wooton, R.O.; Alexander, C.A.; Curtis, L.A.; Cybulskis, P.; Gieseke, J.A.; Jordan, H.; Lee, K.W.; Nicolosi, S.L.

    1986-07-01

    The purpose of this report is to demonstrate the reasonableness of the Source Term Code Package (STCP) results. Hand calculations have been performed spanning a wide variety of phenomena within the context of a single accident sequence, a loss of all ac power with late containment failure, in the Peach Bottom (BWR) plant, and compared with STCP results. The report identifies some of the limitations of the hand calculation effort. The processes involved in a core meltdown accident are complex and coupled. Hand calculations by their nature must deal with gross simplifications of these processes. Their greatest strength is as an indicator that a computer code contains an error, for example that it doesn't satisfy basic conservation laws, rather than in showing the analysis accurately represents reality. Hand calculations are an important element of verification but they do not satisfy the need for code validation. The code validation program for the STCP is a separate effort. In general the hand calculation results show that models used in the STCP codes (e.g., MARCH, TRAP-MELT, VANESA) obey basic conservation laws and produce reasonable results. The degree of agreement and significance of the comparisons differ among the models evaluated. 20 figs., 26 tabs

  8. Tangent: Automatic Differentiation Using Source Code Transformation in Python

    OpenAIRE

    van Merriënboer, Bart; Wiltschko, Alexander B.; Moldovan, Dan

    2017-01-01

    Automatic differentiation (AD) is an essential primitive for machine learning programming systems. Tangent is a new library that performs AD using source code transformation (SCT) in Python. It takes numeric functions written in a syntactic subset of Python and NumPy as input, and generates new Python functions which calculate a derivative. This approach to automatic differentiation is different from existing packages popular in machine learning, such as TensorFlow and Autograd. Advantages ar...

  9. Asymmetric Joint Source-Channel Coding for Correlated Sources with Blind HMM Estimation at the Receiver

    Directory of Open Access Journals (Sweden)

    Ser Javier Del

    2005-01-01

    We consider the case of two correlated sources whose correlation has memory, modelled by a hidden Markov chain. The paper studies the problem of reliable communication of the information sent by one source over an additive white Gaussian noise (AWGN) channel when the output of the other source is available as side information at the receiver. We assume that the receiver has no a priori knowledge of the correlation statistics between the sources. In particular, we propose the use of a turbo code for joint source-channel coding of the transmitted source. The joint decoder uses an iterative scheme where the unknown parameters of the correlation model are estimated jointly within the decoding process. It is shown that reliable communication is possible at signal-to-noise ratios close to the theoretical limits set by the combination of the Shannon and Slepian-Wolf theorems.

  10. Towards Holography via Quantum Source-Channel Codes

    Science.gov (United States)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.
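
    As a toy version of the entropic calculation mentioned above, the sketch below evaluates I(A;B|C) = S(AC) + S(BC) - S(ABC) - S(C) for a three-qubit GHZ state rather than the paper's Ising Gibbs states (the state and the helper functions are our own example).

```python
# Conditional mutual information of a 3-qubit state via partial traces.
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def ptrace(rho, keep, dims=(2, 2, 2)):
    """Trace out the subsystems not listed in `keep`."""
    n = len(dims)
    rho = rho.reshape(dims * 2)
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=q, axis2=q + rho.ndim // 2)
    d = int(np.prod([dims[q] for q in keep]))
    return rho.reshape(d, d)

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho = np.outer(ghz, ghz)                       # |GHZ><GHZ| on A, B, C

cmi = (entropy(ptrace(rho, [0, 2])) + entropy(ptrace(rho, [1, 2]))
       - entropy(rho) - entropy(ptrace(rho, [2])))
print("I(A;B|C) =", round(cmi, 6))             # 1.0 for the GHZ state
```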

  11. Health physics source document for codes of practice

    International Nuclear Information System (INIS)

    Pearson, G.W.; Meggitt, G.C.

    1989-05-01

    Personnel preparing codes of practice often require basic Health Physics information or advice relating to radiological protection problems, and this document is written primarily to supply such information. Certain technical terms used in the text are explained in the extensive glossary. Due to the pace of change in the field of radiological protection it is difficult to produce an up-to-date document. This document was compiled during 1988, however, and therefore contains the principal changes brought about by the introduction of the Ionising Radiations Regulations (1985). The paper covers the nature of ionising radiation, its biological effects and the principles of control. It is hoped that the document will provide a useful source of information for both codes of practice and wider areas and stimulate readers to study radiological protection issues in greater depth. (author)

  12. Running the source term code package in Elebra MX-850

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-01-01

    The Source Term Code Package (STCP) is one of the main tools applied in calculations of the behavior of fission products from nuclear power plants. It is a set of computer codes to assist the calculation of the release of radioactive materials from the metallic containment of power reactors to the environment during a severe reactor accident. The original version of STCP runs on SDC computer systems, but as it has been written in FORTRAN 77, it is possible to run it on other systems such as IBM, Burroughs, Elebra, etc. The Elebra MX-8500 version of STCP contains 5 codes: MARCH 3, TRAP-MELT, TCCA, VANESA and NAVA. The example presented in this report considers a small LOCA in a PWR-type reactor. (M.I.)

  13. Microdosimetry computation code of internal sources - MICRODOSE 1

    International Nuclear Information System (INIS)

    Li Weibo; Zheng Wenzhong; Ye Changqing

    1995-01-01

    This paper describes a microdosimetry computation code, MICRODOSE 1, on the basis of the following methods: (1) the method of calculating f_1(z) for charged particles in unit density tissues; (2) the method of calculating f(z) for a point source; (3) the method of applying Fourier transform theory to the calculation of the compound Poisson process; (4) the method of using the fast Fourier transform technique to determine f(z). Some computed examples based on the code MICRODOSE 1 are given, including alpha particles emitted from 239Pu in alveolar lung tissues and from the radon progeny RaA and RaC in the human respiratory tract. (author). 13 refs., 6 figs.
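
    Method (3) above admits a compact illustration: on a discrete grid, the multi-event distribution of the compound Poisson process follows from the single-event distribution f1(z) as f = IFFT[exp(n_bar * (FFT[f1] - 1))] (the toy single-event spectrum, grid, and mean event number n_bar below are our assumptions).

```python
# Compound-Poisson specific-energy distribution via the FFT.
import numpy as np

nz = 1024
f1 = np.zeros(nz)
f1[10:60] = 1.0
f1 /= f1.sum()                     # single-event distribution f1(z)

n_bar = 2.5                        # mean number of events
F1 = np.fft.fft(f1)
f = np.real(np.fft.ifft(np.exp(n_bar * (F1 - 1.0))))
print("P(z = 0) =", round(float(f[0]), 4),
      "= exp(-n_bar) =", round(float(np.exp(-n_bar)), 4))
```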

  14. COMPASS: A source term code for investigating capillary barrier performance

    International Nuclear Information System (INIS)

    Zhou, Wei; Apted, J.J.

    1996-01-01

    A computer code, COMPASS, based on a compartment model approach, was developed to calculate the near-field source term of a high-level-waste repository under unsaturated conditions. COMPASS is applied to evaluate the expected performance of Richard's (capillary) barriers as backfills to divert infiltrating groundwater at Yucca Mountain. Comparing the release rates of four typical nuclides with and without the Richard's barrier, it is shown that the Richard's barrier significantly decreases the peak release rates from the Engineered Barrier System (EBS) into the host rock.

  15. Optimization of Coding of AR Sources for Transmission Across Channels with Loss

    DEFF Research Database (Denmark)

    Arildsen, Thomas

    Source coding concerns the representation of information in a source signal using as few bits as possible. In the case of lossy source coding, it is the encoding of a source signal using the fewest possible bits at a given distortion or, at the lowest possible distortion given a specified bit rate....... Channel coding is usually applied in combination with source coding to ensure reliable transmission of the (source coded) information at the maximal rate across a channel given the properties of this channel. In this thesis, we consider the coding of auto-regressive (AR) sources which are sources that can...... compared to the case where the encoder is unaware of channel loss. We finally provide an extensive overview of cross-layer communication issues which are important to consider due to the fact that the proposed algorithm interacts with the source coding and exploits channel-related information typically...

  16. A Comparison of Source Code Plagiarism Detection Engines

    Science.gov (United States)

    Lancaster, Thomas; Culwin, Fintan

    2004-06-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
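
    The tokenise-then-match approach described above can be sketched in a few lines: compare token-type streams (so renamed identifiers do not matter) and measure their longest common substring by dynamic programming (toy example; this is not MOSS or JPlag).

```python
# Paired structural metric: longest common substring of token types.
import io
import tokenize

def token_types(src):
    """Token-type stream of a Python source string."""
    toks = tokenize.generate_tokens(io.StringIO(src).readline)
    return [t.type for t in toks]

def longest_common_substring(a, b):
    best, prev = 0, [0] * (len(b) + 1)
    for x in a:
        cur = [0] * (len(b) + 1)
        for j, y in enumerate(b, 1):
            if x == y:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

s1 = "def f(a, b):\n    return a + b\n"
s2 = "def add(x, y):\n    return x + y\n"   # identifiers renamed
t1, t2 = token_types(s1), token_types(s2)
print(longest_common_substring(t1, t2), "of", min(len(t1), len(t2)))
```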

  17. Source Code Verification for Embedded Systems using Prolog

    Directory of Open Access Journals (Sweden)

    Frank Flederer

    2017-01-01

    System relevant embedded software needs to be reliable and, therefore, well tested, especially for aerospace systems. A common technique to verify programs is the analysis of their abstract syntax tree (AST). Tree structures can be elegantly analyzed with the logic programming language Prolog. Moreover, Prolog offers further advantages for a thorough analysis: on the one hand, it natively provides versatile options to efficiently process tree or graph data structures. On the other hand, Prolog's non-determinism and backtracking ease tests of different variations of the program flow without much effort. A rule-based approach with Prolog allows one to characterize the verification goals in a concise and declarative way. In this paper, we describe our approach to verify the source code of a flash file system with the help of Prolog. The flash file system is written in C++ and has been developed particularly for use in satellites. We transform a given abstract syntax tree of C++ source code into Prolog facts and derive the call graph and the execution sequence (tree), which then are further tested against verification goals. The different program flow branching due to control structures is derived by backtracking as subtrees of the full execution sequence. Finally, these subtrees are verified in Prolog. We illustrate our approach with a case study, where we search for incorrect applications of semaphores in embedded software using the real-time operating system RODOS. We rely on computation tree logic (CTL) and have designed an embedded domain specific language (DSL) in Prolog to express the verification goals.

  18. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWT's, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  19. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWT's, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  20. Modelling RF sources using 2-D PIC codes

    International Nuclear Information System (INIS)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWT's, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  1. Schroedinger’s Code: A Preliminary Study on Research Source Code Availability and Link Persistence in Astrophysics

    Science.gov (United States)

    Allen, Alice; Teuben, Peter J.; Ryan, P. Wesley

    2018-05-01

    We examined software usage in a sample set of astrophysics research articles published in 2015 and searched for the source codes for the software mentioned in these research papers. We categorized the software to indicate whether the source code is available for download and whether there are restrictions to accessing it, and if the source code is not available, whether some other form of the software, such as a binary, is. We also extracted hyperlinks from one journal’s 2015 research articles, as links in articles can serve as an acknowledgment of software use and lead to the data used in the research, and tested them to determine which of these URLs are still accessible. For our sample of 715 software instances in the 166 articles we examined, we were able to categorize 418 records according to whether source code was available and found that 285 unique codes were used, 58% of which offered the source code for download. Of the 2558 hyperlinks extracted from 1669 research articles, at best, 90% of them were available over our testing period.

  2. OSSMETER D3.4 – Language-Specific Source Code Quality Analysis

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim); H.J.S. Basten (Bas)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and prototypes of the tools that are needed for source code quality analysis in open source software projects. It builds upon the results of: • Deliverable 3.1 where infra-structure and

  3. Using National Drug Codes and drug knowledge bases to organize prescription records from multiple sources.

    Science.gov (United States)

    Simonaitis, Linas; McDonald, Clement J

    2009-10-01

    The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the inpatient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.
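
    A minimal sketch of the coverage measurement described above, with invented product codes and volumes: coverage is weighted by prescription volume, and the same counts expose the skew of the code distribution.

    ```python
    from collections import Counter

    # Hypothetical prescription records (one product code each) from one source.
    prescriptions = ["00093-0058", "00093-0058", "00093-0058", "55111-0467",
                     "55111-0467", "LOCAL-XYZ", "LOCAL-ABC"]
    dkb = {"00093-0058", "55111-0467"}   # codes known to the drug knowledge base

    volume = Counter(prescriptions)
    total = sum(volume.values())
    covered = sum(n for code, n in volume.items() if code in dkb)
    print(f"coverage by prescription volume: {covered / total:.1%}")

    # Skew: how few distinct codes account for half of the message volume.
    cum = 0
    for k, (code, n) in enumerate(volume.most_common(), start=1):
        cum += n
        if cum >= total / 2:
            print(f"{k} of {len(volume)} distinct codes cover 50% of volume")
            break
    ```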

  4. Neutron spallation source and the Dubna cascade code

    CERN Document Server

    Kumar, V; Goel, U; Barashenkov, V S

    2003-01-01

    The neutron multiplicity per incident proton, n/p, in collisions of a high energy proton beam with voluminous Pb and W targets has been estimated with the Dubna cascade code and compared with the available experimental data for the purpose of benchmarking the code. Contributions of various atomic and nuclear processes to heat production and the isotopic yield of secondary nuclei are also estimated to assess the heat and radioactivity conditions of the targets. Results obtained from the code show excellent agreement with the experimental data at beam energies E < 1.2 GeV and differ by up to 25% at higher energies. (author)

  5. Stars with shell energy sources. Part 1. Special evolutionary code

    International Nuclear Information System (INIS)

    Rozyczka, M.

    1977-01-01

    A new version of the Henyey-type stellar evolution code is described and tested. It is shown, as a by-product of the tests, that the thermal time scale of the core of a red giant approaching the helium flash is of the order of the evolutionary time scale. The code itself appears to be a very efficient tool for investigations of the helium flash, carbon flash and the evolution of a white dwarf accreting mass. (author)

  6. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Science.gov (United States)

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…

  7. OSSMETER D3.2 – Report on Source Code Activity Metrics

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and initial prototypes of the tools that are needed for source code activity analysis. It builds upon the Deliverable 3.1 where infra-structure and a domain analysis have been

  8. Open Genetic Code: on open source in the life sciences

    OpenAIRE

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach to genetic engineering. The first ...

  9. Source Code Analysis Laboratory (SCALe) for Energy Delivery Systems

    Science.gov (United States)

    2010-12-01

    technical competence for the type of tests and calibrations SCALe undertakes. Testing and calibration laboratories that comply with ISO/IEC 17025 ... [ISO/IEC 2005]. ... of a software system indicates that the SCALe analysis ... by a CERT secure coding standard. Successful conformance ... to be more secure than non-... systems. However, no study has yet been performed to ... assessment in accordance with ISO/IEC 17000: “a demonstr...

  10. Open Genetic Code : On open source in the life sciences

    NARCIS (Netherlands)

    Deibel, E.

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life

  11. Open Genetic Code: on open source in the life sciences.

    Science.gov (United States)

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach to genetic engineering. The first section discusses the greater flexibility in regard to patenting and the relationship to the introduction of open source in the life sciences. The main argument is that the ownership of knowledge in the life sciences should be reconsidered in the context of the centrality of DNA in informatic formats. This is illustrated by discussing a range of examples of open source models. The second part focuses on open source in synthetic biology as exemplary for the re-materialization of information into food, energy, medicine and so forth. The paper ends by raising the question whether another kind of alternative might be possible: one that looks at open source as a model for an alternative to the commodification of life that is understood as an attempt to comprehensively remove the restrictions from the usage of DNA in any of its formats.

  12. Differential scanning calorimetry (DSC) of semicrystalline polymers.

    Science.gov (United States)

    Schick, C

    2009-11-01

    Differential scanning calorimetry (DSC) is an effective analytical tool to characterize the physical properties of a polymer. DSC enables determination of melting, crystallization, and mesomorphic transition temperatures, and the corresponding enthalpy and entropy changes, and characterization of glass transition and other effects that show either changes in heat capacity or a latent heat. Calorimetry takes a special place among other methods. In addition to its simplicity and universality, the energy characteristics (the heat capacity Cp and its integral over temperature T, the enthalpy H), measured via calorimetry, have a clear physical meaning even though sometimes interpretation may be difficult. With the introduction of differential scanning calorimeters (DSC) in the early 1960s, calorimetry became a standard tool in polymer science. The advantage of DSC compared with other calorimetric techniques lies in the broad dynamic range regarding heating and cooling rates, including isothermal and temperature-modulated operation. Today 12 orders of magnitude in scanning rate can be covered by combining different types of DSCs. Rates as low as 1 µK s⁻¹ are possible and, at the other extreme, heating and cooling at 1 MK s⁻¹ and higher are possible. The broad dynamic range is especially of interest for semicrystalline polymers because they are commonly far from equilibrium and phase transitions are strongly time (rate) dependent. Nevertheless, there are still several unsolved problems regarding calorimetry of polymers. I try to address a few of these, for example determination of baseline heat capacity, which is related to the problem of crystallinity determination by DSC, or the occurrence of multiple melting peaks. Possible solutions by using advanced calorimetric techniques, for example fast scanning and high frequency AC (temperature-modulated) calorimetry, are discussed.
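
    The energy characteristics mentioned above are linked by a temperature integral, and the standard textbook crystallinity estimate from a DSC melting trace (not specific to this paper) uses the same quantities:

    ```latex
    \Delta H \;=\; \int_{T_1}^{T_2} C_p(T)\,\mathrm{d}T,
    \qquad
    X_c \;=\; \frac{\Delta H_m - \Delta H_{cc}}{\Delta H_m^{0}}
    ```

    Here \Delta H_m is the measured melting enthalpy, \Delta H_{cc} any cold-crystallization enthalpy, and \Delta H_m^{0} the melting enthalpy of a 100% crystalline reference; the baseline heat capacity problem raised in the abstract enters through the baseline and integration limits chosen for the integral.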

  13. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded-mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes and the flux is increased by increasing the size of the coded-mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps out around 50 µm. To overcome this challenge, the coded-mask and object are magnified by making the distance from the coded-mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
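
    Purely to illustrate the model-based least squares step (in the paper the forward model encodes the coded mask, system geometry and the measured CG1D source flux; the matrix below is random stand-in data), a regularized least squares reconstruction in Python:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical linear forward model A: rows = detector pixels, cols = voxels.
    n_pix, n_vox = 200, 100
    A = rng.random((n_pix, n_vox))
    x_true = np.zeros(n_vox)
    x_true[40:60] = 1.0                                   # simple test object
    b = A @ x_true + 0.01 * rng.standard_normal(n_pix)    # noisy measurement

    # Tikhonov-regularized least squares: min ||Ax - b||^2 + lam * ||x||^2.
    lam = 1e-2
    x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_vox), A.T @ b)
    rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"relative reconstruction error: {rel_err:.3f}")
    ```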

  14. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks.

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-07-09

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption.
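
    The paper's three schemes are not reproduced here. As background for the μTESLA-style authentication they are compared against, a minimal one-way hash chain sketch (seed and chain length invented), where keys disclosed later are verified by hashing back to an authenticated anchor:

    ```python
    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    # Base station: build a chain k_n -> ... -> k_0; k_0 is the anchor that is
    # assumed to be delivered authentically at deployment time.
    n = 5
    chain = [b"secret-seed"]          # k_n
    for _ in range(n):
        chain.append(h(chain[-1]))
    anchor = chain[-1]                # k_0

    # Keys are disclosed in reverse order; each is authenticated by one hash.
    last_authentic = anchor
    for disclosed in reversed(chain[:-1]):
        assert h(disclosed) == last_authentic, "disclosed key fails verification"
        last_authentic = disclosed
    print("all disclosed keys authenticated against the anchor")
    ```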

  15. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks †

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-01-01

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption. PMID:27409616

  16. Building guide : how to build Xyce from source code.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric Richard; Russo, Thomas V.; Schiek, Richard Louis; Sholander, Peter E.; Thornquist, Heidi K.; Mei, Ting; Verley, Jason C.

    2013-08-01

    While Xyce uses the Autoconf and Automake system to configure builds, it is often necessary to perform more than the customary “./configure” build that many open source users have come to expect. This document describes the steps needed to get Xyce built on a number of common platforms.

  17. Low complexity source and channel coding for mm-wave hybrid fiber-wireless links

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Vegas Olmos, Juan José; Pang, Xiaodan

    2014-01-01

    We report on the performance of channel and source coding applied for an experimentally realized hybrid fiber-wireless W-band link. Error control coding performance is presented for a wireless propagation distance of 3 m and 20 km fiber transmission. We report on peak signal-to-noise ratio perfor...

  18. IMPLICATIONS OF GLOBAL AND LOCAL MOBILITY IN AMORPHOUS EXCIPIENTS AS DETERMINED BY DSC AND TM DSC

    OpenAIRE

    Ion Dranca; Tudor Lupascu

    2009-01-01

    The paper explores the use of differential scanning calorimetry (DSC) and temperature modulated differential scanning calorimetry (TM DSC) to study α- and β-processes in amorphous sucrose and trehalose. The real part of the complex heat capacity is evaluated at frequencies f from 5 to 20 mHz. β-relaxations were studied by annealing glassy samples at different temperatures and subsequently heating at different rates in a differential scanning calorimeter.

  19. IMPLICATIONS OF GLOBAL AND LOCAL MOBILITY IN AMORPHOUS EXCIPIENTS AS DETERMINED BY DSC AND TM DSC

    Directory of Open Access Journals (Sweden)

    Ion Dranca

    2009-12-01

    Full Text Available The paper explores the use of differential scanning calorimetry (DSC) and temperature modulated differential scanning calorimetry (TM DSC) to study α- and β-processes in amorphous sucrose and trehalose. The real part of the complex heat capacity is evaluated at frequencies f from 5 to 20 mHz. β-relaxations were studied by annealing glassy samples at different temperatures and subsequently heating at different rates in a differential scanning calorimeter.

  20. Measurement of commercial explosives with the SC-DSC test [Sangyoyo bakuhayaku no SC-DSC sokutei]

    Energy Technology Data Exchange (ETDEWEB)

    Yabashi, H.; Wada, Y.; Hwang, D.; Akutsu, Y.; Tamura, M.; Yoshida, T. (The University of Tokyo, Tokyo (Japan). Faculty of Engineering); Matsuzawa, T. (Nippon Kayaku Co. Ltd., Tokyo (Japan))

    1991-08-30

    Sealed cell differential scanning calorimetry (SC-DSC) of commercial blasting explosives is introduced. As part of a series of performance tests of commercial blasting explosives, an SC-DSC test was carried out and the resulting critical detonability line was compared with the measured one. From the SC-DSC measurements, the critical dilution rate at which commercial blasting explosives no longer propagate detonation was estimated. As a result, all the explosives except the ANFO type were judged capable of detonation propagation, so the ANFO explosive proved to be a material whose detonability cannot be accurately evaluated by the SC-DSC test. The heat of explosion, calculated with REITP2 in order to infer how the reaction proceeded in the DSC cell, was then compared with the reaction heat measured in the SC-DSC test. As a result, the calculated value was found to be almost equal to or slightly larger than the measured one. 15 refs., 4 figs., 2 tabs.

  1. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.

  2. Automated Source Code Analysis to Identify and Remove Software Security Vulnerabilities: Case Studies on Java Programs

    OpenAIRE

    Natarajan Meghanathan

    2013-01-01

    The high-level contribution of this paper is to illustrate the development of generic solution strategies to remove software security vulnerabilities that could be identified using automated tools for source code analysis on software programs (developed in Java). We use the Source Code Analyzer and Audit Workbench automated tools, developed by HP Fortify Inc., for our testing purposes. We present case studies involving a file writer program embedded with features for password validation, and ...

  3. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.

  4. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    Science.gov (United States)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third

  5. Distributed Remote Vector Gaussian Source Coding for Wireless Acoustic Sensor Networks

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider the problem of remote vector Gaussian source coding for a wireless acoustic sensor network. Each node receives messages from multiple nodes in the network and decodes these messages using its own measurement of the sound field as side information. The node’s measurement...... and the estimates of the source resulting from decoding the received messages are then jointly encoded and transmitted to a neighboring node in the network. We show that for this distributed source coding scenario, one can encode a so-called conditional sufficient statistic of the sources instead of jointly...

  6. Test of Effective Solid Angle code for the efficiency calculation of volume source

    Energy Technology Data Exchange (ETDEWEB)

    Kang, M. Y.; Kim, J. H.; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of); Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    It is hard to determine a full energy (FE) absorption peak efficiency curve for an arbitrary volume source by experiment. That is why simulation and semi-empirical methods have been preferred so far, and much work has progressed in various ways. Moens et al. introduced the concept of the effective solid angle by considering the attenuation of γ-rays in the source, media and detector. This concept is based on a semi-empirical method. An Effective Solid Angle code (ESA code) has been developed over the years by the Applied Nuclear Physics Group at Seoul National University. The ESA code converts an experimental FE efficiency curve determined using a standard point source to that for a volume source. To test the performance of the ESA code, we measured point standard sources and voluminous certified reference material (CRM) γ-ray sources, and compared them with the efficiency curves obtained in this study. The 200-1500 keV energy region is fitted well. NIST X-ray mass attenuation coefficient data are currently used to check for the effect of linear attenuation only. In the future we will use interaction cross-section data obtained from the XCOM code to check each contributing factor, such as the photoelectric effect, incoherent scattering and coherent scattering. In order to minimize the calculation time and simplify the code, optimization of the algorithm is needed.
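
    As a sketch of the geometric core of the effective solid angle idea (the attenuation in source, media and detector that Moens et al.'s formulation adds is deliberately omitted, and the geometry is invented), a Monte Carlo estimate of the average solid-angle fraction for a cylindrical source above a circular detector face:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    R_d, R_s, H, z0 = 2.5, 1.5, 3.0, 1.0   # cm: detector/source radii, height, gap
    N = 200_000

    # Uniform sampling of emission points inside the cylindrical source.
    r = R_s * np.sqrt(rng.random(N))
    phi = 2 * np.pi * rng.random(N)
    x, y = r * np.cos(phi), r * np.sin(phi)
    z = z0 + H * rng.random(N)

    # Isotropic emission; keep directions heading toward the detector plane z = 0.
    costh = 2 * rng.random(N) - 1
    psi = 2 * np.pi * rng.random(N)
    down = costh < -1e-9
    t = -z[down] / costh[down]              # path length to the plane z = 0
    sinth = np.sqrt(1 - costh[down] ** 2)
    xh = x[down] + t * sinth * np.cos(psi[down])
    yh = y[down] + t * sinth * np.sin(psi[down])
    hits = (xh ** 2 + yh ** 2) <= R_d ** 2
    print("Omega / 4pi ~", hits.sum() / N)  # average geometric efficiency
    ```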

  7. RETRANS - A tool to verify the functional equivalence of automatically generated source code with its specification

    International Nuclear Information System (INIS)

    Miedl, H.

    1998-01-01

    Following the applicable technical standards (e.g. IEC 880), it is necessary to verify each step in the development process of safety-critical software. This also holds for the verification of automatically generated source code. To avoid human errors during this verification step and to limit the cost, a tool should be used which is developed independently from the development of the code generator. For this purpose ISTec has developed the tool RETRANS, which demonstrates the functional equivalence of automatically generated source code with its underlying specification. (author)

  8. Use of source term code package in the ELEBRA MX-850 system

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-12-01

    The implementation of the source term code package on the ELEBRA MX-850 system is presented. The source term is formed when radioactive materials generated in the nuclear fuel leak toward the containment and the environment external to the reactor containment. The version implemented on the ELEBRA system is composed of five codes: MARCH 3, TRAPMELT 3, THCCA, VANESA and NAVA. The original example case was used. The example consists of a small LOCA accident in a PWR-type reactor. A sensitivity study for the TRAPMELT 3 code was carried out, modifying the 'TIME STEP' to estimate the CPU processing time for executing the original example case. (M.C.K.) [pt]

  9. Eu-NORSEWInD - Assessment of Viability of Open Source CFD Code for the Wind Industry

    DEFF Research Database (Denmark)

    Stickland, Matt; Scanlon, Tom; Fabre, Sylvie

    2009-01-01

    Part of the overall NORSEWInD project is the use of LiDAR remote sensing (RS) systems mounted on offshore platforms to measure wind velocity profiles at a number of locations offshore. The data acquired from the offshore RS measurements will be fed into a large and novel wind speed dataset suitab...... between the results of simulations created by the commercial code FLUENT and the open source code OpenFOAM. An assessment of the ease with which the open source code can be used is also included....

  10. An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem

    Directory of Open Access Journals (Sweden)

    Tu Zhenyu

    2005-01-01

    Full Text Available A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty in the proposed scheme is the introduction of a syndrome former (SF) in the source encoder and an inverse syndrome former (ISF) in the source decoder to efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallel and serial concatenated codes, and particularly parallel and serial turbo codes where this appears less obvious, an efficient way of constructing linear-complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provably optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.
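
    A minimal sketch of the SF-ISF binning idea with a (7,4) Hamming code standing in for the turbo codes of the paper: the encoder transmits only the 3-bit syndrome of a 7-bit source word, and the decoder combines it with side information assumed to differ from the source word in at most one position.

    ```python
    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code; column j reads j in binary.
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def syndrome(v):
        return H @ v % 2

    x = np.array([1, 0, 1, 1, 0, 0, 1])   # source word at the encoder
    y = x.copy()
    y[4] ^= 1                             # side information: one bit flipped

    s = syndrome(x)                       # SF: 3 bits sent instead of 7

    # ISF + decoding: the difference pattern e = x XOR y has syndrome s XOR Hy;
    # for a weight-1 e, that syndrome spells the flipped position in binary.
    diff = (s + syndrome(y)) % 2
    e = np.zeros(7, dtype=int)
    if diff.any():
        pos = int("".join(map(str, diff[::-1])), 2)   # 1-based bit position
        e[pos - 1] = 1
    x_hat = (y + e) % 2
    assert (x_hat == x).all()
    print("recovered source word:", x_hat)
    ```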

  11. Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.

    Science.gov (United States)

    Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile

    2016-01-01

    This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues with building full-text search queries from the combinations of these words. The queries are then run against all the ICD-10 codes until the code in question is returned as the match with the highest relative score. This method identifies the minimum number of words that must be provided in order for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
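
    A toy version of the minimum-word search described above; the three ICD-10 codes are real entries but their descriptions are shortened, and the scorer is a simplified stand-in for a full-text engine:

    ```python
    from itertools import combinations

    codes = {
        "J45": "asthma",
        "J44": "other chronic obstructive pulmonary disease",
        "I10": "essential primary hypertension",
    }

    def top_match(query_words):
        # Score = number of query words present in the code's description.
        score = lambda c: sum(w in codes[c].split() for w in query_words)
        ranked = sorted(codes, key=score, reverse=True)
        # The match counts only if it strictly beats the runner-up.
        return ranked[0] if score(ranked[0]) > score(ranked[1]) else None

    target = "I10"
    words = codes[target].split()
    for k in range(1, len(words) + 1):
        if any(top_match(combo) == target for combo in combinations(words, k)):
            print(f"{k} word(s) suffice to single out {target}")
            break
    ```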

  12. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories

  13. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories.

  14. Lysimeter data as input to performance assessment source term codes

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.; Rogers, R.D.; Sullivan, T.

    1992-01-01

    The Field Lysimeter Investigation: Low-Level Waste Data Base Development Program is obtaining information on the performance of radioactive waste in a disposal environment. Waste forms fabricated using ion-exchange resins from EPICOR-II prefilters employed in the cleanup of the Three Mile Island (TMI) Nuclear Power Station are being tested to develop a low-level waste data base and to obtain information on survivability of waste forms in a disposal environment. In this paper, radionuclide releases from waste forms in the first seven years of sampling are presented and discussed. Application of lysimeter data to be used in performance assessment source term models is presented. Initial results from use of data in two models are discussed.

  15. SCATTER: Source and Transport of Emplaced Radionuclides: Code documentation

    International Nuclear Information System (INIS)

    Longsine, D.E.

    1987-03-01

    SCATTER simulates several processes leading to the release of radionuclides to the site subsystem and then simulates transport via the groundwater of the released radionuclides to the biosphere. The processes accounted for in quantifying release rates to a ground-water migration path include radioactive decay and production, leaching, solubilities, and the mixing of particles with incoming uncontaminated fluid. Several decay chains of arbitrary length can be considered simultaneously. The release rates then serve as source rates to a numerical technique which solves convective-dispersive transport for each decay chain. The decay chains are allowed to have branches and each member can have a different retardation factor. Results are cast as radionuclide discharge rates to the accessible environment.

  16. An efficient chaotic source coding scheme with variable-length blocks

    International Nuclear Information System (INIS)

    Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong

    2011-01-01

    An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding. (general)
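
    The link between a piecewise linear chaotic map and entropy coding can be sketched as plain arithmetic coding on floats (the source statistics and the short message below are invented; a real implementation works in fixed precision within the register length, as the abstract notes):

    ```python
    probs = {"a": 0.5, "b": 0.3, "c": 0.2}     # assumed symbol probabilities
    cum, lo = {}, 0.0
    for sym, p in probs.items():
        cum[sym] = (lo, lo + p)                # subinterval of [0, 1) per symbol
        lo += p

    def encode(msg):
        lo, hi = 0.0, 1.0
        for s in msg:                          # nest intervals, arithmetic-coding style
            a, b = cum[s]
            lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b
        return (lo + hi) / 2                   # any point of the final interval

    def decode(x, n):
        # Decoding iterates the piecewise linear map T(x) = (x - a) / (b - a):
        # the subinterval containing the trajectory point reveals each symbol.
        out = []
        for _ in range(n):
            for s, (a, b) in cum.items():
                if a <= x < b:
                    out.append(s)
                    x = (x - a) / (b - a)
                    break
        return "".join(out)

    msg = "abacab"
    assert decode(encode(msg), len(msg)) == msg
    print("round trip ok")
    ```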

  17. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization.

    Science.gov (United States)

    Yang, Xinyu; Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

    Authorship attribution is the task of identifying the most likely author of a given sample among a set of candidate known authors. It can not only be applied to discover the original author of plain text, such as novels, blogs, emails, posts, etc., but can also be used to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to solving authorship disputes or software plagiarism detection. This paper aims to propose a new method to identify the programmer of Java source code samples with a higher accuracy. To this end, it first introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, 19 dimensions in total. These metrics are then input to the neural network for supervised learning, the weights of which are produced by the hybrid PSO-BP algorithm. The effectiveness of the proposed method is evaluated on a collected dataset with 3,022 Java files belonging to 40 authors. Experiment results show that the proposed method achieves 91.060% accuracy. A comparison with previous work on authorship attribution of source code for the Java language illustrates that the proposed method outperforms the others overall, also with an acceptable overhead.
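
    The abstract does not enumerate the 19 metrics; as an illustration of the lexical/layout end of such a feature vector (the metric names below are invented), a sketch over a small Java snippet:

    ```python
    def layout_features(source: str):
        lines = source.splitlines() or [""]
        nonblank = [l for l in lines if l.strip()]
        nb = max(len(nonblank), 1)
        return {
            "mean_line_len": sum(map(len, nonblank)) / nb,
            "blank_ratio": 1 - len(nonblank) / max(len(lines), 1),
            "comment_ratio": sum(l.strip().startswith("//") for l in nonblank) / nb,
            "brace_own_line": sum(l.strip() == "{" for l in nonblank) / nb,
            "tab_indented": sum(l.startswith("\t") for l in lines) / max(len(lines), 1),
        }

    java_src = "public class A {\n\t// example\n\tint f() {\n\t\treturn 1;\n\t}\n}\n"
    print(layout_features(java_src))
    # Vectors like this, extended with structure and syntax metrics, are what
    # the PSO-trained back propagation network consumes for classification.
    ```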

  18. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Directory of Open Access Journals (Sweden)

    Pierre Siohan

    2005-05-01

    Full Text Available Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  19. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Science.gov (United States)

    Guillemot, Christine; Siohan, Pierre

    2005-12-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  20. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  1. Comparison of DT neutron production codes MCUNED, ENEA-JSI source subroutine and DDT

    Energy Technology Data Exchange (ETDEWEB)

    Čufar, Aljaž, E-mail: aljaz.cufar@ijs.si [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Lengar, Igor; Kodeli, Ivan [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Milocco, Alberto [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Sauvan, Patrick [Departamento de Ingeniería Energética, E.T.S. Ingenieros Industriales, UNED, C/Juan del Rosal 12, 28040 Madrid (Spain); Conroy, Sean [VR Association, Uppsala University, Department of Physics and Astronomy, PO Box 516, SE-75120 Uppsala (Sweden); Snoj, Luka [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)

    2016-11-01

    Highlights: • Results of three codes capable of simulating the accelerator based DT neutron generators were compared on a simple model where only a thin target made of a mixture of titanium and tritium is present. Two typical deuteron beam energies, 100 keV and 250 keV, were used in the comparison. • Comparisons of the angular dependence of the total neutron flux and spectrum as well as the neutron spectrum of all the neutrons emitted from the target show general agreement of the results but also some noticeable differences. • A comparison of figures of merit of the calculations using different codes showed that the computational time necessary to achieve the same statistical uncertainty can vary by more than a factor of 30 when different codes for the simulation of the DT neutron generator are used. - Abstract: As the DT fusion reaction produces neutrons with energies significantly higher than in fission reactors, special fusion-relevant benchmark experiments are often performed using DT neutron generators. However, commonly used Monte Carlo particle transport codes such as MCNP or TRIPOLI cannot be directly used to analyze these experiments since they do not have the capabilities to model the production of DT neutrons. Three of the available approaches to model the DT neutron generator source are the MCUNED code, the ENEA-JSI DT source subroutine and the DDT code. The MCUNED code is an extension of the well-established and validated MCNPX Monte Carlo code. The ENEA-JSI source subroutine was originally prepared for the modelling of the FNG experiments using different versions of the MCNP code (−4, −5, −X) and was later extended to allow the modelling of both DT and DD neutron sources. The DDT code prepares the DT source definition file (SDEF card in MCNP) which can then be used in different versions of the MCNP code. In the paper the methods for the simulation of the DT neutron production used in the codes are briefly described and compared for the case of a

  2. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes

    International Nuclear Information System (INIS)

    Etienne, Zachariah B; Paschalidis, Vasileios; Haas, Roland; Mösta, Philipp; Shapiro, Stuart L

    2015-01-01

    In the extreme violence of merger and mass accretion, compact objects like black holes and neutron stars are thought to launch some of the most luminous outbursts of electromagnetic and gravitational wave energy in the Universe. Modeling these systems realistically is a central problem in theoretical astrophysics, but has proven extremely challenging, requiring the development of numerical relativity codes that solve Einstein's equations for the spacetime, coupled to the equations of general relativistic (ideal) magnetohydrodynamics (GRMHD) for the magnetized fluids. Over the past decade, the Illinois numerical relativity (ILNR) group's dynamical spacetime GRMHD code has proven itself as a robust and reliable tool for theoretical modeling of such GRMHD phenomena. However, the code was written ‘by experts and for experts’ of the code, with a steep learning curve that would severely hinder community adoption if it were open-sourced. Here we present IllinoisGRMHD, which is an open-source, highly extensible rewrite of the original closed-source GRMHD code of the ILNR group. Reducing the learning curve was the primary focus of this rewrite, with the goal of facilitating community involvement in the code's use and development, as well as the minimization of human effort in generating new science. IllinoisGRMHD also saves computer time, generating roundoff-precision identical output to the original code on adaptive-mesh grids, but nearly twice as fast at scales of hundreds to thousands of cores. (paper)

  3. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Fossorier Marc

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  4. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Marc Fossorier

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  5. Revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources

    International Nuclear Information System (INIS)

    Wheatley, J. S.

    2004-01-01

    The revised Code of Conduct on the Safety and Security of Radioactive Sources is aimed primarily at Governments, with the objective of achieving and maintaining a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations; and through the fostering of international co-operation. It focuses on sealed radioactive sources and provides guidance on legislation, regulations and the regulatory body, and import/export controls. Nuclear materials (except for sources containing 239Pu), as defined in the Convention on the Physical Protection of Nuclear Materials, are not covered by the revised Code, nor are radioactive sources within military or defence programmes. An earlier version of the Code was published by IAEA in 2001. At that time, agreement was not reached on a number of issues, notably those relating to the creation of comprehensive national registries for radioactive sources, obligations of States exporting radioactive sources, and the possibility of unilateral declarations of support. The need to further consider these and other issues was highlighted by the events of 11th September 2001. Since then, the IAEA's Secretariat has been working closely with Member States and relevant International Organizations to achieve consensus. The text of the revised Code was finalized at a meeting of technical and legal experts in August 2003, and it was submitted to IAEA's Board of Governors for approval in September 2003, with a recommendation that the IAEA General Conference adopt it and encourage its wide implementation. The IAEA General Conference, in September 2003, endorsed the revised Code and urged States to work towards following the guidance contained within it. This paper summarizes the history behind the revised Code, its content and the outcome of the discussions within the IAEA Board of Governors and General Conference. (Author) 8 refs

  6. Development of Coupled Interface System between the FADAS Code and a Source-term Evaluation Code XSOR for CANDU Reactors

    International Nuclear Information System (INIS)

    Son, Han Seong; Song, Deok Yong; Kim, Ma Woong; Shin, Hyeong Ki; Lee, Sang Kyu; Kim, Hyun Koon

    2006-01-01

    An accident prevention system is essential to the industrial security of the nuclear industry. Thus, a more effective accident prevention system will be helpful to promote safety culture as well as to acquire public acceptance of the nuclear power industry. The FADAS (Following Accident Dose Assessment System), which is a part of the Computerized Advisory System for a Radiological Emergency (CARE) system in KINS, is used for prevention against nuclear accidents. In order to make the FADAS system more effective for CANDU reactors, it is necessary to develop various accident scenarios and a reliable database of source terms. This study introduces the construction of the coupled interface system between the FADAS and the source-term evaluation code, aimed at improving the applicability of the CANDU Integrated Safety Analysis System (CISAS) for CANDU reactors.

  7. Joint source/channel coding of scalable video over noisy channels

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, G.; Zakhor, A. [Department of Electrical Engineering and Computer Sciences University of California Berkeley, California94720 (United States)

    1997-01-01

    We propose an optimal bit allocation strategy for a joint source/channel video codec over a noisy channel when the channel state is assumed to be known. Our approach is to partition source and channel coding bits in such a way that the expected distortion is minimized. The particular source coding algorithm we use is rate scalable and is based on 3D subband coding with multi-rate quantization. We show that using this strategy, transmission of video over very noisy channels still renders acceptable visual quality, and outperforms schemes that use equal error protection only. The flexibility of the algorithm also permits the bit allocation to be selected optimally when the channel state is given as a probability distribution instead of a deterministic state. © 1997 American Institute of Physics.
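
    A minimal Python sketch of the bit-partitioning idea, assuming a toy exponential rate-distortion curve and a toy residual-loss model for the channel code (neither is the paper's actual codec or channel model):

        def expected_distortion(source_bits, channel_bits, epsilon=0.1):
            # toy models: exponential R-D curve, residual loss shrinking with protection
            d_source = 2.0 ** (-2.0 * source_bits / 64.0)
            p_loss = epsilon ** (1.0 + channel_bits / 16.0)
            d_max = 1.0                      # distortion when the block is lost
            return (1.0 - p_loss) * d_source + p_loss * d_max

        def best_allocation(total_bits, step=8):
            pairs = [(s, total_bits - s) for s in range(0, total_bits + 1, step)]
            return min(pairs, key=lambda sc: expected_distortion(*sc))

        s, c = best_allocation(256)
        print(f"source bits = {s}, channel bits = {c}, "
              f"expected distortion = {expected_distortion(s, c):.4f}")

    Sweeping epsilon shows the expected behaviour: as the channel worsens, the optimum shifts bits from source coding to protection.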

  8. Remodularizing Java Programs for Improved Locality of Feature Implementations in Source Code

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2011-01-01

    Explicit traceability between features and source code is known to help programmers understand and modify programs during maintenance tasks. However, the complex relations between features and their implementations are not evident from the source code of object-oriented Java programs. Consequently, the implementations of individual features are difficult to locate, comprehend, and modify in isolation. In this paper, we present a novel remodularization approach that improves the representation of features in the source code of Java programs. Both forward and reverse restructurings are supported through on-demand bidirectional restructuring between feature-oriented and object-oriented decompositions. The approach includes a feature location phase based on tracing program execution, and a feature representation phase that reallocates classes into a new package structure based on single...

  9. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    Anon.

    2001-01-01

    The objective of the code of conduct is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost. (N.C.)

  10. Documentation for grants equal to tax model: Volume 3, Source code

    International Nuclear Information System (INIS)

    Boryczka, M.K.

    1986-01-01

    The GETT model is capable of forecasting the amount of tax liability associated with all property owned and all activities undertaken by the US Department of Energy (DOE) in site characterization and repository development. The GETT program is a user-friendly, menu-driven model developed using dBASE III™, a relational data base management system. The data base for GETT consists primarily of eight separate dBASE III™ files corresponding to each of the eight taxes (real property, personal property, corporate income, franchise, sales, use, severance, and excise) levied by State and local jurisdictions on business property and activity. Additional smaller files help to control model inputs and reporting options. Volume 3 of the GETT model documentation is the source code. The code is arranged primarily by the eight tax types. Other code files include those for JURISDICTION, SIMULATION, VALIDATION, TAXES, CHANGES, REPORTS, GILOT, and GETT. The code has been verified through hand calculations

  11. WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection

    Directory of Open Access Journals (Sweden)

    Deqiang Fu

    2017-01-01

    Full Text Available In this paper, we introduce a source code plagiarism detection method, named WASTK (Weighted Abstract Syntax Tree Kernel, for computer science education. Different from other plagiarism detection methods, WASTK takes some aspects other than the similarity between programs into account. WASTK firstly transfers the source code of a program to an abstract syntax tree and then gets the similarity by calculating the tree kernel of two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency in the field of information retrieval is applied. Each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods like Sim and JPlag.
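
    To illustrate the weighting idea, here is a simplified Python sketch: it compares TF-IDF-weighted bags of AST node types by cosine similarity, a deliberate simplification of the paper's tree kernel, which operates on subtrees rather than individual node counts:

        import ast
        import math
        from collections import Counter

        def node_types(source):
            # multiset of AST node-type names for one program
            return Counter(type(n).__name__ for n in ast.walk(ast.parse(source)))

        def similarity(src_a, src_b, corpus):
            docs = [node_types(s) for s in corpus]
            n = len(docs)
            # IDF over the corpus: node types common to every program get low weight
            idf = {t: math.log(n / sum(1 for d in docs if t in d))
                   for d in docs for t in d}
            va = {t: c * idf.get(t, 0.0) for t, c in node_types(src_a).items()}
            vb = {t: c * idf.get(t, 0.0) for t, c in node_types(src_b).items()}
            dot = sum(va[t] * vb.get(t, 0.0) for t in va)
            na = math.sqrt(sum(v * v for v in va.values()))
            nb = math.sqrt(sum(v * v for v in vb.values()))
            return dot / (na * nb) if na and nb else 0.0

        corpus = ["print('hi')", "x = [i * i for i in range(10)]", "def f(a): return a + 1"]
        print(similarity(corpus[0], corpus[1], corpus))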

  12. Rascal: A domain specific language for source code analysis and manipulation

    NARCIS (Netherlands)

    P. Klint (Paul); T. van der Storm (Tijs); J.J. Vinju (Jurgen); A. Walenstein; S. Schuppe

    2009-01-01

    Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This

  13. RASCAL : a domain specific language for source code analysis and manipulation

    NARCIS (Netherlands)

    Klint, P.; Storm, van der T.; Vinju, J.J.

    2009-01-01

    Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This impedance

  14. From system requirements to source code: transitions in UML and RUP

    Directory of Open Access Journals (Sweden)

    Stanisław Wrycza

    2011-06-01

    Full Text Available Many UML-related books are manuals explaining the language specification. Only some of them concentrate on the practical aspects of using the UML language effectively with CASE tools and RUP. The current paper presents transitions from a system requirements specification to structural source code, useful when developing an information system.

  15. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    Science.gov (United States)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation and, as a result, are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be
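
    For readers unfamiliar with the Cummins formulation, here is a minimal one-degree-of-freedom sketch in Python (WEC-Sim itself is MATLAB/SIMULINK, and the mass, stiffness, radiation kernel, and excitation below are hypothetical numbers, not a real device):

        import numpy as np

        dt, n = 0.01, 5000
        m, A_inf, k = 1.0e4, 2.0e3, 5.0e4          # mass, added mass at infinity, stiffness
        t = np.arange(n) * dt
        K = 800.0 * np.exp(-t) * np.cos(2.0 * t)   # toy radiation impulse-response kernel
        F = 2.0e3 * np.sin(0.8 * t)                # toy monochromatic wave excitation

        x = np.zeros(n)
        v = np.zeros(n)
        for i in range(n - 1):
            # rectangle-rule evaluation of the radiation convolution integral
            conv = dt * np.sum(K[: i + 1][::-1] * v[: i + 1]) if i else 0.0
            a = (F[i] - conv - k * x[i]) / (m + A_inf)
            v[i + 1] = v[i] + a * dt               # semi-implicit Euler step
            x[i + 1] = x[i] + v[i + 1] * dt

        print(f"late-time displacement amplitude ≈ {np.abs(x[-1000:]).max():.3e} m")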

  16. Time-dependent anisotropic external sources in transient 3-D transport code TORT-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    This paper describes the implementation of a time-dependent distributed external source in TORT-TD by explicitly considering the external source in the "fixed-source" term of the implicitly time-discretised 3-D discrete ordinates transport equation. Anisotropy of the external source is represented by a spherical harmonics series expansion similar to the angular fluxes. The YALINA-Thermal subcritical assembly serves as a test case. The configuration with 280 fuel rods has been analysed with TORT-TD using cross sections in 18 energy groups and P1 scattering order generated by the KAPROS code system. Good agreement is achieved concerning the multiplication factor. The response of the system to an artificial time-dependent source consisting of two square-wave pulses demonstrates the time-dependent external source capability of TORT-TD. The result is physically plausible as judged from validation calculations. (orig.)
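
    As an illustration of the series-expansion idea, restricted to a 1-D angular variable (the actual code uses spherical harmonics, and the source below is hypothetical), Legendre moments of an anisotropic source can be computed as in this Python sketch:

        import numpy as np
        from numpy.polynomial.legendre import leggauss, legval

        def legendre_moments(Q, order):
            # q_l = integral of Q(mu) * P_l(mu) over [-1, 1], by Gauss-Legendre quadrature
            mu, w = leggauss(64)
            moments = []
            for l in range(order + 1):
                coeffs = np.zeros(l + 1)
                coeffs[l] = 1.0                    # coefficient vector selecting P_l
                moments.append(float(np.sum(w * Q(mu) * legval(mu, coeffs))))
            return moments

        Q = lambda mu: np.exp(2.0 * mu)            # hypothetical forward-peaked source
        for l, q in enumerate(legendre_moments(Q, 3)):
            print(f"q_{l} = {q:.4f}")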

  17. Coded moderator approach for fast neutron source detection and localization at standoff

    Energy Technology Data Exchange (ETDEWEB)

    Littell, Jennifer [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Lukosi, Eric, E-mail: elukosi@utk.edu [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Institute for Nuclear Security, University of Tennessee, 1640 Cumberland Avenue, Knoxville, TN 37996 (United States); Hayward, Jason; Milburn, Robert; Rowan, Allen [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States)

    2015-06-01

    Considering the need for directional sensing at standoff for some security applications, and scenarios where a neutron source may be shielded by high-Z material that nearly eliminates the source gamma flux, this work investigates the feasibility of using thermal-neutron-sensitive boron straw detectors for fast neutron source detection and localization. We utilized MCNPX simulations to demonstrate that, by surrounding the boron straw detectors with an HDPE coded moderator, a source-detector orientation-specific response enables potential 1D source localization in a high neutron detection efficiency design. An initial test algorithm has been developed to confirm the viability of this detector system's localization capabilities, which resulted in identification of a 1 MeV neutron source with a strength equivalent to 8 kg WGPu at 50 m standoff within ±11°.

  18. Differential scanning calorimetry (DSC) and temperature-modulated DSC study of three mouthguard materials.

    Science.gov (United States)

    Meng, Frank H; Schricker, Scott R; Brantley, William A; Mendel, Deborah A; Rashid, Robert G; Fields, Henry W; Vig, Katherine W L; Alapati, Satish B

    2007-12-01

    Employ differential scanning calorimetry (DSC) and temperature-modulated DSC (TMDSC) to investigate thermal transformations in three mouthguard materials and provide insight into their previously investigated energy absorption. Samples (13-21mg) were obtained from (a) conventional ethylene vinyl acetate (EVA), (b) Pro-form, another EVA polymer, and (c) PolyShok, an EVA polymer containing polyurethane. Conventional DSC (n=5) was first performed from -80 to 150 degrees C at a heating rate of 10 degrees C/min to determine the temperature range for structural transformations. Subsequently, TMDSC (n=5) was performed from -20 to 150 degrees C at a heating rate of 1 degrees C/min. Onset and peak temperatures were compared using ANOVA and the Tukey-Kramer HSD test. Other samples were coated with a gold-palladium film and examined with an SEM. DSC and TMDSC curves were similar for both conventional EVA and Pro-form, showing two endothermic peaks suggestive of melting processes, with crystallization after the higher-temperature peak. Evidence for crystallization and the second endothermic peak were much less prominent for PolyShok, which had no peaks associated with the polyurethane constituent. The onset of the lower-temperature endothermic transformation is near body temperature. No glass transitions were observed in the materials. SEM examination revealed different surface morphology and possible cushioning effect for PolyShok, compared to Pro-form and EVA. The difference in thermal behavior for PolyShok is tentatively attributed to disruption of EVA crystal formation, which may contribute to its superior impact resistance. The lower-temperature endothermic peak suggests that impact testing of these materials should be performed at 37 degrees C.

  19. Uncertainties in source term calculations generated by the ORIGEN2 computer code for Hanford Production Reactors

    International Nuclear Information System (INIS)

    Heeb, C.M.

    1991-03-01

    The ORIGEN2 computer code is the primary calculational tool for computing isotopic source terms for the Hanford Environmental Dose Reconstruction (HEDR) Project. The ORIGEN2 code computes the amounts of radionuclides that are created or remain in spent nuclear fuel after neutron irradiation and radioactive decay have occurred as a result of nuclear reactor operation. ORIGEN2 was chosen as the primary code for these calculations because it is widely used and accepted by the nuclear industry, both in the United States and the rest of the world. Its comprehensive library of over 1,600 nuclides includes any possible isotope of interest to the HEDR Project. It is important to evaluate the uncertainties expected from use of ORIGEN2 in the HEDR Project because these uncertainties may have a pivotal impact on the final accuracy and credibility of the results of the project. There are three primary sources of uncertainty in an ORIGEN2 calculation: basic nuclear data uncertainty in neutron cross sections, radioactive decay constants, energy per fission, and fission product yields; calculational uncertainty due to input data; and code uncertainties (i.e., numerical approximations, and neutron spectrum-averaged cross-section values from the code library). 15 refs., 5 figs., 5 tabs

  20. Code of practice for the use of sealed radioactive sources in borehole logging (1998)

    International Nuclear Information System (INIS)

    1989-12-01

    The purpose of this code is to establish working practices, procedures and protective measures which will aid in keeping doses, arising from the use of borehole logging equipment containing sealed radioactive sources, as low as reasonably achievable, and to ensure that the dose-equivalent limits specified in the National Health and Medical Research Council's radiation protection standards are not exceeded. This code applies to all situations and practices where a sealed radioactive source or sources are used in wireline logging for investigating the physical properties of the geological sequence, or any fluids contained in the geological sequence, or the properties of the borehole itself, whether casing, mudcake or borehole fluids. The radiation protection standards specify dose-equivalent limits for two categories: radiation workers and members of the public. 3 refs., tabs., ills

  1. Experimental benchmark of the NINJA code for application to the Linac4 H- ion source plasma

    Science.gov (United States)

    Briefi, S.; Mattei, S.; Rauner, D.; Lettry, J.; Tran, M. Q.; Fantz, U.

    2017-10-01

    For a dedicated performance optimization of negative hydrogen ion sources applied at particle accelerators, a detailed assessment of the plasma processes is required. Due to the compact design of these sources, diagnostic access is typically limited to optical emission spectroscopy yielding only line-of-sight integrated results. In order to allow for a spatially resolved investigation, the electromagnetic particle-in-cell Monte Carlo collision code NINJA has been developed for the Linac4 ion source at CERN. This code considers the RF field generated by the ICP coil as well as the external static magnetic fields and calculates self-consistently the resulting discharge properties. NINJA is benchmarked at the diagnostically well accessible lab experiment CHARLIE (Concept studies for Helicon Assisted RF Low pressure Ion sourcEs) at varying RF power and gas pressure. A good general agreement is observed between experiment and simulation although the simulated electron density trends for varying pressure and power as well as the absolute electron temperature values deviate slightly from the measured ones. This can be explained by the assumption of strong inductive coupling in NINJA, whereas the CHARLIE discharges show the characteristics of loosely coupled plasmas. For the Linac4 plasma, this assumption is valid. Accordingly, both the absolute values of the accessible plasma parameters and their trends for varying RF power agree well in measurement and simulation. At varying RF power, the H- current extracted from the Linac4 source peaks at 40 kW. For volume operation, this is perfectly reflected by assessing the processes in front of the extraction aperture based on the simulation results where the highest H- density is obtained for the same power level. In surface operation, the production of negative hydrogen ions at the converter surface can only be considered by specialized beam formation codes, which require plasma parameters as input. It has been demonstrated that

  2. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    Full Text Available In the past few years, a large number of techniques have been proposed to identify whether multimedia content has been illegally tampered with. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially because of the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.
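
    A Python sketch of the hash-generation front end, assuming illustrative frame sizes and a shared random seed; the LDPC syndrome-coding and distributed decoding stages described above are omitted:

        import numpy as np

        def audio_hash(samples, n_fft=256, n_proj=32, seed=7):
            # coarse magnitude spectrogram over non-overlapping windowed frames
            n_frames = len(samples) // n_fft
            frames = samples[: n_frames * n_fft].reshape(n_frames, n_fft)
            spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1))
            feat = spec.ravel()
            # random projections; provider and user share the seed
            rng = np.random.default_rng(seed)
            P = rng.standard_normal((n_proj, feat.size))
            proj = P @ feat
            return (proj > np.median(proj)).astype(np.uint8)   # 1-bit quantization

        x = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000.0)  # 1 s of a 440 Hz tone
        print(audio_hash(x))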

  3. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    Science.gov (United States)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
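
    A minimal numpy sketch of a replicator network, assuming toy layer sizes and synthetic data (a 1-D curve embedded in 3-D): a multilayer perceptron with three hidden layers trained to reproduce its input, so the narrow middle layer is forced to learn low-dimensional coordinates for the data manifold:

        import numpy as np

        rng = np.random.default_rng(0)
        sizes = [3, 16, 1, 16, 3]                 # bottleneck of width 1 (assumed)
        W = [rng.standard_normal((a, b)) * 0.3 for a, b in zip(sizes[:-1], sizes[1:])]
        b = [np.zeros(s) for s in sizes[1:]]

        # toy source: a 1-D curve embedded in 3-D, plus small noise
        t = rng.uniform(-1, 1, (512, 1))
        X = np.hstack([t, t**2, np.sin(3 * t)]) + 0.01 * rng.standard_normal((512, 3))

        lr = 0.05
        for epoch in range(2000):
            # forward pass: tanh on hidden layers, linear output
            acts = [X]
            for i in range(len(W)):
                z = acts[-1] @ W[i] + b[i]
                acts.append(z if i == len(W) - 1 else np.tanh(z))
            err = acts[-1] - X                     # reconstruction error
            # backward pass (plain gradient descent on mean squared error)
            delta = err / len(X)
            for i in reversed(range(len(W))):
                gW = acts[i].T @ delta
                gb = delta.sum(axis=0)
                if i > 0:
                    delta = (delta @ W[i].T) * (1 - acts[i] ** 2)  # tanh derivative
                W[i] -= lr * gW
                b[i] -= lr * gb

        print("final reconstruction MSE:", float((err ** 2).mean()))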

  4. SOURCES-3A: A code for calculating (α, n), spontaneous fission, and delayed neutron sources and spectra

    International Nuclear Information System (INIS)

    Perry, R.T.; Wilson, W.B.; Charlton, W.S.

    1998-04-01

    In many systems, it is imperative to have accurate knowledge of all significant sources of neutrons due to the decay of radionuclides. These sources can include neutrons resulting from the spontaneous fission of actinides, the interaction of actinide decay α-particles in (α,n) reactions with low- or medium-Z nuclides, and/or delayed neutrons from the fission products of actinides. Numerous systems exist in which these neutron sources could be important. These include, but are not limited to, clean and spent nuclear fuel (UO2, ThO2, MOX, etc.), enrichment plant operations (UF6, PuF4, etc.), waste tank studies, waste products in borosilicate glass or glass-ceramic mixtures, and weapons-grade plutonium in storage containers. SOURCES-3A is a computer code that determines neutron production rates and spectra from (α,n) reactions, spontaneous fission, and delayed neutron emission due to the decay of radionuclides in homogeneous media (i.e., a mixture of α-emitting source material and low-Z target material) and in interface problems (i.e., a slab of α-emitting source material in contact with a slab of low-Z target material). The code is also capable of calculating the neutron production rates due to (α,n) reactions induced by a monoenergetic beam of α-particles incident on a slab of target material. Spontaneous fission spectra are calculated with evaluated half-life, spontaneous fission branching, and Watt spectrum parameters for 43 actinides. The (α,n) spectra are calculated using an assumed isotropic angular distribution in the center-of-mass system with a library of 89 nuclide decay α-particle spectra, 24 sets of measured and/or evaluated (α,n) cross sections and product nuclide level branching fractions, and functional α-particle stopping cross sections for Z < 106. The delayed neutron spectra are taken from an evaluated library of 105 precursors. The code outputs the magnitude and spectra of the resultant neutron source. It also provides an
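
    The Watt spectrum mentioned above has the closed form χ(E) ∝ exp(-E/a)·sinh(√(bE)); a short Python sketch evaluates it on an energy grid (the (a, b) values are placeholders, not the code's evaluated library data):

        import numpy as np

        a, b = 1.0, 3.0                       # Watt parameters, MeV units (placeholders)
        E = np.linspace(1e-3, 15.0, 20000)    # neutron energy grid, MeV
        dE = E[1] - E[0]
        chi = np.exp(-E / a) * np.sinh(np.sqrt(b * E))
        chi /= chi.sum() * dE                 # normalize to unit integral

        print(f"mean neutron energy = {(E * chi).sum() * dE:.3f} MeV")
        print(f"fraction above 1 MeV = {chi[E > 1.0].sum() * dE:.3f}")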

  5. Time-dependent anisotropic distributed source capability in transient 3-D transport code TORT-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    The transient 3-D discrete ordinates transport code TORT-TD has been extended to account for time-dependent anisotropic distributed external sources. The extension aims at the simulation of the pulsed neutron source in the YALINA-Thermal subcritical assembly. Since feedback effects are not relevant in this zero-power configuration, this offers a unique opportunity to validate the time-dependent neutron kinetics of TORT-TD with experimental data. The extensions made in TORT-TD to incorporate a time-dependent anisotropic external source are described. The steady state of the YALINA-Thermal assembly and its response to an artificial square-wave source pulse sequence have been analysed with TORT-TD using pin-wise homogenised cross sections in 18 prompt energy groups with P1 scattering order and 8 delayed neutron groups. The results demonstrate the applicability of TORT-TD to subcritical problems with a time-dependent external source. (authors)

  6. Imaging x-ray sources at a finite distance in coded-mask instruments

    International Nuclear Information System (INIS)

    Donnarumma, Immacolata; Pacciani, Luigi; Lapshov, Igor; Evangelista, Yuri

    2008-01-01

    We present a method for the correction of beam divergence in finite-distance source imaging through coded-mask instruments. We discuss the defocusing artifacts induced by the finite distance and show two different approaches to removing such spurious effects. We applied our method to one-dimensional (1D) coded-mask systems, although it is also applicable to two-dimensional systems. We provide a detailed mathematical description of the adopted method and of the systematics introduced in the reconstructed image (e.g., the fraction of source flux collected in the reconstructed peak counts). The accuracy of this method was tested by simulating pointlike and extended sources at a finite distance with the instrumental setup of the SuperAGILE experiment, the 1D coded-mask x-ray imager onboard the AGILE (Astro-rivelatore Gamma a Immagini Leggero) mission. We obtained reconstructed images of good quality and high source location accuracy. Finally we show the results obtained by applying this method to real data collected during the calibration campaign of SuperAGILE. Our method was demonstrated to be a powerful tool to investigate the imaging response of the experiment, particularly the absorption due to the materials intercepting the line of sight of the instrument and the conversion between detector pixel and sky direction
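
    For orientation, a toy far-field 1-D coded-mask reconstruction in Python; it deliberately omits the finite-distance divergence correction that is the subject of the paper, and the mask, counts, and source position are synthetic:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 128
        mask = rng.integers(0, 2, n)                 # random open/closed mask elements
        true_shift = 37                              # hypothetical source direction

        # ideal shadowgram: shifted copy of the mask, plus background and counting noise
        counts = rng.poisson(np.roll(mask, true_shift) * 50.0 + 5.0)

        decoder = 2.0 * mask - 1.0                   # balanced decoding array: open +1, closed -1
        sky = np.array([np.sum(counts * np.roll(decoder, s)) for s in range(n)])
        print("reconstructed shift:", int(np.argmax(sky)), " true shift:", true_shift)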

  7. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    Science.gov (United States)

    Yahampath, Pradeepa

    2017-12-01

    Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission however is optimal at all CSNRs, if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. Analog part uses linear encoding to transmit the quantization error which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.

  8. A plug-in to Eclipse for VHDL source codes: functionalities

    Science.gov (United States)

    Niton, B.; Poźniak, K. T.; Romaniuk, R. S.

    The paper presents an original application, written by the authors, which supports the writing and editing of source code in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGAs and/or DSP processors. An implementation is described, based on VEditor, a free-license program; the work presented in this paper thus supplements and extends that free software. The introduction briefly characterizes the tools available on the market for aiding the design of electronic systems in VHDL. Particular attention is paid to plug-ins for the Eclipse environment and the Emacs program. Detailed properties of the written plug-in are presented, such as the programming extension concept and the results of the formatter, refactorer, code hider, and other new additions to the VEditor program.

  9. DSC analysis of irradiated proteins from Crotalus durissus terrificus

    International Nuclear Information System (INIS)

    Oliveira, Karina Corleto de; Silva, Monica Nascimento da; Goncalves, Karina de Oliveira; Spencer, Patrick Jack; Nascimento, Nanci do

    2011-01-01

    Full text: Snake bites are a serious public health problem, especially in subtropical countries. In Brazil, the serum, the only effective treatment in case of snake bites, is produced in horses which, despite their large size, have a reduced lifespan due to the high toxicity of the antigen. It is known that ionizing radiation effects - direct and indirect - can modify the molecular structure, affecting the biological properties of proteins. Ionizing radiation has been employed to attenuate the toxicity of snake venoms, aiming to generate an improved antigen with low toxicity. Two proteins, purified from Crotalus durissus terrificus (Cdt) venom, were tested in this work: crotoxin and crotamine. Crotoxin, the main toxic compound of Cdt venom, is a heterodimeric protein composed of two subunits: crotapotin and phospholipase A2. Crotamine is a highly basic polypeptide (pI - 10.3), with myotoxic activity and a molecular weight of 4882 Da. It is composed of 42 amino acid residues and cross-linked by three disulfide bonds. This study aimed to investigate the effects of radiation on crotoxin and crotamine using Differential Scanning Calorimetry (DSC). After isolation of the toxins by chromatographic techniques, they were irradiated with 2.0 kGy from a 60Co source. The thermodynamic analysis, carried out in a METTLER TOLEDO DSC 822e calorimeter, showed that irradiation promoted changes of the calorimetric profile. These changes suggest that, although radiation induced structural modifications of the protein, denaturation was only partial, since transition states could still be detected, suggesting that some structural elements were still present after irradiation. Taken together, our data suggest that following irradiation, the molecules underwent conformational changes, and that the remaining structural elements displayed a lower enthalpy, clearly indicating that the previously described loss of toxicity of irradiated toxins can be mostly ascribed to structural changes

  10. DSC analysis of irradiated proteins from Crotalus durissus terrificus

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Karina Corleto de; Silva, Monica Nascimento da; Goncalves, Karina de Oliveira; Spencer, Patrick Jack; Nascimento, Nanci do [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    Full text: Snake bites are a serious public health problem, especially in subtropical countries. In Brazil, the serum, the only effective treatment in case of snake bites, is produced in horses which, despite their large size, have a reduced lifespan due to the high toxicity of the antigen. It is known that ionizing radiation effects - direct and indirect - can modify the molecular structure, affecting the biological properties of proteins. Ionizing radiation has been employed to attenuate the toxicity of snake venoms, aiming to generate an improved antigen with low toxicity. Two proteins, purified from Crotalus durissus terrificus (Cdt) venom, were tested in this work: crotoxin and crotamine. Crotoxin, the main toxic compound of Cdt venom, is a heterodimeric protein composed of two subunits: crotapotin and phospholipase A2. Crotamine is a highly basic polypeptide (pI - 10.3), with myotoxic activity and a molecular weight of 4882 Da. It is composed of 42 amino acid residues and cross-linked by three disulfide bonds. This study aimed to investigate the effects of radiation on crotoxin and crotamine using Differential Scanning Calorimetry (DSC). After isolation of the toxins by chromatographic techniques, they were irradiated with 2.0 kGy from a 60Co source. The thermodynamic analysis, carried out in a METTLER TOLEDO DSC 822e calorimeter, showed that irradiation promoted changes of the calorimetric profile. These changes suggest that, although radiation induced structural modifications of the protein, denaturation was only partial, since transition states could still be detected, suggesting that some structural elements were still present after irradiation. Taken together, our data suggest that following irradiation, the molecules underwent conformational changes, and that the remaining structural elements displayed a lower enthalpy, clearly indicating that the previously described loss of toxicity of irradiated toxins can be mostly ascribed to structural changes

  11. Beyond the Business Model: Incentives for Organizations to Publish Software Source Code

    Science.gov (United States)

    Lindman, Juho; Juutilainen, Juha-Pekka; Rossi, Matti

    The software stack opened under Open Source Software (OSS) licenses is growing rapidly. Commercial actors have released considerable amounts of previously proprietary source code. These actions raise the question of why companies choose a strategy based on giving away software assets. Research on the outbound OSS approach has tried to answer this question with the concept of the “OSS business model”. When studying the reasons for code release, we have observed that the business model concept is too generic to capture the many incentives organizations have. Conversely, in this paper we investigate empirically what the companies' incentives are by means of an exploratory case study of three organizations in different stages of their code release. Our results indicate that the companies aim to promote standardization, obtain development resources, gain cost savings, improve the quality of software, increase the trustworthiness of software, or steer OSS communities. We conclude that future research on outbound OSS could benefit from focusing on the heterogeneous incentives for code release rather than on revenue models.

  12. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    Science.gov (United States)

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  13. Survey of source code metrics for evaluating testability of object oriented systems

    OpenAIRE

    Shaheen , Muhammad Rabee; Du Bousquet , Lydie

    2010-01-01

    Software testing is costly in terms of time and funds. Testability is a software characteristic that aims at producing systems that are easy to test. Several metrics have been proposed to identify testability weaknesses. But it is sometimes difficult to be convinced that those metrics are really related to testability. This article is a critical survey of the source-code based metrics proposed in the literature for object-oriented software testability. It underlines the necessity to provide test...

  14. NEACRP comparison of source term codes for the radiation protection assessment of transportation packages

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Locke, H.F.; Avery, A.F.

    1994-01-01

    The results for Problems 5 and 6 of the NEACRP code comparison as submitted by six participating countries are presented in summary. These problems concentrate on the prediction of the neutron and gamma-ray sources arising in fuel after a specified irradiation, the fuel being uranium oxide for problem 5 and a mixture of uranium and plutonium oxides for problem 6. In both problems the predicted neutron sources are in good agreement for all participants. For gamma rays, however, there are differences, largely due to the omission of bremsstrahlung in some calculations

  15. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly; Pettersson, Gustav M.; Kostina, Victoria; Hassibi, Babak

    2017-01-01

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.
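
    A toy Python sketch of a 1:2 analog spiral mapping in the spirit of the Shannon-Kotel'nikov maps used above: a scalar sample is placed on one of two mirrored Archimedean spiral arms, transmitted over an AWGN channel, and decoded by a nearest-point grid search (the stretch factor, grid, and noise level are illustrative choices, not the paper's optimized parameters):

        import numpy as np

        stretch = 3.0                          # radians of arc per unit |s| (assumed)

        def encode(s):
            # two mirrored Archimedean arms encode the sign; radius grows with |s|
            phi = stretch * abs(s)
            arm = 1.0 if s >= 0 else -1.0
            return (phi / stretch) * np.array([np.cos(phi), arm * np.sin(phi)])

        grid = np.linspace(-4.0, 4.0, 2001)
        points = np.array([encode(g) for g in grid])

        def decode(y):
            # ML decoding under AWGN = nearest point on the spiral
            return grid[np.argmin(np.sum((points - y) ** 2, axis=1))]

        rng = np.random.default_rng(0)
        s = np.clip(rng.standard_normal(200), -4.0, 4.0)   # Gaussian source samples
        y = np.array([encode(v) for v in s]) + 0.05 * rng.standard_normal((200, 2))
        s_hat = np.array([decode(p) for p in y])
        print("MSE:", float(np.mean((s_hat - s) ** 2)))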

  16. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly

    2017-01-05

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.

  17. Source-term model for the SYVAC3-NSURE performance assessment code

    International Nuclear Information System (INIS)

    Rowat, J.H.; Rattan, D.S.; Dolinar, G.M.

    1996-11-01

    Radionuclide contaminants in wastes emplaced in disposal facilities will not remain in those facilities indefinitely. Engineered barriers will eventually degrade, allowing radioactivity to escape from the vault. The radionuclide release rate from a low-level radioactive waste (LLRW) disposal facility, the source term, is a key component in the performance assessment of the disposal system. This report describes the source-term model that has been implemented in Ver. 1.03 of the SYVAC3-NSURE (Systems Variability Analysis Code generation 3-Near Surface Repository) code. NSURE is a performance assessment code that evaluates the impact of near-surface disposal of LLRW through the groundwater pathway. The source-term model described here was developed for the Intrusion Resistant Underground Structure (IRUS) disposal facility, which is a vault that is to be located in the unsaturated overburden at AECL's Chalk River Laboratories. The processes included in the vault model are roof and waste package performance, and diffusion, advection and sorption of radionuclides in the vault backfill. The model presented here was developed for the IRUS vault; however, it is applicable to other near-surface disposal facilities. (author). 40 refs., 6 figs

  18. Non-isothermal dehydration kinetic study of aspartame hemihydrate using DSC, TGA and DSC-FTIR microspectroscopy

    Directory of Open Access Journals (Sweden)

    Wei-hsien Hsieh

    2018-05-01

    Full Text Available Three thermal analytical techniques, differential scanning calorimetry (DSC), thermal gravimetric analysis (TGA) using five heating rates, and DSC-Fourier Transform Infrared (DSC-FTIR) microspectroscopy using one heating rate, were used to determine the thermal characteristics and the dehydration process of aspartame (APM) hemihydrate in the solid state. The intramolecular cyclization process of APM anhydrate was also examined. One exothermic and four endothermic peaks were observed in the DSC thermogram of APM hemihydrate; the exothermic peak was due to the crystallization of some amorphous APM caused by the dehydration process from hemihydrate to anhydride. The four endothermic peaks corresponded to the evaporation of absorbed water, the dehydration of the hemihydrate, diketopiperazine (DKP) formation via intramolecular cyclization, and the melting of DKP, respectively. The weight loss measured in the TGA curve of APM hemihydrate was associated with these endothermic peaks in the DSC thermogram. According to the Flynn–Wall–Ozawa (FWO) model, the activation energy of the dehydration process within 100–150 °C was about 218 ± 11 kJ/mol as determined by the TGA technique. Both the dehydration and DKP formation processes for solid-state APM hemihydrate were clearly evidenced by the thermal-responsive changes in several specific FTIR bands using single-step DSC-FTIR microspectroscopy. Keywords: Aspartame (APM) hemihydrate, DSC/TGA, DSC-FTIR, Dehydration, Activation energy, DKP formation
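
    The FWO estimate reduces to a linear regression: at fixed conversion, ln β = const − 1.052·Ea/(R·T), so Ea follows from the slope of ln β versus 1/T. A Python sketch with made-up stand-ins for the TGA temperatures (not the paper's data):

        import numpy as np

        R = 8.314                                          # J/(mol K)
        beta = np.array([2.0, 5.0, 10.0, 15.0, 20.0])      # heating rates, K/min
        T = np.array([395.0, 403.0, 410.0, 414.0, 417.0])  # T at 50% conversion, K (assumed)

        slope, _ = np.polyfit(1.0 / T, np.log(beta), 1)    # ln(beta) versus 1/T
        Ea = -slope * R / 1.052                            # FWO (Doyle approximation)
        print(f"FWO activation energy ≈ {Ea / 1000:.0f} kJ/mol")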

  19. Application of the source term code package to obtain a specific source term for the Laguna Verde Nuclear Power Plant

    International Nuclear Information System (INIS)

    Souto, F.J.

    1991-06-01

    The main objective of the project was to use the Source Term Code Package (STCP) to obtain a specific source term for those accident sequences deemed dominant as a result of probabilistic safety analyses (PSA) for the Laguna Verde Nuclear Power Plant (CNLV). The following programme has been carried out to meet this objective: (a) implementation of the STCP, (b) acquisition of specific data for CNLV to execute the STCP, and (c) calculations of specific source terms for accident sequences at CNLV. The STCP has been implemented and validated on CDC 170/815 and CDC 180/860 main frames as well as on a MicroVAX 3800 system. In order to get a plant-specific source term, data on the CNLV including initial core inventory, burn-up, primary containment structures, and materials used for the calculations have been obtained. Because STCP does not explicitly model containment failure, dry well failure in the form of a catastrophic rupture has been assumed. One of the most significant sequences from the point of view of possible off-site risk is the loss of off-site power with failure of the diesel generators and simultaneous loss of high pressure core spray and reactor core isolation cooling systems. The probability for that event is approximately 4.5 × 10⁻⁶. This sequence has been analysed in detail and the release fractions of radioisotope groups are given in the full report. 18 refs, 4 figs, 3 tabs

  20. The European source term code ESTER - basic ideas and tools for coupling of ATHLET and ESTER

    International Nuclear Information System (INIS)

    Schmidt, F.; Schuch, A.; Hinkelmann, M.

    1993-04-01

    The French software house CISI and IKE of the University of Stuttgart developed, during 1990 and 1991 within the Shared Cost Action on Reactor Safety, the informatic structure of the European Source TERm Evaluation System (ESTER). Through this work, tools became available which allow code development and code application in the area of severe core accident research to be unified on a European basis. The behaviour of reactor cores is determined by thermal hydraulic conditions. Therefore, for the development of ESTER, it was important to investigate how to integrate thermal hydraulic code systems with ESTER applications. This report describes the basic ideas of ESTER and improvements of the ESTER tools in view of a possible coupling of the thermal hydraulic code system ATHLET and ESTER. Through the work performed during this project, the ESTER tools became the most modern informatic tools presently available in the area of severe accident research. A sample application is given which demonstrates the use of the new tools. (orig.)

  1. GRHydro: a new open-source general-relativistic magnetohydrodynamics code for the Einstein toolkit

    International Nuclear Information System (INIS)

    Mösta, Philipp; Haas, Roland; Ott, Christian D; Reisswig, Christian; Mundim, Bruno C; Faber, Joshua A; Noble, Scott C; Bode, Tanja; Löffler, Frank; Schnetter, Erik

    2014-01-01

    We present the new general-relativistic magnetohydrodynamics (GRMHD) capabilities of the Einstein toolkit, an open-source community-driven numerical relativity and computational relativistic astrophysics code. The GRMHD extension of the toolkit builds upon previous releases and implements the evolution of relativistic magnetized fluids in the ideal MHD limit in fully dynamical spacetimes using the same shock-capturing techniques previously applied to hydrodynamical evolution. In order to maintain the divergence-free character of the magnetic field, the code implements both constrained transport and hyperbolic divergence cleaning schemes. We present test results for a number of MHD tests in Minkowski and curved spacetimes. Minkowski tests include aligned and oblique planar shocks, cylindrical explosions, magnetic rotors, Alfvén waves and advected loops, as well as a set of tests designed to study the response of the divergence cleaning scheme to numerically generated monopoles. We study the code’s performance in curved spacetimes with spherical accretion onto a black hole on a fixed background spacetime and in fully dynamical spacetimes by evolutions of a magnetized polytropic neutron star and of the collapse of a magnetized stellar core. Our results agree well with exact solutions where these are available and we demonstrate convergence. All code and input files used to generate the results are available on http://einsteintoolkit.org. This makes our work fully reproducible and provides new users with an introduction to applications of the code. (paper)

  2. Sensitivity analysis and benchmarking of the BLT low-level waste source term code

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1993-07-01

    To evaluate the source term for low-level waste disposal, a comprehensive model had been developed and incorporated into a computer code called BLT (Breach-Leach-Transport). Since the release of the original version, many new features and improvements have been added to the Leach model of the code. This report consists of two studies based on the new version of the BLT code: (1) a series of verification/sensitivity tests; and (2) benchmarking of the BLT code using field data. Based on the results of the verification/sensitivity tests, the authors concluded that the new version represents a significant improvement and is capable of providing more realistic simulations of the leaching process. Benchmarking work was carried out to provide a reasonable level of confidence in the model predictions. In this study, the experimentally measured release curves for nitrate, technetium-99 and tritium from the saltstone lysimeters operated by Savannah River Laboratory were used. The model results are in general agreement with the experimental data, within acceptable limits of uncertainty

  3. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    Science.gov (United States)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.

  4. Chronos sickness: digital reality in Duncan Jones’s Source Code

    Directory of Open Access Journals (Sweden)

    Marcia Tiemy Morita Kawamoto

    2017-01-01

    Full Text Available (http://dx.doi.org/10.5007/2175-8026.2017v70n1p249) The advent of digital technologies has unquestionably affected cinema. The indexical relation and realistic effect with the photographed world, much praised by André Bazin and Roland Barthes, is just one of the affected aspects. This article discusses cinema in light of the new digital possibilities, reflecting on Steven Shaviro’s consideration of “how a nonindexical realism might be possible” (63) and how in fact a new kind of reality, a digital one, might emerge in the science fiction film Source Code (2011) by Duncan Jones.

  5. Domain-Specific Acceleration and Auto-Parallelization of Legacy Scientific Code in FORTRAN 77 using Source-to-Source Compilation

    OpenAIRE

    Vanderbauwhede, Wim; Davidson, Gavin

    2017-01-01

    Massively parallel accelerators such as GPGPUs, manycores and FPGAs represent a powerful and affordable tool for scientists who look to speed up simulations of complex systems. However, porting code to such devices requires a detailed understanding of heterogeneous programming tools and effective strategies for parallelization. In this paper we present a source to source compilation approach with whole-program analysis to automatically transform single-threaded FORTRAN 77 legacy code into Ope...

  6. The European source-term evaluation code ASTEC: status and applications, including CANDU plant applications

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Giordano, P.; Kissane, M.P.; Montanelli, T.; Schwinges, B.; Ganju, S.; Dickson, L.

    2004-01-01

    Research on light-water reactor severe accidents (SA) is still required in a limited number of areas in order to confirm accident-management plans. Thus, 49 European organizations have linked their SA research in a durable way through SARNET (Severe Accident Research and management NETwork), part of the European 6th Framework Programme. One goal of SARNET is to consolidate the integral code ASTEC (Accident Source Term Evaluation Code, developed by IRSN and GRS) as the European reference tool for safety studies; SARNET efforts include extending the application scope to reactor types other than PWR (including VVER) such as BWR and CANDU. ASTEC is used in IRSN's Probabilistic Safety Analysis level 2 of 900 MWe French PWRs. An earlier version of ASTEC's SOPHAEROS module, including improvements by AECL, is being validated as the Canadian Industry Standard Toolset code for FP-transport analysis in the CANDU Heat Transport System. Work with ASTEC has also been performed by Bhabha Atomic Research Centre, Mumbai, on IPHWR containment thermal hydraulics. (author)

  7. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
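
    A minimal Python sketch of the core idea behind the "first order release with transport" option: the release rate is proportional to the remaining inventory in the primary contamination, with the user-specified leach rate as the proportionality constant (the leach rate, half-life, and inventory below are illustrative, and the code's transport coupling is not modelled):

        import numpy as np

        leach_rate = 0.02                   # 1/yr, the user-specified constant (assumed)
        half_life = 30.0                    # yr, illustrative nuclide half-life
        decay = np.log(2.0) / half_life
        I0 = 1.0e9                          # initial inventory, Bq (assumed)

        t = np.linspace(0.0, 100.0, 101)    # yr
        inventory = I0 * np.exp(-(leach_rate + decay) * t)
        release_rate = leach_rate * inventory    # Bq/yr leaving the primary contamination

        print(f"release rate at t = 10 yr: {release_rate[10]:.3e} Bq/yr")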

  8. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
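
    The rate-distortion function central to this analogy can be computed numerically; here is a Python sketch using the standard Blahut-Arimoto algorithm for a binary symmetric source under Hamming distortion, where the closed form R(D) = 1 − h(D) provides a check (the statistical-mechanical mapping itself is not implemented here):

        import numpy as np

        def blahut_arimoto(p_x, d, beta, iters=200):
            # d[i, j]: distortion between source letter i and reproduction letter j
            q = np.full(d.shape[1], 1.0 / d.shape[1])  # reproduction distribution
            A = np.exp(-beta * d)
            for _ in range(iters):
                c = A * q                              # p(j|i) proportional to q_j exp(-beta d_ij)
                c /= c.sum(axis=1, keepdims=True)
                q = p_x @ c                            # q_j = sum_i p_x(i) p(j|i)
            c = A * q
            c /= c.sum(axis=1, keepdims=True)          # final conditional for the converged q
            D = float(np.sum(p_x[:, None] * c * d))
            R = float(np.sum(p_x[:, None] * c * np.log2(c / q)))
            return D, R

        p_x = np.array([0.5, 0.5])                     # binary symmetric source
        d = 1.0 - np.eye(2)                            # Hamming distortion
        for beta in (1.0, 2.0, 4.0):
            D, R = blahut_arimoto(p_x, d, beta)
            h = -D * np.log2(D) - (1 - D) * np.log2(1 - D)
            print(f"beta={beta:.0f}: D={D:.3f}, R={R:.3f} bits, 1-h(D)={1 - h:.3f}")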

  9. Coded aperture detector for high precision gamma-ray burst source locations

    International Nuclear Information System (INIS)

    Helmken, H.; Gorenstein, P.

    1977-01-01

    Coded aperture collimators in conjunction with position-sensitive detectors are very useful in the study of transient phenomena because they combine broad field of view, high sensitivity, and an ability for precise source locations. Since the preceding conference, a series of computer simulations of various detector designs has been carried out with the aid of a CDC 6400. Particular emphasis was placed on the development of a unit consisting of a one-dimensional random or periodic collimator in conjunction with a two-dimensional position-sensitive xenon proportional counter. A configuration involving four of these units has been incorporated into the preliminary design study of the Transient Explorer (ATREX) satellite and is applicable to any SAS or HEAO type satellite mission. Results of this study, including detector response, fields of view, and source location precision, will be presented.

  10. A Research Agenda on Data Supply Chains (DSC)

    OpenAIRE

    Spanaki, K; Adams, R; Mulligan, C; Lupu, E

    2016-01-01

    Competition among organizations supports initiatives and collaborative use of data while creating value based on the strategy and best performance of each data supply chain. Supporting this direction, and building on the theoretical background of the supply chain, we propose the Data Supply Chain (DSC) as a novel concept to aid investigations for data-driven collaboration impacting organizational performance. In this study we initially propose a definition for the DSC paying particular attent...

  11. PRIMUS: a computer code for the preparation of radionuclide ingrowth matrices from user-specified sources

    International Nuclear Information System (INIS)

    Hermann, O.W.; Baes, C.F. III; Miller, C.W.; Begovich, C.L.; Sjoreen, A.L.

    1984-10-01

    The computer program, PRIMUS, reads a library of radionuclide branching fractions and half-lives and constructs a decay-chain data library and a problem-specific decay-chain data file. PRIMUS reads the decay data compiled for 496 nuclides from the Evaluated Nuclear Structure Data File (ENSDF). The ease of adding radionuclides to the input library allows the CRRIS system to further expand its comprehensive data base. The decay-chain library produced is input to the ANEMOS code. Also, PRIMUS produces a data set reduced to only the decay chains required in a particular problem, for input to the SUMIT, TERRA, MLSOIL, and ANDROS codes. Air concentrations and deposition rates are computed by these codes from the PRIMUS decay-chain data file. Source term data may be entered directly to PRIMUS to be read by MLSOIL, TERRA, and ANDROS. The decay-chain data prepared by PRIMUS are needed for a matrix-operator method that computes time-dependent decay products either from an initial concentration or from a constant input source. This document describes the input requirements and the output obtained. Also, sections are included on methods, applications, subroutines, and sample cases. A short appendix indicates a method of utilizing PRIMUS and the associated decay subroutines from TERRA or ANDROS for applications to other decay problems. 18 references.
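
    The matrix-operator method mentioned above can be illustrated with a minimal Bateman-equation sketch (a generic illustration, not PRIMUS source code; the two-member chain and its constants are hypothetical):

```python
import numpy as np
from scipy.linalg import expm

# Decay constants (1/s) for a hypothetical two-member chain A -> B -> stable
lam_a, lam_b = 1.0e-6, 5.0e-7

# Matrix form of the Bateman equations: dN/dt = A @ N
A = np.array([[-lam_a,    0.0],
              [ lam_a, -lam_b]])

N0 = np.array([1.0e20, 0.0])   # initial atom inventory of A and B
t = 3.15e7                     # one year in seconds

# Matrix-operator solution: N(t) = expm(A t) @ N0
N_t = expm(A * t) @ N0
print(N_t)
```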

  12. RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations

    Science.gov (United States)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy

    RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node and have improved performance and scalability, enhanced accuracy, and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.

  13. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    Science.gov (United States)

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
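
    A rough sketch of the underlying idea (transform the operator with an orthonormal transform, threshold small coefficients to obtain sparsity, then apply the sparse factor) is given below; this is a simplified stand-in for the paper's matrix source coding, and the smooth space-varying kernel is made up:

```python
import numpy as np
from scipy.fft import dct
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)

# Hypothetical dense space-varying "convolution" matrix with smooth,
# slowly widening Gaussian kernels (stand-in for a stray light operator).
n = 256
x_grid = np.arange(n)
A = np.exp(-((x_grid[:, None] - x_grid[None, :]) ** 2)
           / (2.0 * (3.0 + 0.02 * x_grid[:, None]) ** 2))

# Lossy coding in spirit: DCT each row, zero out small coefficients,
# and keep the result as a sparse matrix.
T = dct(A, axis=1, norm='ortho')
T[np.abs(T) < 1e-3 * np.abs(T).max()] = 0.0
T_sparse = csr_matrix(T)
print(f"kept {T_sparse.nnz / A.size:.1%} of the entries")

# Since the DCT is orthonormal, A @ x == (A @ C.T) @ (C @ x),
# so the dense product is approximated by a sparse product.
x = rng.standard_normal(n)
y_approx = T_sparse @ dct(x, norm='ortho')
print(np.linalg.norm(A @ x - y_approx) / np.linalg.norm(A @ x))
```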

  14. Code of practice for the control and safe handling of radioactive sources used for therapeutic purposes (1988)

    International Nuclear Information System (INIS)

    1988-01-01

    This Code is intended as a guide to safe practices in the use of sealed and unsealed radioactive sources and in the management of patients being treated with them. It covers the procedures for the handling, preparation and use of radioactive sources, precautions to be taken for patients undergoing treatment, storage and transport of radioactive sources within a hospital or clinic, and routine testing of sealed sources.

  15. A Source Term Calculation for the APR1400 NSSS Auxiliary System Components Using the Modified SHIELD Code

    International Nuclear Information System (INIS)

    Park, Hong Sik; Kim, Min; Park, Seong Chan; Seo, Jong Tae; Kim, Eun Kee

    2005-01-01

    The SHIELD code has been used to calculate the source terms of the NSSS Auxiliary System (comprising the CVCS, SIS, and SCS) components of the OPR1000. Because the code was developed for the SYSTEM80 design, and the APR1400 NSSS Auxiliary System design differs considerably from that of SYSTEM80 or OPR1000, the SHIELD code cannot be used directly for APR1400 radiation design; hand calculations based on the SHIELD results are needed for the changed portions of the design. In this study, the SHIELD code is modified to incorporate the APR1400 design changes, and the source term calculation is performed for the APR1400 NSSS Auxiliary System components.

  16. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

    Even though there are various source code plagiarism detection approaches, only a few works focus on low-level representation for deducing similarity. Most of them consider only the lexical token sequence extracted from source code. In our view, low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself: it considers only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach that relies on low-level representation. As a case study, we focus on .NET programming languages with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient than the standard lexical-token approach.
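
    Classic local alignment on instruction sequences can be sketched as follows (standard Smith-Waterman scoring; the adaptive weighting of the paper is not reproduced, and the CIL opcode streams are hypothetical):

```python
# Minimal local-alignment sketch on low-level instruction sequences.
def local_alignment(seq_a, seq_b, match=2, mismatch=-1, gap=-1):
    rows, cols = len(seq_a) + 1, len(seq_b) + 1
    score = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if seq_a[i - 1] == seq_b[j - 1] else mismatch
            score[i][j] = max(0,
                              score[i - 1][j - 1] + s,  # (mis)match
                              score[i - 1][j] + gap,    # gap in seq_b
                              score[i][j - 1] + gap)    # gap in seq_a
            best = max(best, score[i][j])
    return best

# Hypothetical CIL opcode streams from two submissions
a = ["ldarg.0", "ldarg.1", "add", "stloc.0", "ldloc.0", "ret"]
b = ["ldarg.1", "ldarg.0", "add", "stloc.0", "ldloc.0", "ret"]
print(local_alignment(a, b))
```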

  17. Living Up to the Code's Exhortations? Social Workers' Political Knowledge Sources, Expectations, and Behaviors.

    Science.gov (United States)

    Felderhoff, Brandi Jean; Hoefer, Richard; Watson, Larry Dan

    2016-01-01

    The National Association of Social Workers' (NASW's) Code of Ethics urges social workers to engage in political action. However, little recent research has been conducted to examine whether social workers support this admonition and the extent to which they actually engage in politics. The authors gathered data from a survey of social workers in Austin, Texas, to address three questions. First, because keeping informed about government and political news is an important basis for action, the authors asked what sources of knowledge social workers use. Second, they asked what the respondents believe are appropriate political behaviors for other social workers and NASW. Third, they asked for self-reports regarding respondents' own political behaviors. Results indicate that social workers use the Internet and traditional media services to stay informed; expect other social workers and NASW to be active; and are, overall, more active than the general public in many types of political activities. The comparisons made between expectations for others and their own behaviors are interesting in their complex outcomes. Social workers should strive for higher levels of adherence to the code's urgings on political activity. Implications for future work are discussed.

  18. Effect of milling on DSC thermogram of excipient adipic acid.

    Science.gov (United States)

    Ng, Wai Kiong; Kwek, Jin Wang; Yuen, Aaron; Tan, Chin Lee; Tan, Reginald

    2010-03-01

    The purpose of this research was to investigate why and how mechanical milling results in an unexpected shift in the differential scanning calorimetry (DSC) measured fusion enthalpy (ΔfusH) and melting point (Tm) of adipic acid, a pharmaceutical excipient. Hyper differential scanning calorimetry (hyper-DSC) was used to characterize adipic acid before and after ball-milling. An experimental study was conducted to evaluate previous postulations such as electrostatic charging using the Faraday cage method, crystallinity loss using powder X-ray diffraction (PXRD), thermal annealing using DSC, and impurities removal using thermal gravimetric analysis (TGA) and Karl Fischer titration. DSC thermograms showed that after milling, the values of ΔfusH and Tm were increased by approximately 9% and 5 K, respectively. Previous suggestions of increased electrostatic attraction, change in particle size distribution, and thermal annealing during measurements did not explain the differences. Instead, theoretical analysis and experimental findings suggested that the residual solvent (water) plays a key role. Water entrapped as inclusions inside adipic acid during solution crystallization was partially evaporated by localized heating at the cleaved surfaces during milling. The correlation between the removal of water and the measured melting properties was shown via drying and crystallization experiments. These findings show that milling can reduce residual solvent content and cause a shift in DSC results.

  19. RIES - Rijnland Internet Election System: A Cursory Study of Published Source Code

    Science.gov (United States)

    Gonggrijp, Rop; Hengeveld, Willem-Jan; Hotting, Eelco; Schmidt, Sebastian; Weidemann, Frederik

    The Rijnland Internet Election System (RIES) is a system designed for voting in public elections over the internet. A rather cursory scan of the RIES source code showed a significant lack of security-awareness among the programmers which, among other things, appears to have left RIES vulnerable to near-trivial attacks. If it had not been for independent studies finding problems, RIES would have been used in the 2008 Water Board elections, possibly handling a million votes or more. While RIES was more extensively studied to find cryptographic shortcomings, our work shows that more down-to-earth secure design practices can be at least as important, and that these aspects need to be examined much sooner than right before an election.

  20. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    Science.gov (United States)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects.

  1. CodeRAnts: A recommendation method based on collaborative searching and ant colonies, applied to reusing of open source code

    Directory of Open Access Journals (Sweden)

    Isaac Caicedo-Castro

    2014-01-01

    This paper presents CodeRAnts, a new recommendation method based on a collaborative searching technique and inspired by the ant colony metaphor. The method aims to fill a gap in the current state of the art regarding recommender systems for software reuse, where prior works present two problems: first, recommender systems based on these works cannot learn from the collaboration of programmers; second, assessments of these systems report low precision and recall, and in some systems these metrics have not been evaluated at all. The work presented in this paper contributes a recommendation method that addresses these problems.
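
    A toy pheromone-based ranking loop conveys the ant colony metaphor (a generic illustration under stated assumptions; CodeRAnts' actual update rules are not described in this record, and all names and constants below are hypothetical):

```python
# Toy ant-colony-style recommendation: programmers reusing a snippet act
# like ants depositing pheromone on a query-snippet trail.
import collections

pheromone = collections.defaultdict(lambda: 1.0)  # (query, snippet) trails
RHO, DEPOSIT = 0.1, 1.0                           # evaporation and reward

def select(query, snippets):
    """Rank candidate snippets by accumulated pheromone for this query."""
    return sorted(snippets, key=lambda s: pheromone[(query, s)], reverse=True)

def reinforce(query, chosen, snippets):
    """Evaporate all trails for the query, then reward the reused snippet."""
    for s in snippets:
        pheromone[(query, s)] *= (1.0 - RHO)
    pheromone[(query, chosen)] += DEPOSIT

snippets = ["quicksort.c", "mergesort.c", "bubble.c"]
reinforce("sort array", "quicksort.c", snippets)
print(select("sort array", snippets))
```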

  2. Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    Science.gov (United States)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by neutron activation analysis (NAA) of 197Au foils. Also, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. The simulation, together with the experimental work, provides new local experience that establishes confidence in the code for further research. In the theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The fast- and thermal-neutron fluence rates obtained by the NAA method and by the MCNP code are compared.

  3. Calculation Of Fuel Burnup And Radionuclide Inventory In The Syrian Miniature Neutron Source Reactor Using The GETERA Code

    International Nuclear Information System (INIS)

    Khattab, K.; Dawahra, S.

    2011-01-01

    Calculations of the fuel burnup and radionuclide inventory in the Syrian Miniature Neutron Source Reactor (MNSR) after 10 years (the expected core life) of reactor operation are presented in this paper using the GETERA code. The code is used to calculate the fuel group constants and the infinite multiplication factor versus the reactor operating time for 10, 20, and 30 kW operating power levels. The amounts of uranium burnt up and plutonium produced in the reactor core, the concentrations of the most important fission-product and actinide radionuclides accumulated in the reactor core, and the total radioactivity of the reactor core were calculated using the GETERA code as well. It is found that the GETERA code is better suited than the WIMSD4 code for fuel burnup calculations in the MNSR since it is newer, has a larger isotope library, and is more accurate. (author)

  4. A proposed metamodel for the implementation of object oriented software through the automatic generation of source code

    Directory of Open Access Journals (Sweden)

    CARVALHO, J. S. C.

    2008-12-01

    During software development, one of the most visible risks and perhaps the biggest implementation obstacle is time management. All delivery deadlines for software versions must be met, but this is not always possible, sometimes due to delays in coding. This paper presents a metamodel for software implementation, which gives rise to a development tool for automatic generation of source code, in order to make any development pattern transparent to the programmer, significantly reducing the time spent coding the artifacts that make up the software.
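
    The core mechanism, rendering a metamodel instance to source code through a template, can be sketched in a few lines (illustrative only; this is not the paper's metamodel, and the entity definition is hypothetical):

```python
# Minimal template-based code generation from a metamodel entry.
from string import Template

CLASS_TEMPLATE = Template(
    "class $name:\n"
    "    def __init__(self, $args):\n"
    "$assignments"
)

def generate_class(name, fields):
    """Render an entity (name plus attribute list) into a class skeleton."""
    args = ", ".join(fields)
    assignments = "".join(f"        self.{f} = {f}\n" for f in fields)
    return CLASS_TEMPLATE.substitute(name=name, args=args,
                                     assignments=assignments)

# A hypothetical metamodel instance: entity "Customer" with two attributes
print(generate_class("Customer", ["name", "email"]))
```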

  5. Uncertainty analysis methods for quantification of source terms using a large computer code

    International Nuclear Information System (INIS)

    Han, Seok Jung

    1997-02-01

    Quantification of uncertainties in the source term estimations by a large computer code, such as MELCOR and MAAP, is an essential process of current probabilistic safety assessments (PSAs). The main objectives of the present study are (1) to investigate the applicability of a combined procedure of the response surface method (RSM), based on input determined from a statistical design, and the Latin hypercube sampling (LHS) technique for the uncertainty analysis of CsI release fractions under a hypothetical severe accident sequence of a station blackout at the Young-Gwang nuclear power plant, using the MAAP3.0B code as a benchmark problem; and (2) to propose a new measure of uncertainty importance based on distributional sensitivity analysis. On the basis of the results obtained in the present work, the RSM is recommended as a principal tool for an overall uncertainty analysis in source term quantification, while the LHS is used in the calculations of standardized regression coefficients (SRC) and standardized rank regression coefficients (SRRC) to determine the subset of the most important input parameters in the final screening step and to check the cumulative distribution functions (cdfs) obtained by the RSM. Verification of the response surface model for sufficient accuracy is a prerequisite for the reliability of the final results obtained by the combined procedure proposed in the present work. In the present study a new measure has been developed that utilizes the metric distance between cumulative distribution functions (cdfs). The measure has been evaluated for three different cases of distributions in order to assess its characteristics: in the first two cases the distributions are known analytically, while in the third the distribution is unknown. The first case uses symmetric analytical distributions; the second consists of two asymmetric distributions with nonzero skewness.
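
    The RSM/LHS combination can be illustrated with a minimal sketch (a generic demonstration, not the MAAP study itself; the test function and sample counts are made up):

```python
# Latin hypercube sampling plus a quadratic response surface fit.
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n_samples, n_dims):
    """One stratified sample per equal-probability slice, per dimension."""
    u = (rng.random((n_samples, n_dims))
         + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        rng.shuffle(u[:, d])  # decouple the dimensions
    return u  # uniform on [0, 1); map through inverse CDFs as needed

X = latin_hypercube(50, 2)
# Hypothetical "code output" with a little noise
y = 1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(50)

# Quadratic response surface via least squares
design = np.column_stack([np.ones(50), X, X ** 2, X[:, 0] * X[:, 1]])
coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
print(coeffs)
```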

  7. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    Science.gov (United States)

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-03-08

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.
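
    For context, the radial dose function gL(r) whose discrepancies are discussed above is defined by the TG-43 formalism; a minimal sketch of its evaluation from transverse-axis dose rates follows (the dose-rate values and source length are made up):

```python
import numpy as np

# TG-43 line-source geometry function and radial dose function g_L(r)
# on the transverse axis (standard formalism; inputs are hypothetical).
L, r0 = 0.3, 1.0  # active length and reference distance, cm

def G_L(r):
    beta = 2.0 * np.arctan(L / (2.0 * r))  # angle subtended by the source
    return beta / (L * r)

r = np.array([0.5, 1.0, 2.0, 5.0])             # cm
dose_rate = np.array([4.1, 1.0, 0.22, 0.025])  # hypothetical MC tallies

# g_L(r) = [D(r)/G_L(r)] / [D(r0)/G_L(r0)]
g_L = (dose_rate / G_L(r)) / (dose_rate[1] / G_L(r0))
print(g_L)
```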

  8. Source coherence impairments in a direct detection direct sequence optical code-division multiple-access system.

    Science.gov (United States)

    Fsaifes, Ihsan; Lepers, Catherine; Lourdiane, Mounia; Gallion, Philippe; Beugin, Vincent; Guignard, Philippe

    2007-02-01

    We demonstrate that direct sequence optical code-division multiple-access (DS-OCDMA) encoders and decoders using sampled fiber Bragg gratings (S-FBGs) behave as multipath interferometers. In that case, chip pulses of the prime sequence codes, generated by spreading coherent data pulses in time, can result from multiple reflections in the interferometers that can superimpose within a chip time duration. We show that the autocorrelation function has to be considered as the sum of complex amplitudes of the combined chips, as the laser source coherence time is much greater than the integration time of the photodetector. To reduce the sensitivity of the DS-OCDMA system to the coherence time of the laser source, we analyze the use of sparse and nonperiodic quadratic congruence and extended quadratic congruence codes.
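
    The coherence effect described above amounts to summing chip amplitudes before, rather than after, taking the modulus; a minimal numerical sketch (made-up amplitudes and phases, not the paper's system model):

```python
import numpy as np

# Coherent vs incoherent superposition of chips at the decoder output.
rng = np.random.default_rng(2)
amps = np.ones(5)                          # five chips superimposing
phases = rng.uniform(0, 2 * np.pi, 5)      # random optical phases

incoherent = np.sum(np.abs(amps) ** 2)                       # T_coh << T_int
coherent = np.abs(np.sum(amps * np.exp(1j * phases))) ** 2   # T_coh >> T_int
print(incoherent, coherent)
```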

  10. Gaze strategies can reveal the impact of source code features on the cognitive load of novice programmers

    DEFF Research Database (Denmark)

    Wulff-Jensen, Andreas; Ruder, Kevin Vignola; Triantafyllou, Evangelia

    2018-01-01

    As shown by several studies, programmers' readability of source code is influenced by its structural and textual features. In order to assess the importance of these features, we conducted an eye-tracking experiment with programming students. To assess the readability and comprehensibility of...

  11. Use of WIMS-E lattice code for prediction of the transuranic source term for spent fuel dose estimation

    International Nuclear Information System (INIS)

    Schwinkendorf, K.N.

    1996-01-01

    A recent source term analysis has shown a discrepancy between ORIGEN2 transuranic isotopic production estimates and those produced with the WIMS-E lattice physics code. Excellent agreement between relevant experimental measurements and WIMS-E was shown, thus exposing an error in the cross section library used by ORIGEN2.

  12. A Novel Code System for Revealing Sources of Students' Difficulties with Stoichiometry

    Science.gov (United States)

    Gulacar, Ozcan; Overton, Tina L.; Bowman, Charles R.; Fynewever, Herb

    2013-01-01

    A coding scheme is presented and used to evaluate solutions of seventeen students working on twenty five stoichiometry problems in a think-aloud protocol. The stoichiometry problems are evaluated as a series of sub-problems (e.g., empirical formulas, mass percent, or balancing chemical equations), and the coding scheme was used to categorize each…

  13. VULCAN: An Open-source, Validated Chemical Kinetics Python Code for Exoplanetary Atmospheres

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, Shang-Min; Grosheintz, Luc; Kitzmann, Daniel; Heng, Kevin [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Lyons, James R. [Arizona State University, School of Earth and Space Exploration, Bateman Physical Sciences, Tempe, AZ 85287-1404 (United States); Rimmer, Paul B., E-mail: shang-min.tsai@space.unibe.ch, E-mail: kevin.heng@csh.unibe.ch, E-mail: jimlyons@asu.edu [University of St. Andrews, School of Physics and Astronomy, St. Andrews, KY16 9SS (United Kingdom)

    2017-02-01

    We present an open-source and validated chemical kinetics code for studying hot exoplanetary atmospheres, which we name VULCAN. It is constructed for gaseous chemistry from 500 to 2500 K, using a reduced C–H–O chemical network with about 300 reactions. It uses eddy diffusion to mimic atmospheric dynamics and excludes photochemistry. We have provided a full description of the rate coefficients and thermodynamic data used. We validate VULCAN by reproducing chemical equilibrium and by comparing its output versus the disequilibrium-chemistry calculations of Moses et al. and Rimmer and Helling. It reproduces the models of HD 189733b and HD 209458b by Moses et al., which employ a network with nearly 1600 reactions. We also use VULCAN to examine the theoretical trends produced when the temperature–pressure profile and carbon-to-oxygen ratio are varied. Assisted by a sensitivity test designed to identify the key reactions responsible for producing a specific molecule, we revisit the quenching approximation and find that it is accurate for methane but breaks down for acetylene, because the disequilibrium abundance of acetylene is not directly determined by transport-induced quenching, but is rather indirectly controlled by the disequilibrium abundance of methane. Therefore we suggest that the quenching approximation should be used with caution and must always be checked against a chemical kinetics calculation. A one-dimensional model atmosphere with 100 layers, computed using VULCAN, typically takes several minutes to complete. VULCAN is part of the Exoclimes Simulation Platform (ESP; exoclime.net) and publicly available at https://github.com/exoclime/VULCAN.
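
    The quenching approximation revisited above is essentially a timescale comparison; a minimal sketch (with hypothetical order-of-magnitude numbers, not VULCAN code):

```python
# Transport-induced quenching test: a species is quenched near the level
# where the mixing timescale equals its chemical relaxation timescale.
K_zz = 1.0e9        # eddy diffusion coefficient, cm^2 s^-1 (hypothetical)
H = 5.0e7           # atmospheric scale height, cm (hypothetical)
t_mix = H ** 2 / K_zz

t_chem_ch4 = 1.0e5  # hypothetical CH4 chemical timescale, s

if t_chem_ch4 > t_mix:
    print("CH4 quenched: abundance frozen at the deeper-level value")
else:
    print("CH4 remains in local chemical equilibrium")
print(f"t_mix = {t_mix:.2e} s")
```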

  14. Code of Conduct on the Safety and Security of Radioactive Sources and the Supplementary Guidance on the Import and Export of Radioactive Sources

    International Nuclear Information System (INIS)

    2005-01-01

    In operative paragraph 4 of its resolution GC(47)/RES/7.B, the General Conference, having welcomed the approval by the Board of Governors of the revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources (GC(47)/9), and while recognizing that the Code is not a legally binding instrument, urged each State to write to the Director General that it fully supports and endorses the IAEA's efforts to enhance the safety and security of radioactive sources and is working toward following the guidance contained in the IAEA Code of Conduct. In operative paragraph 5, the Director General was requested to compile, maintain and publish a list of States that have made such a political commitment. The General Conference, in operative paragraph 6, recognized that this procedure 'is an exceptional one, having no legal force and only intended for information, and therefore does not constitute a precedent applicable to other Codes of Conduct of the Agency or of other bodies belonging to the United Nations system'. In operative paragraph 7 of resolution GC(48)/RES/10.D, the General Conference welcomed the fact that more than 60 States had made political commitments with respect to the Code in line with resolution GC(47)/RES/7.B and encouraged other States to do so. In operative paragraph 8 of resolution GC(48)/RES/10.D, the General Conference further welcomed the approval by the Board of Governors of the Supplementary Guidance on the Import and Export of Radioactive Sources (GC(48)/13), endorsed this Guidance while recognizing that it is not legally binding, noted that more than 30 countries had made clear their intention to work towards effective import and export controls by 31 December 2005, and encouraged States to act in accordance with the Guidance on a harmonized basis and to notify the Director General of their intention to do so as supplementary information to the Code of Conduct, recalling operative paragraph 6 of resolution GC(47)/RES/7.B.

  15. TOPEM DSC study of glass transition region of polyurethane cationomers

    International Nuclear Information System (INIS)

    Pielichowska, Kinga; Król, Piotr; Król, Bożena; Pagacz, Joanna

    2012-01-01

    Highlights: ► The TOPEM DSC method was employed to investigate the glass transition (Tg) region of fluorinated polyurethane cationomers. ► Introduction of fluorine compounds significantly changes the thermal behaviour of cationomers in the Tg region of the hard segments. ► Introduction of a fluorine compound leads to changes of the slope in the activation diagram of the glass transition. - Abstract: In this paper the TOPEM DSC method was employed to investigate the glass transition region of fluorinated polyurethane cationomers. The fluorinated polyurethane cationomers were synthesised in the reaction of MDI with poly(ethylene glycol) (600) and butane-1,4-diol or N-methyl- or N-butyldiethanolamine and 2,2,3,3-tetrafluoro-1,4-butanediol. Better rigidity was found for the generally amorphous cationomer coats. It was found that introduction of a fluorine compound changes the thermal behaviour of the polyurethane cationomers and leads to changes in the slope of the activation diagram profiles of the glass transition, in comparison to a polyurethane cationomer without the fluorine compound. Application of TOPEM DSC makes it possible to obtain more information concerning the frequency dependence of the glass transition region and the thermodynamic stability of polyurethane structures.

  16. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    International Nuclear Information System (INIS)

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C
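
    The straight-line Gaussian plume model with 16-sector output described above can be sketched for the ground-level, sector-averaged case (a generic textbook form, not ANEMOS source code; all parameter values are hypothetical):

```python
import numpy as np

# Long-term sector-averaged X/Q for a 22.5-degree (16-sector) wind rose.
def chi_over_q(x, u, sigma_z, h):
    """X/Q (s m^-3) at downwind distance x (m), wind speed u (m/s),
    vertical dispersion sigma_z (m), and release height h (m)."""
    sector_width = 2.0 * np.pi * x / 16.0          # arc length of one sector
    return (np.sqrt(2.0 / np.pi)
            * np.exp(-h ** 2 / (2.0 * sigma_z ** 2))
            / (u * sigma_z * sector_width))

print(chi_over_q(x=1000.0, u=3.0, sigma_z=50.0, h=30.0))
```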

  18. Analysis of source term aspects in the experiment Phebus FPT1 with the MELCOR and CFX codes

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Fuertes, F. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)]. E-mail: francisco.martinfuertes@upm.es; Barbero, R. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Martin-Valdepenas, J.M. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Jimenez, M.A. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)

    2007-03-15

    Several aspects related to the source term in the Phebus FPT1 experiment have been analyzed with the help of the MELCOR 1.8.5 and CFX 5.7 codes. Integral aspects covering circuit thermal hydraulics, fission product and structural material release, and vapour and aerosol retention in the circuit and containment were studied with MELCOR, and the strong and weak points after comparison to experimental results are stated. Then, sensitivity calculations dealing with chemical speciation upon release, vertical line aerosol deposition and steam generator aerosol deposition were performed. Finally, detailed calculations concerning aerosol deposition in the steam generator tube are presented. They were obtained by means of an in-house code application, named COCOA, as well as with the CFX computational fluid dynamics code, in which several models for aerosol deposition were implemented and tested, while the models themselves are discussed.

  19. Open-source tool for automatic import of coded surveying data to multiple vector layers in GIS environment

    Directory of Open Access Journals (Sweden)

    Eva Stopková

    2016-12-01

    This paper deals with a tool that enables import of coded data in a single text file to more than one vector layer (including attribute tables), together with automatic drawing of line and polygon objects and with optional conversion to CAD. The Python script v.in.survey is available as an add-on for the open-source software GRASS GIS (GRASS Development Team). The paper describes a case study based on surveying at the archaeological mission at Tell-el Retaba (Egypt). Advantages of the tool (e.g., significant optimization of surveying work) and its limits (demands on keeping conventions for coding the points' names) are discussed here as well. Possibilities of future development are suggested (e.g., generalization of the points' name coding or more complex attribute table creation).
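
    The core idea, splitting coded survey points into per-layer groups by a name prefix, can be sketched as follows (an illustration of the concept only, not v.in.survey's actual implementation or coding convention):

```python
# Split coded survey points into per-layer point lists.
import collections

raw = """\
wall.1 100.10 200.25 15.1
wall.2 101.30 200.40 15.2
oven.1 105.00 202.00 15.0
"""

layers = collections.defaultdict(list)
for line in raw.splitlines():
    name, x, y, z = line.split()
    layer = name.split(".")[0]        # code prefix names the target layer
    layers[layer].append((float(x), float(y), float(z)))

for layer, points in layers.items():
    print(layer, points)
```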

  20. BLT [Breach, Leach, and Transport]: A source term computer code for low-level waste shallow land burial

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1990-01-01

    This paper discusses the development of a source term model for low-level waste shallow land burial facilities and separates the problem into four individual compartments. These are water flow, corrosion and subsequent breaching of containers, leaching of the waste forms, and solute transport. For the first and the last compartments, we adopted the existing codes, FEMWATER and FEMWASTE, respectively. We wrote two new modules for the other two compartments in the form of two separate Fortran subroutines -- BREACH and LEACH. They were incorporated into a modified version of the transport code FEMWASTE. The resultant code, which contains all three modules of container breaching, waste form leaching, and solute transport, was renamed BLT (for Breach, Leach, and Transport). This paper summarizes the overall program structure and logistics, and presents two examples from the results of verification and sensitivity tests. 6 refs., 7 figs., 1 tab
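
    The breach-leach coupling described above can be pictured with a toy calculation (a schematic of the compartment split, not BLT itself; all rates are hypothetical):

```python
# Toy coupling of container breaching and waste-form leaching.
import numpy as np

dt, t_end = 1.0, 200.0          # years
breach_rate = 0.02              # fraction of containers failing per year
leach_rate = 0.05               # fraction of exposed inventory per year

intact, exposed, released = 1.0, 0.0, 0.0
for _ in np.arange(0.0, t_end, dt):
    failed = breach_rate * intact * dt    # BREACH: containers fail
    leached = leach_rate * exposed * dt   # LEACH: exposed waste dissolves
    intact -= failed
    exposed += failed - leached
    released += leached                   # source term passed to transport

print(f"released fraction after {t_end:.0f} yr: {released:.3f}")
```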

  1. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources

    International Nuclear Information System (INIS)

    Gaufridy de Dortan, F. de

    2006-01-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling, or even supra-configuration modelling, for mid-Z plasmas. But in some cases, configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow for a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on precise HULLAC energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated and the radiative transfer equation is solved for wide inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data. It is therefore possible to use complex hydrodynamic files even on personal computers in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts satellite lines and must be included in the description of these sources to enhance their efficiency. (author)

  2. Use of CITATION code for flux calculation in neutron activation analysis with voluminous sample using an Am-Be source

    International Nuclear Information System (INIS)

    Khelifi, R.; Idiri, Z.; Bode, P.

    2002-01-01

    The CITATION code, based on neutron diffusion theory, was used for flux calculations inside voluminous samples in prompt gamma activation analysis with an isotopic neutron source (Am-Be). The code uses specific parameters related to the source energy spectrum and the irradiation-system materials (shielding, reflector). The flux distribution (thermal and fast) was calculated in three-dimensional geometry for the system: air, polyethylene and a cuboidal water sample (50x50x50 cm). The thermal flux was calculated at a series of points inside the sample. The results agreed reasonably well with observed values. The maximum thermal flux was observed at a depth of 3.2 cm, while CITATION gave 3.7 cm. Beyond a depth of 7.2 cm, the thermal-to-fast flux ratio increases by up to a factor of two, which allows optimization of the detection-system position for in-situ PGAA.

  3. Recycling source terms for edge plasma fluid models and impact on convergence behaviour of the BRAAMS 'B2' code

    International Nuclear Information System (INIS)

    Maddison, G.P.; Reiter, D.

    1994-02-01

    Predictive simulations of tokamak edge plasmas require the most authentic description of neutral particle recycling sources, not merely the most expedient numerically. Employing a prototypical ITER divertor arrangement under conditions of high recycling, trial calculations with the 'B2' steady-state edge plasma transport code, plus varying approximations of recycling, reveal marked sensitivity of both the results and the convergence behaviour to the details of the sources incorporated. Comprehensive EIRENE Monte Carlo resolution of recycling is implemented by full and so-called 'shot' intermediate cycles between the plasma fluid and statistical neutral particle models. As generally for coupled differencing and stochastic procedures, though, overall convergence properties become more difficult to assess. A pragmatic criterion for the 'B2'/EIRENE code system is proposed to determine its success, proceeding from a stricter condition previously identified for one particular analytic approximation of recycling in 'B2'. Certain procedures that could potentially improve convergence further are also identified. (orig.)

  4. EchoSeed Model 6733 Iodine-125 brachytherapy source: Improved dosimetric characterization using the MCNP5 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S. [Center for Research in Medical Physics and Biomedical Engineering and Physics Unit, Radiotherapy Department, Shiraz University of Medical Sciences, Shiraz 71936-13311 (Iran, Islamic Republic of); Radiation Research Center and Medical Radiation Department, School of Engineering, Shiraz University, Shiraz 71936-13311 (Iran, Islamic Republic of); Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States)

    2012-08-15

    This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 125I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published 125I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water material in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGy h-1 U-1 (±1.73%) and 0.965 cGy h-1 U-1 (±1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within ±4%) than MCNP4c2 and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.

  5. Methodology and Toolset for Model Verification, Hardware/Software co-simulation, Performance Optimisation and Customisable Source-code generation

    DEFF Research Database (Denmark)

    Berger, Michael Stübert; Soler, José; Yu, Hao

    2013-01-01

    The MODUS project aims to provide a pragmatic and viable solution that will allow SMEs to substantially improve their positioning in the embedded-systems development market. The MODUS tool will provide a model verification and Hardware/Software co-simulation tool (TRIAL) and a performance...... optimisation and customisable source-code generation tool (TUNE). The concept is depicted in automated modelling and optimisation of embedded-systems development. The tool will enable model verification by guiding the selection of existing open-source model verification engines, based on the automated analysis...

  6. Study of the source term of radiation of the CDTN GE PETtrace 8 cyclotron with the MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Benavente C, J. A.; Lacerda, M. A. S.; Fonseca, T. C. F.; Da Silva, T. A. [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Vega C, H. R., E-mail: jhonnybenavente@gmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)

    2015-10-15

    The knowledge of the neutron spectra in a PET cyclotron is important for the optimization of radiation protection of the workers and members of the public. The main objective of this work is to study the source term of radiation of the GE PETtrace 8 cyclotron of the Development Center of Nuclear Technology (CDTN/CNEN) using computer simulation by the Monte Carlo method. The MCNPX version 2.7 code was used to calculate the flux of neutrons produced from the interaction of the primary proton beam with the target body and other cyclotron components during 18F production. The estimate of the source term and the corresponding radiation field was performed for the bombardment of an H2(18)O target with protons of 75 μA current and 16.5 MeV energy. The values of the simulated fluxes were compared with those reported by the accelerator manufacturer (GE Healthcare). Results showed that the fluxes estimated with the MCNPX code were about 70% lower than those reported by the manufacturer. The mean energies of the neutrons were also different from those reported by GE Healthcare. It is recommended to investigate other cross-section data and the use of the physical models of the code itself for a complete characterization of the source term of radiation. (Author)

  7. Supporting the Cybercrime Investigation Process: Effective Discrimination of Source Code Authors Based on Byte-Level Information

    Science.gov (United States)

    Frantzeskou, Georgia; Stamatatos, Efstathios; Gritzalis, Stefanos

    Source code authorship analysis is the particular field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity. This is usually based on other undisputed program samples from the same author. There are several cases where the application of such a method could be of a major benefit, such as tracing the source of code left in the system after a cyber attack, authorship disputes, proof of authorship in court, etc. In this paper, we present our approach which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure which is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the second with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language with no additional cost. The presented accuracy rates are much better than the best reported results for the same data sets.
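
    A byte-level n-gram profile with a simplified intersection-style similarity can be sketched as follows (in the spirit of the approach described above; the exact profile size, normalization, and sample programs here are illustrative assumptions):

```python
# Byte-level n-gram profiles and a simplified similarity measure.
import collections

def profile(source_bytes, n=3, top=1500):
    """Keep the `top` most frequent byte n-grams as the author profile."""
    grams = collections.Counter(
        source_bytes[i:i + n] for i in range(len(source_bytes) - n + 1))
    return {g for g, _ in grams.most_common(top)}

def similarity(p1, p2):
    """Shared n-grams over the larger profile size (illustrative measure)."""
    return len(p1 & p2) / max(len(p1), len(p2))

# Hypothetical code samples standing in for real training/disputed files
p_a = profile(b"public int add(int a, int b) { return a + b; }")
p_b = profile(b"public int sum(int x, int y) { return x + y; }")
print(similarity(p_a, p_b))
```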

  8. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnosis-related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors assume that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
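
    The flavor of such a hierarchical XML representation and its traversal can be shown in a few lines (the markup below is hypothetical and is not the authors' actual ICD-10 schema):

```python
# Tiny XML-encoded classification fragment and its traversal.
import xml.etree.ElementTree as ET

fragment = """
<chapter code="IX" title="Diseases of the circulatory system">
  <block code="I20-I25" title="Ischaemic heart diseases">
    <category code="I21" title="Acute myocardial infarction"/>
  </block>
</chapter>
"""

root = ET.fromstring(fragment)
for node in root.iter():
    print(node.tag, node.get("code"), "-", node.get("title"))
```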

  9. Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

    Full Text Available The minimum energy (ME coding combined with DS-CDMA wireless sensor network is analyzed in order to reduce energy consumed and multiple access interference (MAI with related to number of user(receiver. Also, the minimum energy coding which exploits redundant bits for saving power with utilizing RF link and On-Off-Keying modulation. The relations are presented and discussed for several levels of errors expected in the employed channel via amount of bit error rates and amount of the SNR for number of users (receivers.

  10. Numerical modeling of the Linac4 negative ion source extraction region by 3D PIC-MCC code ONIX

    CERN Document Server

    Mochalskyy, S; Minea, T; Lifschitz, AF; Schmitzer, C; Midttun, O; Steyaert, D

    2013-01-01

    At CERN, a high performance negative ion (NI) source is required for the 160 MeV H- linear accelerator Linac4. The source is planned to produce 80 mA of H- with an emittance of 0.25 mm·mrad (normalized RMS), which is technically and scientifically very challenging. The optimization of the NI source requires a deep understanding of the underlying physics concerning the production and extraction of the negative ions. The extraction mechanism from the negative ion source is complex, involving a magnetic filter in order to cool down the electron temperature. The ONIX (Orsay Negative Ion eXtraction) code is used to address this problem. ONIX is a self-consistent 3D electrostatic code using the Particle-in-Cell Monte Carlo Collisions (PIC-MCC) approach. It was written to handle the complex boundary conditions between the plasma, the source walls, and the beam formation at the extraction hole. Both the positive extraction potential (25 kV) and the magnetic field map are taken from the experimental set-up, in construction at CERN. This contrib...

  11. Active Fault Near-Source Zones Within and Bordering the State of California for the 1997 Uniform Building Code

    Science.gov (United States)

    Petersen, M.D.; Toppozada, Tousson R.; Cao, T.; Cramer, C.H.; Reichle, M.S.; Bryant, W.A.

    2000-01-01

    The fault sources in the Project 97 probabilistic seismic hazard maps for the state of California were used to construct maps for defining near-source seismic coefficients, Na and Nv, incorporated in the 1997 Uniform Building Code (ICBO 1997). The near-source factors are based on the distance from a known active fault that is classified as either Type A or Type B. To determine the near-source factor, four pieces of geologic information are required: (1) recognizing a fault and determining whether or not the fault has been active during the Holocene, (2) identifying the location of the fault at or beneath the ground surface, (3) estimating the slip rate of the fault, and (4) estimating the maximum earthquake magnitude for each fault segment. This paper describes the information used to produce the fault classifications and distances.
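    The near-source factors are tabulated against fault type and distance and interpolated between tabulated distances. The sketch below assumes commonly quoted values in the spirit of the 1997 UBC Tables 16-S and 16-T; treat them as illustrative placeholders and verify against the code itself before any engineering use:

```python
import numpy as np

# Illustrative near-source factor tables (distance in km, factor); the numbers
# follow values commonly quoted from UBC 1997 Tables 16-S/16-T but must be
# checked against the code itself -- they are assumptions here.
NA_TABLE = {"A": [(2, 1.5), (5, 1.2), (10, 1.0)],
            "B": [(2, 1.3), (5, 1.0), (10, 1.0)]}
NV_TABLE = {"A": [(2, 2.0), (5, 1.6), (10, 1.2), (15, 1.0)],
            "B": [(2, 1.6), (5, 1.2), (10, 1.0), (15, 1.0)]}

def near_source_factor(table: dict, fault_type: str, distance_km: float) -> float:
    """Linearly interpolate a near-source factor between tabulated distances;
    np.interp clamps to the end values outside the tabulated range."""
    d, f = zip(*table[fault_type])
    return float(np.interp(distance_km, d, f))

print(near_source_factor(NA_TABLE, "A", 3.0))  # Na for a Type A fault at 3 km
```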

  12. Thermal behavior of biflorin by means of TG and a DSC photovisual system

    Directory of Open Access Journals (Sweden)

    C. F. S. Aragão

    Full Text Available This work proposes the thermal characterization of biflorin, an ortho-quinone from Capraria biflora L., through TG and DSC photovisual data. The thermogravimetric results showed that the decomposition of biflorin occurs in three steps under an air atmosphere. The DSC curve of biflorin presented five peaks related to phase transitions. The DSC photovisual system demonstrated the changes in biflorin.

  13. Large-eddy simulation of convective boundary layer generated by highly heated source with open source code, OpenFOAM

    International Nuclear Information System (INIS)

    Hattori, Yasuo; Suto, Hitoshi; Eguchi, Yuzuru; Sano, Tadashi; Shirai, Koji; Ishihara, Shuji

    2011-01-01

    Spatial and temporal characteristics of turbulence structures in the close vicinity of a heat source, which is a horizontal upward-facing round plate heated to a high temperature, are examined by using well-resolved large-eddy simulations. The verification is carried out through comparison with experiments: the predicted statistics, including the PDF distribution of temperature fluctuations, agree well with measurements, indicating that the present simulations have the capability to appropriately reproduce turbulence structures near the heat source. The reproduced three-dimensional thermal and fluid fields in the close vicinity of the heat source reveal the development processes of coherent structures along the surface: stationary and streaky flow patterns appear near the edge, and such patterns randomly shift to cell-like patterns with incursion into the center region, resulting in thermal-plume meandering. Both patterns have very thin structures, but the depth of the streaky structures is considerably smaller than that of the cell-like patterns; this discrepancy causes the layered structures. These structures are the source of peculiar turbulence characteristics, the prediction of which is quite difficult with RANS-type turbulence models. The understanding of such structures obtained in the present study should help improve the turbulence models used in nuclear engineering. (author)

  14. Limiting precision in differential equation solvers. II Sources of trouble and starting a code

    International Nuclear Information System (INIS)

    Shampine, L.F.

    1978-01-01

    The reasons a class of codes for solving ordinary differential equations might want to use an extremely small step size are investigated. For this class, the likelihood of precision difficulties is evaluated and remedies are examined. The investigation suggests a way of automatically selecting an initial step size which should be reliably on scale
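    One widely used heuristic for automatically selecting an initial step size that is "on scale", in the spirit described above; this sketch follows the classic Hairer-Wanner starting-step algorithm rather than Shampine's exact procedure:

```python
import numpy as np

def initial_step(f, t0, y0, order=4, rtol=1e-6, atol=1e-9):
    """Pick a starting step size by comparing the size of the solution with
    the size of its derivative, so the first Euler step changes y only slightly."""
    scale = atol + rtol * np.abs(y0)
    f0 = f(t0, y0)
    d0 = np.linalg.norm(y0 / scale)
    d1 = np.linalg.norm(f0 / scale)
    h0 = 1e-6 if (d0 < 1e-5 or d1 < 1e-5) else 0.01 * d0 / d1
    # Refine with an estimate of the second derivative from one Euler step.
    y1 = y0 + h0 * f0
    d2 = np.linalg.norm((f(t0 + h0, y1) - f0) / scale) / h0
    if max(d1, d2) > 1e-15:
        h1 = (0.01 / max(d1, d2)) ** (1.0 / (order + 1))
    else:
        h1 = max(1e-6, h0 * 1e-3)
    return min(100 * h0, h1)

# Example: harmonic oscillator y'' = -y written as a first-order system.
f = lambda t, y: np.array([y[1], -y[0]])
print(initial_step(f, 0.0, np.array([1.0, 0.0])))
```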

  15. Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code

    Science.gov (United States)

    Taherkhani, Ahmad; Malmi, Lauri

    2013-01-01

    In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concept of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…

  16. SPIDERMAN: an open-source code to model phase curves and secondary eclipses

    Science.gov (United States)

    Louden, Tom; Kreidberg, Laura

    2018-03-01

    We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly the area of sections of the disc of the planet that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimised to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for the dataset. As a test case we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model we find that the best fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.

  17. SPIDERMAN: an open-source code to model phase curves and secondary eclipses

    Science.gov (United States)

    Louden, Tom; Kreidberg, Laura

    2018-06-01

    We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly the area of sections of the disc of the planet that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimized to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for the data set. As a test case, we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two-dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model, we find that the best-fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.

  18. Reticulation of Aqueous Polyurethane Systems Controlled by DSC Method

    Directory of Open Access Journals (Sweden)

    Jakov Stamenkovic

    2006-06-01

    Full Text Available The DSC method has been employed to monitor the kinetics of reticulation of aqueous polyurethane systems without catalysts, with the commercial zirconium catalyst (CAT®XC-6212), and with a highly selective manganese catalyst, the complex Mn(III)-diacetylacetone maleinate (MAM). Among the polyol components, acrylic emulsions were used for reticulation in this research, and as suitable reticulation agents water-emulsible aliphatic polyisocyanates based on hexamethylene diisocyanate with different contents of NCO groups were employed. On the basis of the DSC analysis, applying the methods of Kissinger, Freeman-Carroll and Crane-Ellerstein, the pseudo-kinetic parameters of the reticulation reaction of the aqueous systems were determined. The temperature of the examination ranged from 50 °C to 450 °C with a heating rate of 0.5 °C/min. The reduction of the activation energy and the increase of the standard deviation indicate the catalytic action of the selective zirconium and manganese catalysts. The impact of the catalysts on the reduction of the activation energy is strongest for the manganese catalyst according to all three of the aforesaid methods, which also show the smallest mutual deviations in the kinetic parameters for that catalyst.
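    Of the three methods named above, the Kissinger method is the easiest to sketch: the apparent activation energy follows from a linear fit of ln(β/Tp²) against 1/Tp over DSC runs at several heating rates β. The peak temperatures below are hypothetical placeholders, not data from the paper:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical peak temperatures of the reticulation exotherm at several
# heating rates; real values would come from the DSC runs.
beta = np.array([2.0, 5.0, 10.0, 20.0])      # heating rates, K/min
Tp = np.array([420.0, 432.0, 441.0, 451.0])  # exotherm peak temperatures, K

# Kissinger: ln(beta / Tp^2) = const - Ea / (R * Tp), so the slope of
# ln(beta/Tp^2) vs 1/Tp gives -Ea/R.
slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea = -slope * R
print(f"apparent activation energy: {Ea / 1000:.1f} kJ/mol")
```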

  19. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    Science.gov (United States)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
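    A minimal sketch of the double-difference idea for the two-source case described above: a cross-delta between the two data sets followed by an adjacent-delta, plus the inverse post-decoding step (the subsequent entropy or lossy coding of the result is omitted):

```python
import numpy as np

def adjacent_delta(x):
    """First differences within one data set (first member kept as reference)."""
    return np.concatenate(([x[0]], np.diff(x)))

def cross_delta(a, b):
    """Member-by-member differences between two data sets of equal length M."""
    return b - a

def double_difference(a, b):
    """Adjacent-delta performed on the cross-delta data set."""
    return adjacent_delta(cross_delta(a, b))

def recover_b(a, dd):
    """Inverse post-decoding: rebuild the second data set from `a` and the
    double-difference set (a cumulative sum undoes the adjacent-delta)."""
    return a + np.cumsum(dd)

a = np.array([10, 12, 15, 15, 14])   # e.g. one spectral band
b = np.array([11, 14, 18, 17, 15])   # adjacent band (or time-shifted set)
dd = double_difference(a, b)
assert np.array_equal(recover_b(a, dd), b)  # lossless round trip
```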

  20. Pre-Test Analysis of the MEGAPIE Spallation Source Target Cooling Loop Using the TRAC/AAA Code

    International Nuclear Information System (INIS)

    Bubelis, Evaldas; Coddington, Paul; Leung, Waihung

    2006-01-01

    A pilot project is being undertaken at the Paul Scherrer Institute in Switzerland to test the feasibility of installing a Lead-Bismuth Eutectic (LBE) spallation target in the SINQ facility. Efforts are coordinated under the MEGAPIE project, the main objectives of which are to design, build, operate and decommission a 1 MW spallation neutron source. The technology and experience of building and operating a high power spallation target are of general interest in the design of an Accelerator Driven System (ADS) and in this context MEGAPIE is one of the key experiments. The target cooling is one of the important aspects of the target system design that needs to be studied in detail. Calculations were performed previously using the RELAP5/Mod 3.2.2 and ATHLET codes, but in order to verify the previous code results and to provide another capability to model LBE systems, a similar study of the MEGAPIE target cooling system has been conducted with the TRAC/AAA code. In this paper a comparison is presented for the steady-state results obtained using the above codes. Analysis of transients, such as unregulated cooling of the target, loss of heat sink, the main electro-magnetic pump trip of the LBE loop and unprotected proton beam trip, were studied with TRAC/AAA and compared to those obtained earlier using RELAP5/Mod 3.2.2. This work extends the existing validation data-base of TRAC/AAA to heavy liquid metal systems and comprises the first part of the TRAC/AAA code validation study for LBE systems based on data from the MEGAPIE test facility and corresponding inter-code comparisons. (authors)

  1. Radiation Shielding Information Center: a source of computer codes and data for fusion neutronics studies

    International Nuclear Information System (INIS)

    McGill, B.L.; Roussin, R.W.; Trubey, D.K.; Maskewitz, B.F.

    1980-01-01

    The Radiation Shielding Information Center (RSIC), established in 1962 to collect, package, analyze, and disseminate information, computer codes, and data in the area of radiation transport related to fission, is now being utilized to support fusion neutronics technology. The major activities include: (1) answering technical inquiries on radiation transport problems, (2) collecting, packaging, testing, and disseminating computing technology and data libraries, and (3) reviewing literature and operating a computer-based information retrieval system containing material pertinent to radiation transport analysis. The computer codes emphasize methods for solving the Boltzmann equation such as the discrete ordinates and Monte Carlo techniques, both of which are widely used in fusion neutronics. The data packages include multigroup coupled neutron-gamma-ray cross sections and kerma coefficients, other nuclear data, and radiation transport benchmark problem results

  2. kspectrum: an open-source code for high-resolution molecular absorption spectra production

    International Nuclear Information System (INIS)

    Eymet, V.; Coustet, C.; Piaud, B.

    2016-01-01

    We present kspectrum, a scientific code that produces high-resolution synthetic absorption spectra from public molecular transition parameter databases. This code was originally required by the atmospheric and astrophysics communities, and its evolution is now driven by new scientific projects among the user community. Since it was designed without any optimization specific to any particular application field, its use could also be extended to other domains. kspectrum produces spectral data that can subsequently be used either for high-resolution radiative transfer simulations, or for producing statistical spectral model parameters using additional tools. This is an open project that aims at providing an up-to-date tool that takes advantage of modern computational hardware and recent parallelization libraries. It is currently provided by Méso-Star (http://www.meso-star.com) under the CeCILL license, and benefits from regular updates and improvements. (paper)

  3. Overriding "doing wrong" and "not doing right": validation of the Dispositional Self-Control Scale (DSC).

    Science.gov (United States)

    Ein-Gar, Danit; Sagiv, Lilach

    2014-01-01

    We present the Dispositional Self-Control (DSC) Scale, which reflects individuals' tendency to override 2 types of temptations, termed doing wrong and not doing right. We report a series of 5 studies designed to test the reliability and validity of the scale. As hypothesized, high DSC predicts distant future orientation and low DSC predicts deviant behaviors such as aggression, alcohol misuse, and aberrant driving. DSC also predicts task performance among resource-depleted participants. Taken together, these findings suggest that the DSC Scale could be a useful tool toward further understanding the role of personality in overcoming self-control challenges.

  4. Four energy group neutron flux distribution in the Syrian miniature neutron source reactor using the WIMSD4 and CITATION code

    International Nuclear Information System (INIS)

    Khattab, K.; Omar, H.; Ghazi, N.

    2009-01-01

    A 3-D (R, θ, Z) neutronic model for the Miniature Neutron Source Reactor (MNSR) was developed earlier to conduct the reactor neutronic analysis. The group constants for all the reactor components were generated using the WIMSD4 code. The reactor excess reactivity and the four-group neutron flux distributions were calculated using the CITATION code. This model is used in this paper to calculate the pointwise four-energy-group neutron flux distributions in the MNSR versus the radius, angle and reactor axial directions. Good agreement is noticed between the measured and the calculated thermal neutron flux in the inner and outer irradiation sites, with relative differences of less than 7% and 5%, respectively. (author)

  5. A DSC analysis of inverse salt-pair explosive composition

    Energy Technology Data Exchange (ETDEWEB)

    Babu, E. Suresh; Kaur, Sukhminder [Central Forensic Science Laboratory, Explosives Division, Ramanthapur, Hyderabad 500013 (India)

    2004-02-01

    Alkali nitrates are used as an ingredient in low explosive compositions and pyrotechnics. It has been suggested that alkali nitrates can form inverse salt-pair explosives with the addition of ammonium chloride. Therefore, the thermal behavior of low explosive compositions containing potassium nitrate mixed with ammonium chloride has been studied using Differential Scanning Calorimetry (DSC). Results provide information about the ion exchange reaction between these two chemical substances and the temperature region at which the formation of a cloud of salt particles of potassium chloride takes place. Furthermore, the addition of ammonium chloride quenches the flame of deflagrating compositions and causes the mixture to undergo explosive decomposition at relatively low temperatures. (Abstract Copyright [2004], Wiley Periodicals, Inc.)

  6. Investigation of Anisotropy Caused by Cylinder Applicator on Dose Distribution around Cs-137 Brachytherapy Source using MCNP4C Code

    Directory of Open Access Journals (Sweden)

    Sedigheh Sina

    2011-06-01

    Full Text Available Introduction: Brachytherapy is a type of radiotherapy in which radioactive sources are used in the proximity of tumors, normally for the treatment of malignancies in the head, prostate and cervix. Materials and Methods: The Cs-137 Selectron source is a low-dose-rate (LDR) brachytherapy source used in a remote afterloading system for the treatment of different cancers. This system uses active and inactive spherical sources of 2.5 mm diameter, which can be arranged in different configurations inside the applicator to obtain different dose distributions. In this study, the dose distribution at different distances from the source was first obtained around a single pellet inside the applicator in a water phantom using the MCNP4C Monte Carlo code. The simulations were then repeated for six active pellets in the applicator and for six point sources. Results: The anisotropy of the dose distribution due to the presence of the applicator was obtained by dividing the dose at each distance and angle by the dose at the same distance at an angle of 90 degrees. According to the results, the dose decreases towards the applicator tips. For example, for points at distances of 5 and 7 cm from the source and an angle of 165 degrees, the discrepancies reached 5.8% and 5.1%, respectively. By increasing the number of pellets to six, these values reached 30% for the angle of 5 degrees. Discussion and Conclusion: The results indicate that the presence of the applicator causes a significant dose decrease at the tip of the applicator compared with the dose in the transverse plane. However, treatment planning systems consider an isotropic dose distribution around the source, and this causes significant errors in treatment planning, which are not negligible, especially for a large number of sources inside the applicator.
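    The anisotropy measure used above is simply the dose at each angle divided by the dose at 90 degrees for the same distance; a sketch of that normalization, with hypothetical dose values standing in for the Monte Carlo tallies:

```python
import numpy as np

def anisotropy(dose, angles_deg):
    """Normalize a dose-vs-angle profile at fixed distance by the value
    closest to 90 degrees, mirroring the ratio used in the paper."""
    dose = np.asarray(dose, dtype=float)
    angles = np.asarray(angles_deg, dtype=float)
    ref = dose[np.argmin(np.abs(angles - 90.0))]
    return dose / ref

# Hypothetical doses at a fixed radius (arbitrary units), for illustration only:
angles = [5, 30, 60, 90, 120, 150, 165]
doses = [0.70, 0.90, 0.98, 1.02, 0.99, 0.93, 0.96]
print(anisotropy(doses, angles))  # values < 1 indicate dose fall-off toward the tips
```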

  7. Developing open-source codes for electromagnetic geophysics using industry support

    Science.gov (United States)

    Key, K.

    2017-12-01

    Funding for open-source software development in academia often takes the form of grants and fellowships awarded by government bodies and foundations where there is no conflict-of-interest between the funding entity and the free dissemination of the open-source software products. Conversely, funding for open-source projects in the geophysics industry presents challenges to conventional business models where proprietary licensing offers value that is not present in open-source software. Such proprietary constraints make it easier to convince companies to fund academic software development under exclusive software distribution agreements. A major challenge for obtaining commercial funding for open-source projects is to offer a value proposition that overcomes the criticism that such funding is a give-away to the competition. This work draws upon a decade of experience developing open-source electromagnetic geophysics software for the oil, gas and minerals exploration industry, and examines various approaches that have been effective for sustaining industry sponsorship.

  8. Calculation of the effective dose from natural radioactivity sources in soil using MCNP code

    International Nuclear Information System (INIS)

    Krstic, D.; Nikezic, D.

    2008-01-01

    Full text: The effective dose delivered by photons emitted from natural radioactivity in soil was calculated in this report. Calculations were done for the most common natural radionuclides in soil, i.e. the 238U and 232Th series and 40K. An ORNL age-dependent phantom and the Monte Carlo transport code MCNP-4B were employed to calculate the energy deposited in all organs of the phantom. The effective dose was calculated according to ICRP 74 recommendations. Conversion coefficients of effective dose per air kerma were determined. The results obtained here were compared with those of other authors
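    The effective dose itself is the tissue-weighted sum E = Σ_T w_T·H_T over organ equivalent doses taken from the phantom tallies. The weighting factors below are a small illustrative subset of the ICRP Publication 60 set used alongside ICRP 74, not the full list; treat them as placeholders:

```python
# Illustrative subset of ICRP 60 tissue weighting factors (the full set has
# more tissues and sums to 1.0); organ equivalent doses H_T are in Sv.
W_T = {"gonads": 0.20, "lung": 0.12, "stomach": 0.12, "colon": 0.12,
       "red_bone_marrow": 0.12, "thyroid": 0.05, "skin": 0.01}

def effective_dose(organ_doses: dict) -> float:
    """Weighted sum of organ equivalent doses over the tissues provided."""
    return sum(W_T[tissue] * h for tissue, h in organ_doses.items())

# Hypothetical per-organ equivalent doses from the Monte Carlo tallies:
print(effective_dose({"lung": 1.0e-9, "stomach": 0.8e-9, "colon": 0.9e-9}))
```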

  9. In-vessel source term analysis code TRACER version 2.3. User's manual

    International Nuclear Information System (INIS)

    Toyohara, Daisuke; Ohno, Shuji; Hamada, Hirotsugu; Miyahara, Shinya

    2005-01-01

    A computer code TRACER (Transport Phenomena of Radionuclides for Accident Consequence Evaluation of Reactor) version 2.3 has been developed to evaluate the species and quantities of fission products (FPs) released into the cover gas during a fuel pin failure accident in an LMFBR. The TRACER version 2.3 includes the new or modified models shown below. a) Booth model: a new model for FP release from fuel. b) Modified model for FP transfer from fuel to bubbles or sodium coolant. c) Modified model for bubble dynamics in the coolant. The computational models, input data and output data of TRACER version 2.3 are described in this user's manual. (author)

  10. SMILEI: A collaborative, open-source, multi-purpose PIC code for the next generation of super-computers

    Science.gov (United States)

    Grech, Mickael; Derouillat, J.; Beck, A.; Chiaramello, M.; Grassi, A.; Niel, F.; Perez, F.; Vinci, T.; Fle, M.; Aunai, N.; Dargent, J.; Plotnikov, I.; Bouchard, G.; Savoini, P.; Riconda, C.

    2016-10-01

    Over the last decades, Particle-In-Cell (PIC) codes have been central tools for plasma simulations. Today, new trends in High-Performance Computing (HPC) are emerging, dramatically changing HPC-relevant software design and putting some - if not most - legacy codes far below the level of performance expected on the new and future massively parallel supercomputers. SMILEI is a new open-source PIC code co-developed by plasma physicists and HPC specialists, and applied to a wide range of physics studies: from laser-plasma interaction to astrophysical plasmas. It benefits from an innovative parallelization strategy that relies on a super-domain decomposition allowing for enhanced cache use and efficient dynamic load balancing. Beyond these HPC-related developments, SMILEI also benefits from additional physics modules allowing it to deal with binary collisions, field and collisional ionization, and radiation back-reaction. This poster presents the SMILEI project, its HPC capabilities, and illustrates some of the physics problems tackled with SMILEI.

  11. Simulation of droplet impact onto a deep pool for large Froude numbers in different open-source codes

    Science.gov (United States)

    Korchagova, V. N.; Kraposhin, M. V.; Marchevsky, I. K.; Smirnova, E. V.

    2017-11-01

    A droplet impact on a deep pool can induce macro-scale or micro-scale effects such as a crown splash, a high-speed jet, and the formation of secondary droplets or thin liquid films. The outcome depends on the diameter and velocity of the droplet, the liquid properties, the effects of external forces, and other factors that a set of dimensionless criteria can account for. In the present research, we considered a droplet and a pool consisting of the same viscous incompressible liquid. We took surface tension into account but neglected gravity forces. We used two open-source codes (OpenFOAM and Gerris) for our computations. We review the possibility of using these codes for the simulation of processes in free-surface flows that may take place after a droplet impact on the pool. Both codes simulated several modes of droplet impact. We estimated the effect of the liquid properties through the Reynolds number and the Weber number. Numerical simulation enabled us to find the boundaries between different modes of droplet impact on a deep pool and to plot the corresponding mode maps. The ratio of the liquid density to that of the surrounding gas induces several changes in the mode maps: increasing this density ratio suppresses the crown splash.
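    For a droplet and pool of the same liquid, the governing dimensionless criteria reduce to the Reynolds, Weber and Froude numbers; a sketch for locating a given impact on such mode maps (gravity was neglected above, i.e. the large-Froude-number limit, so Fr here mainly serves to check that assumption):

```python
import math

def impact_numbers(diameter, velocity, rho, mu, sigma, g=9.81):
    """Dimensionless criteria governing the droplet-impact mode."""
    Re = rho * velocity * diameter / mu        # inertia vs viscosity
    We = rho * velocity**2 * diameter / sigma  # inertia vs surface tension
    Fr = velocity / math.sqrt(g * diameter)    # inertia vs gravity
    return Re, We, Fr

# Example: a 2 mm water droplet hitting the pool at 3 m/s.
print(impact_numbers(2e-3, 3.0, rho=998.0, mu=1.0e-3, sigma=0.0728))
```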

  12. Bug-Fixing and Code-Writing: The Private Provision of Open Source Software

    DEFF Research Database (Denmark)

    Bitzer, Jürgen; Schröder, Philipp

    2002-01-01

    Open source software (OSS) is a public good. A self-interested individual would consider providing such software, if the benefits he gained from having it justified the cost of programming. Nevertheless each agent is tempted to free ride and wait for others to develop the software instead...

  13. SETMDC: Preprocessor for CHECKR, FIZCON, INTER, etc. ENDF Utility source codes

    International Nuclear Information System (INIS)

    Dunford, Charles L.

    2002-01-01

    Description of program or function: SETMDC-6.13 is a utility program that converts the source decks of the following set of programs to different computers: CHECKR-6.13; FIZCON-6.13; GETMAT-6.13; INTER-6.13; LISTEF-6; PLOTEF-6; PSYCHE-6; STANEF-6.13

  14. ON CODE REFACTORING OF THE DIALOG SUBSYSTEM OF CDSS PLATFORM FOR THE OPEN-SOURCE MIS OPENMRS

    Directory of Open Access Journals (Sweden)

    A. V. Semenets

    2016-08-01

    The developer tools and software API of the open-source MIS OpenMRS are reviewed. The results of refactoring the code of the dialog subsystem of the CDSS platform, which is implemented as a module for the open-source MIS OpenMRS, are presented. The structure of the information model of the database of the CDSS dialog subsystem was updated in accordance with the MIS OpenMRS requirements. The Model-View-Controller (MVC) based approach to the CDSS dialog subsystem architecture was re-implemented in the Java programming language using the Spring and Hibernate frameworks. The MIS OpenMRS Encounter portlet form for CDSS dialog subsystem integration was developed as an extension. The administrative module of the CDSS platform was recreated. The data exchange formats and methods for interaction between the OpenMRS CDSS dialog subsystem module and the DecisionTree GAE service were re-implemented with the help of AJAX technology via the jQuery library

  15. An alternative technique for simulating volumetric cylindrical sources in the Morse code utilization

    International Nuclear Information System (INIS)

    Vieira, W.J.; Mendonca, A.G.

    1985-01-01

    In the solution of deep-penetration problems using the Monte Carlo method, calculation techniques and strategies are used in order to increase the particle population in the regions of interest. A common procedure is the coupling of two-dimensional calculations, with (r,z) discrete ordinates results transformed into source data, and three-dimensional Monte Carlo calculations. An alternative technique for this procedure is presented. This alternative proved effective when applied to a sample problem. (F.E.) [pt
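    For context, the baseline approach that such coupling techniques replace is to start source particles by sampling directly and uniformly inside the cylindrical volume. The sketch below shows that standard sampling (it is not the paper's coupling technique):

```python
import numpy as np

def sample_cylinder(n, radius, height, rng=None):
    """Sample n source points uniformly inside a cylinder.

    The square root on the radial coordinate compensates for the fact that
    the area of an annulus grows linearly with r; without it the points
    would cluster near the axis.
    """
    rng = np.random.default_rng() if rng is None else rng
    r = radius * np.sqrt(rng.random(n))
    theta = 2.0 * np.pi * rng.random(n)
    z = height * rng.random(n)
    return np.column_stack((r * np.cos(theta), r * np.sin(theta), z))

points = sample_cylinder(10000, radius=1.0, height=2.0)
print(points.mean(axis=0))  # x and y means near zero, z mean near 1.0
```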

  16. Design and long-term monitoring of DSC/CIGS tandem solar module

    International Nuclear Information System (INIS)

    Vildanova, M F; Nikolskaia, A B; Kozlov, S S; Shevaleevskiy, O I

    2015-01-01

    This paper describes the design and development of tandem dye-sensitized/Cu(In,Ga)Se₂ (DSC/CIGS) PV modules. The tandem PV module comprised a top DSC module and a bottom commercial 0.8 m² CIGS module. The top DSC module was made of 10 DSC mini-modules, each with a field size of 20 × 20 cm². The tandem DSC/CIGS PV modules were used to provide long-term monitoring of the energy yield and electrical parameters in comparison with standalone CIGS modules under outdoor conditions. The outdoor test facility, containing solar modules of both types and a measurement unit, was located on the roof of the Institute of Biochemical Physics in Moscow. The data obtained during monitoring over the year 2014 have shown the advantages of the designed tandem DSC/CIGS PV modules over conventional CIGS modules, especially under cloudy weather and low-intensity irradiation conditions. (paper)

  17. Advanced Neutron Source Dynamic Model (ANSDM) code description and user guide

    International Nuclear Information System (INIS)

    March-Leuba, J.

    1995-08-01

    A mathematical model is designed that simulates the dynamic behavior of the Advanced Neutron Source (ANS) reactor. Its main objective is to model important characteristics of the ANS systems as they are being designed, updated, and employed; its primary design goal, to aid in the development of safety and control features. During the simulations the model is also found to aid in making design decisions for thermal-hydraulic systems. Model components, empirical correlations, and model parameters are discussed; sample procedures are also given. Modifications are cited, and significant development and application efforts are noted focusing on examination of instrumentation required during and after accidents to ensure adequate monitoring during transient conditions

  18. Basic design of the HANARO cold neutron source using MCNP code

    International Nuclear Information System (INIS)

    Yu, Yeong Jin; Lee, Kye Hong; Kim, Young Jin; Hwang, Dong Gil

    2005-01-01

    The design of the Cold Neutron Source (CNS) for the HANARO research reactor is in progress. The CNS produces neutrons in the low energy range below 5 meV using liquid hydrogen at around 21.6 K as the moderator. The primary goal of the CNS design is to maximize the cold neutron flux at wavelengths of around 2-12 Å and to minimize the nuclear heat load. In this paper, the basic design of the HANARO CNS is described

  19. Personalized reminiscence therapy M-health application for patients living with dementia: Innovating using open source code repository.

    Science.gov (United States)

    Zhang, Melvyn W B; Ho, Roger C M

    2017-01-01

    Dementia is known to be an illness which brings forth marked disability amongst elderly individuals. At times, patients living with dementia also experience non-cognitive symptoms, including hallucinations, delusional beliefs, emotional lability, sexualized behaviours and aggression. According to the National Institute of Clinical Excellence (NICE) guidelines, non-pharmacological techniques are typically the first-line option prior to consideration of adjuvant pharmacological options. Reminiscence and music therapy are thus viable options. Lazar et al. [3] previously performed a systematic review of the utilization of technology to deliver reminiscence-based therapy to individuals living with dementia and highlighted that technology does have benefits in the delivery of reminiscence therapy. However, to date, there has been a paucity of M-health innovations in this area. In addition, most current innovations are not personalized for each person living with dementia. Prior research has highlighted the utility of open-source repositories in bioinformatics studies. The authors hope to explain how they tapped into and made use of an open-source code repository in the development of a personalized M-health reminiscence-therapy innovation for patients living with dementia. The availability of open-source code repositories has changed the way healthcare professionals and developers develop smartphone applications today. Conventionally, a long iterative process is needed in the development of a native application, mainly because of the need for native programming and coding, especially if the application needs interactive features or features that can be personalized. Such repositories enable rapid and cost-effective development of applications. Moreover, developers are also able to innovate further, as less time is spent in the iterative process.

  20. Nucleation in As2Se3 glass studied by DSC

    International Nuclear Information System (INIS)

    Svoboda, Roman; Málek, Jiří

    2014-01-01

    Highlights: • Nucleation behavior of As2Se3 glass was studied by DSC in dependence on particle size. • The correlation between the enthalpies of fusion and crystallization was confirmed. • Apart from classical heterogeneous nucleation, a second nucleation mechanism was found. • Rapid formation of crystallization centers from a damaged glassy structure occurs. • Mechanical defects seem to partially suppress the CNT nucleation process. - Abstract: Differential scanning calorimetry was used to study nucleation behavior in As2Se3 glass in dependence on particle size. The nucleation process was examined for a series of different coarse powders; the nucleation rate was estimated from the proportion of the crystalline material fraction. The enthalpy of fusion was utilized in this respect, and a correlation between ΔH_m and ΔH_c was confirmed. Two mechanisms of nucleus formation were found: classical heterogeneous nucleation (following CNT) and the so-called "activation" of mechanically induced defects. The latter appears to represent the rapid formation of crystallization centers from a damaged glassy structure, where complete saturation occurs for fine powders in the range of 195–235 °C. A high amount of mechanical defects, on the other hand, was found to partially suppress the CNT nucleation process

  1. Thermal analysis and safety information for metal nanopowders by DSC

    Energy Technology Data Exchange (ETDEWEB)

    Tseng, J.M.; Huang, S.T. [Institute of Safety and Disaster Prevention Technology, Central Taiwan University of Science and Technology, 666, Buzih Road, Beitun District, Taichung 40601, Taiwan, ROC (China); Duh, Y.S.; Hsieh, T.Y.; Sun, Y.Y. [Department of Safety Health and Environmental Engineering, National United University, Miaoli, Taiwan, ROC (China); Lin, J.Z. [Institute of Safety and Disaster Prevention Technology, Central Taiwan University of Science and Technology, 666, Buzih Road, Beitun District, Taichung 40601, Taiwan, ROC (China); Wu, H.C. [Institute of Occupational Safety and Health, Council of Labor Affairs, Taipei, Taiwan, ROC (China); Kao, C.S., E-mail: jcsk@nuu.edu.tw [Department of Safety Health and Environmental Engineering, National United University, Miaoli, Taiwan, ROC (China)

    2013-08-20

    Highlights: • Metal nanopowders are common and frequently employed in industry. • Experimental T_o values for nano iron powder were 140–150 °C. • Safety information can benefit the relevant metal powder industries. - Abstract: Metal nanopowders are common and frequently employed in industry. Iron is mostly applied in high-performance magnetic materials and in the treatment of pollutants in groundwater. Zinc is widely used in brass, bronze, die casting metal, alloys, rubber, paints, etc. Nonetheless, some disasters induced by metal powders are due to the lack of related safety information. In this study, we applied differential scanning calorimetry (DSC) and used thermal analysis software to evaluate the relevant thermal safety information, such as the exothermic onset temperature (T_o), the peak temperature (T_p), and the heat of reaction (ΔH). The experimental T_o values for nano iron powder were 140–150 °C, 148–158 °C, and 141–149 °C for 15 nm, 35 nm, and 65 nm, respectively. The ΔH was larger than 3900 J/g, 5000 J/g, and 3900 J/g for 15 nm, 35 nm, and 65 nm, respectively. This safety information can benefit the relevant metal powder industries by helping to prevent accidents.

  2. Self characterization of a coded aperture array for neutron source imaging

    Energy Technology Data Exchange (ETDEWEB)

    Volegov, P. L., E-mail: volegov@lanl.gov; Danly, C. R.; Guler, N.; Merrill, F. E.; Wilde, C. H. [Los Alamos National Laboratory, Los Alamos, New Mexico 87544 (United States); Fittinghoff, D. N. [Livermore National Laboratory, Livermore, California 94550 (United States)

    2014-12-15

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (∼100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  3. Delaunay Tetrahedralization of the Heart Based on Integration of Open Source Codes

    International Nuclear Information System (INIS)

    Pavarino, E; Neves, L A; Machado, J M; Momente, J C; Zafalon, G F D; Pinto, A R; Valêncio, C R; Godoy, M F de; Shiyou, Y; Nascimento, M Z do

    2014-01-01

    The Finite Element Method (FEM) is a numerical solution method applied in different areas, such as simulations used in studies to improve cardiac ablation procedures. For this purpose, the meshes should have the same size and histological features as the structures of interest. Some methods and tools used to generate tetrahedral meshes are limited mainly by their conditions of use. In this paper, the integration of open-source software is presented as an alternative for solid modeling and automatic mesh generation. To demonstrate its efficiency, cardiac structures were considered as a first application context: atria, ventricles, valves, arteries and pericardium. The proposed method is feasible for obtaining refined meshes in an acceptable time and with the quality required for simulations using the FEM

  4. The Journey of a Source Line: How your Code is Translated into a Controlled Flow of Electrons

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    In this series we help you understand the bits and pieces that make your code command the underlying hardware. A multitude of layers translate and optimize source code, written in compiled and interpreted programming languages such as C++, Python or Java, to machine language. We explain the role and behavior of the layers in question in a typical usage scenario. While our main focus is on compilers and interpreters, we also talk about other facilities - such as the operating system, instruction sets and instruction decoders.   Biographie: Andrzej Nowak runs TIK Services, a technology and innovation consultancy based in Geneva, Switzerland. In the recent past, he co-founded and sold an award-winning Fintech start-up focused on peer-to-peer lending. Earlier, Andrzej worked at Intel and in the CERN openlab. At openlab, he managed a lab collaborating with Intel and was part of the Chief Technology Office, which set up next-generation technology projects for CERN and the openlab partne...

  5. The Journey of a Source Line: How your Code is Translated into a Controlled Flow of Electrons

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    In this series we help you understand the bits and pieces that make your code command the underlying hardware. A multitude of layers translate and optimize source code, written in compiled and interpreted programming languages such as C++, Python or Java, to machine language. We explain the role and behavior of the layers in question in a typical usage scenario. While our main focus is on compilers and interpreters, we also talk about other facilities - such as the operating system, instruction sets and instruction decoders. Biographie: Andrzej Nowak runs TIK Services, a technology and innovation consultancy based in Geneva, Switzerland. In the recent past, he co-founded and sold an award-winning Fintech start-up focused on peer-to-peer lending. Earlier, Andrzej worked at Intel and in the CERN openlab. At openlab, he managed a lab collaborating with Intel and was part of the Chief Technology Office, which set up next-generation technology projects for CERN and the openlab partners.

  6. Review of the status of validation of the computer codes used in the severe accident source term reassessment study (BMI-2104)

    International Nuclear Information System (INIS)

    Kress, T.S.

    1985-04-01

    The determination of severe accident source terms must, by necessity it seems, rely heavily on the use of complex computer codes. Source term acceptability, therefore, rests on the assessed validity of such codes. Consequently, one element of NRC's recent efforts to reassess LWR severe accident source terms is to provide a review of the status of validation of the computer codes used in the reassessment. The results of this review are the subject of this document. The separate review documents compiled in this report were used as a resource, along with the results of the BMI-2104 study by BCL and the QUEST study by SNL, to arrive at a more-or-less independent appraisal of the status of source term modeling at this time

  7. Investigation on caloric requirement of biomass pyrolysis using TG-DSC analyzer

    Energy Technology Data Exchange (ETDEWEB)

    He Fang [Institute of Utilization of Biomass, Shandong University of Technology, No. 12, Zhangzhou Road, Zibo, Shandong 255049 (China)]. E-mail: hf@sdut.edu.cn; Yi Weiming [Institute of Utilization of Biomass, Shandong University of Technology, No. 12, Zhangzhou Road, Zibo, Shandong 255049 (China); Bai Xueyuan [Institute of Utilization of Biomass, Shandong University of Technology, No. 12, Zhangzhou Road, Zibo, Shandong 255049 (China)

    2006-09-15

    The caloric requirement of biomass pyrolysis has an important influence on the course of the thermal conversion. However, precise data are difficult to obtain by current calculation methods because of the complexity of the process. A new method for obtaining the caloric requirement of the process by integrating differential scanning calorimetry (DSC) curves was proposed after the simultaneous thermal analyzer (TG-DSC) and DSC curves were investigated. Experiments were conducted for wheat straw, cotton stalk, pine and peanut shell on a Netzsch STA 449C analyzer. Powder samples were put into a platinum crucible with a lid on a high-accuracy DSC-cp sample holder in the furnace and then heated from ambient temperature up to a maximum temperature of 973 K at a heating rate of 10 K/min. The product gases were swept away by 25 ml/min nitrogen. Mass changes (TG) and calorimetric effects (DSC) were recorded and analyzed. The process was investigated in detail through comparison of the DTG (differential thermogravimetric) and DSC curves of wheat straw. After the influence of water on the DSC signal was eliminated, the relationship of the caloric requirement with temperature for the aforementioned dry biomass was obtained by integrating the DSC curve. The results showed that 523 kJ, 459 kJ, 646 kJ and 385 kJ were required, respectively, to raise the temperature of 1 kg of dried wheat straw, cotton stalk, pine and peanut shell from 303 K to 673 K.
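    The proposed integration reduces, in the simplest case, to dividing the integral of the (water-corrected) DSC signal over the temperature window by the heating rate. A sketch with a hypothetical signal, with units chosen so the result comes out in kJ/kg:

```python
import numpy as np

def caloric_requirement(T, dsc_mw_per_mg, beta_K_per_min, T_lo, T_hi):
    """Integrate a DSC curve to get the heat needed to take the dry sample
    from T_lo to T_hi.

    With the signal q(T) in mW/mg (= W/g) and heating rate beta, the heat
    per unit mass is Q = (1/beta) * integral of q(T) dT, i.e. kJ/kg here.
    """
    mask = (T >= T_lo) & (T <= T_hi)
    beta_K_per_s = beta_K_per_min / 60.0
    return np.trapz(dsc_mw_per_mg[mask], T[mask]) / beta_K_per_s

# Hypothetical smooth endothermic signal between 303 K and 673 K:
T = np.linspace(303.0, 673.0, 500)
q = 0.1 + 0.3 * (T - 303.0) / 370.0  # mW/mg, illustrative only
print(f"{caloric_requirement(T, q, beta_K_per_min=10.0, T_lo=303, T_hi=673):.0f} kJ/kg")
```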

  8. Implementation of DSC model and application for analysis of field pile tests under cyclic loading

    Science.gov (United States)

    Shao, Changming; Desai, Chandra S.

    2000-05-01

    The disturbed state concept (DSC) model, together with a new and simplified procedure for unloading and reloading behavior, is implemented in a nonlinear finite element procedure for the dynamic analysis of the coupled response of saturated porous materials. The DSC model is used to characterize the cyclic behavior of saturated clays and clay-steel interfaces. In the DSC, the relative intact (RI) behavior is characterized by using the hierarchical single surface (HISS) plasticity model, and the fully adjusted (FA) behavior is modeled by using the critical state concept. The DSC model is validated with respect to laboratory triaxial tests for clay and shear tests for clay-steel interfaces. The computer procedure is used to predict the field behavior of an instrumented pile subjected to cyclic loading. The predictions correlate very well with the field data. They also yield improved results compared to those from a HISS model with anisotropic hardening, partly because the DSC model allows for degradation or softening and interface response.

  9. Assessment of gamma irradiation heating and damage in miniature neutron source reactor vessel using computational methods and SRIM - TRIM code

    International Nuclear Information System (INIS)

    Appiah-Ofori, F. F.

    2014-07-01

    The effects of gamma radiation heating and irradiation damage in the reactor vessel of Ghana Research Reactor 1 (GHARR-1), a Miniature Neutron Source Reactor (MNSR), were assessed using an implicit control volume finite difference numerical computation and validated with the SRIM - TRIM code. It was assumed that 5.0 MeV gamma rays from the reactor core generate heat that is absorbed completely by the interior surface of the MNSR vessel, affecting its performance through induced displacement damage. This displacement damage results from the creation of lattice defects, which impair the vessel through the formation of point defect clusters such as vacancies and interstitials; these can grow into dislocation loops and networks, voids and bubbles, causing changes through the thickness of the vessel. The microscopic defects produced in the vessel by gamma radiation are referred to as radiation damage, while the resulting modifications of the macroscopic properties of the vessel are known as radiation effects. These radiation damage effects are of major concern for materials used in nuclear energy production. In this study, the overall objective was to assess the effects of gamma radiation heating and damage in the GHARR-1 MNSR vessel with a well-developed mathematical model and analytical and numerical solutions simulating the radiation damage in the vessel. The SRIM - TRIM code was used as a computational tool to determine the displacements per atom (dpa) associated with radiation damage, while the implicit control volume finite difference method was used to determine the temperature profile within the vessel due to gamma radiation heating. The methodology adopted in assessing gamma radiation heating in the vessel involved the development of the one-dimensional steady-state Fourier heat conduction equation with volumetric heat generation, solved both analytically and with the implicit control volume finite difference approach, to determine the maximum temperature and
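    A minimal sketch of the temperature side of this methodology: the one-dimensional steady-state Fourier conduction equation with volumetric heat generation, discretized with an implicit (tridiagonal) control-volume finite-difference scheme. The material and heating values below are illustrative placeholders, not GHARR-1 data:

```python
import numpy as np

def steady_temperature(k, q_vol, thickness, T_left, T_right, n=51):
    """Solve d/dx(k dT/dx) + q''' = 0 on a slab with fixed face temperatures.

    k: thermal conductivity (W/m K); q_vol: volumetric gamma heating (W/m^3).
    Interior nodes satisfy T[i-1] - 2*T[i] + T[i+1] = -q''' dx^2 / k,
    assembled into a (here dense, for brevity) tridiagonal system.
    """
    dx = thickness / (n - 1)
    A = np.zeros((n, n))
    b = np.full(n, -q_vol * dx**2 / k)
    A[0, 0] = A[-1, -1] = 1.0          # Dirichlet boundary conditions
    b[0], b[-1] = T_left, T_right
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    return np.linalg.solve(A, b)

T = steady_temperature(k=16.0, q_vol=5.0e5, thickness=0.01,
                       T_left=305.0, T_right=303.0)
print(f"maximum temperature: {T.max():.2f} K")
```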

  10. HELIOS: An Open-source, GPU-accelerated Radiative Transfer Code for Self-consistent Exoplanetary Atmospheres

    Science.gov (United States)

    Malik, Matej; Grosheintz, Luc; Mendonça, João M.; Grimm, Simon L.; Lavie, Baptiste; Kitzmann, Daniel; Tsai, Shang-Min; Burrows, Adam; Kreidberg, Laura; Bedell, Megan; Bean, Jacob L.; Stevenson, Kevin B.; Heng, Kevin

    2017-02-01

    We present the open-source radiative transfer code named HELIOS, which is constructed for studying exoplanetary atmospheres. In its initial version, the model atmospheres of HELIOS are one-dimensional and plane-parallel, and the equation of radiative transfer is solved in the two-stream approximation with nonisotropic scattering. A small set of the main infrared absorbers is employed, computed with the opacity calculator HELIOS-K and combined using a correlated-k approximation. The molecular abundances originate from validated analytical formulae for equilibrium chemistry. We compare HELIOS with the work of Miller-Ricci & Fortney using a model of GJ 1214b, and perform several tests, where we find: model atmospheres with single-temperature layers struggle to converge to radiative equilibrium; k-distribution tables constructed with ≳ 0.01 cm⁻¹ resolution in the opacity function (≲ 10³ points per wavenumber bin) may result in errors ≳ 1%-10% in the synthetic spectra; and a diffusivity factor of 2 approximates well the exact radiative transfer solution in the limit of pure absorption. We construct “null-hypothesis” models (chemical equilibrium, radiative equilibrium, and solar elemental abundances) for six hot Jupiters. We find that the dayside emission spectra of HD 189733b and WASP-43b are consistent with the null hypothesis, while the latter consistently underpredicts the observed fluxes of WASP-8b, WASP-12b, WASP-14b, and WASP-33b. We demonstrate that our results are somewhat insensitive to the choice of stellar models (blackbody, Kurucz, or PHOENIX) and metallicity, but are strongly affected by higher carbon-to-oxygen ratios. The code is publicly available as part of the Exoclimes Simulation Platform (exoclime.net).

  11. A new open-source code for spherically symmetric stellar collapse to neutron stars and black holes

    International Nuclear Information System (INIS)

    O'Connor, Evan; Ott, Christian D

    2010-01-01

    We present the new open-source spherically symmetric general-relativistic (GR) hydrodynamics code GR1D. It is based on the Eulerian formulation of GR hydrodynamics (GRHD) put forth by Romero-Ibanez-Gourgoulhon and employs radial-gauge, polar-slicing coordinates in which the 3+1 equations simplify substantially. We discretize the GRHD equations with a finite-volume scheme, employing piecewise-parabolic reconstruction and an approximate Riemann solver. GR1D is intended for the simulation of stellar collapse to neutron stars and black holes and will also serve as a testbed for modeling technology to be incorporated in multi-D GR codes. Its GRHD part is coupled to various finite-temperature microphysical equations of state in tabulated form that we make available with GR1D. An approximate deleptonization scheme for the collapse phase and a neutrino-leakage/heating scheme for the postbounce epoch are included and described. We also derive the equations for effective rotation in 1D and implement them in GR1D. We present an array of standard test calculations and also show how simple analytic equations of state in combination with presupernova models from stellar evolutionary calculations can be used to study qualitative aspects of black hole formation in failing rotating core-collapse supernovae. In addition, we present a simulation with microphysical equations of state and neutrino leakage/heating of a failing core-collapse supernova and black hole formation in a presupernova model of a 40 M☉ zero-age main-sequence star. We find good agreement on the time of black hole formation (within 20%) and last stable protoneutron star mass (within 10%) with predictions from simulations with full Boltzmann neutrino radiation hydrodynamics.

  12. A new open-source code for spherically symmetric stellar collapse to neutron stars and black holes

    Energy Technology Data Exchange (ETDEWEB)

    O'Connor, Evan; Ott, Christian D, E-mail: evanoc@tapir.caltech.ed, E-mail: cott@tapir.caltech.ed [TAPIR, Mail Code 350-17, California Institute of Technology, Pasadena, CA 91125 (United States)

    2010-06-07

    We present the new open-source spherically symmetric general-relativistic (GR) hydrodynamics code GR1D. It is based on the Eulerian formulation of GR hydrodynamics (GRHD) put forth by Romero-Ibanez-Gourgoulhon and employs radial-gauge, polar-slicing coordinates in which the 3+1 equations simplify substantially. We discretize the GRHD equations with a finite-volume scheme, employing piecewise-parabolic reconstruction and an approximate Riemann solver. GR1D is intended for the simulation of stellar collapse to neutron stars and black holes and will also serve as a testbed for modeling technology to be incorporated in multi-D GR codes. Its GRHD part is coupled to various finite-temperature microphysical equations of state in tabulated form that we make available with GR1D. An approximate deleptonization scheme for the collapse phase and a neutrino-leakage/heating scheme for the postbounce epoch are included and described. We also derive the equations for effective rotation in 1D and implement them in GR1D. We present an array of standard test calculations and also show how simple analytic equations of state in combination with presupernova models from stellar evolutionary calculations can be used to study qualitative aspects of black hole formation in failing rotating core-collapse supernovae. In addition, we present a simulation with microphysical equations of state and neutrino leakage/heating of a failing core-collapse supernova and black hole formation in a presupernova model of a 40 M☉ zero-age main-sequence star. We find good agreement on the time of black hole formation (within 20%) and last stable protoneutron star mass (within 10%) with predictions from simulations with full Boltzmann neutrino radiation hydrodynamics.

  13. Documentation of Source Code.

    Science.gov (United States)

    1988-05-12

    the "load IC" menu option. A prompt will appear in the typescript window requesting the name of the knowledge base to be loaded. Enter...highlighted and then a prompt will appear in the typescript window. The prompt will be requesting the name of the file containing the message to be read in...the file name, the system will begin reading in the message. The listified message is echoed back in the typescript window. After that, the screen

  14. Implementation of inter-unit analysis for C and C++ languages in a source-based static code analyzer

    Directory of Open Access Journals (Sweden)

    A. V. Sidorin

    2015-01-01

    Full Text Available The proliferation of automated testing capabilities gives rise to a need for thorough testing of large software systems, including system inter-component interfaces. The objective of this research is to build a method for inter-procedural inter-unit analysis which allows us to analyse large and complex software systems, including multi-architecture projects (like Android OS), and to support projects with complex build systems. Since the selected Clang Static Analyzer uses source code directly as input data, we need to develop a special technique to enable inter-unit analysis for such an analyzer. This problem is of a special nature because of C and C++ language features that assume and encourage the separate compilation of project files. We describe the build and analysis system that was implemented around the Clang Static Analyzer to enable inter-unit analysis, and consider problems related to the support of complex projects. We also consider the task of merging abstract syntax trees of translation units and its related problems, such as handling conflicting definitions and supporting complex build systems and complex projects, including multi-architecture projects, with examples. We consider both issues related to language design and human-related mistakes (which may be intentional). We describe some heuristics that were used in this work to make the merging process faster. The developed system was tested using Android OS as the input to show that it is applicable even to such complicated projects. The system does not depend on the inter-procedural analysis method and allows an arbitrary change of its algorithm.

  15. Validation of the coupling of mesh models to GEANT4 Monte Carlo code for simulation of internal sources of photons

    International Nuclear Information System (INIS)

    Caribe, Paulo Rauli Rafeson Vasconcelos; Cassola, Vagner Ferreira; Kramer, Richard; Khoury, Helen Jamil

    2013-01-01

    The use of three-dimensional models described by polygonal meshes in numerical dosimetry enables more accurate modeling of complex objects than the use of simple solids. The objectives of this work were to validate the coupling of mesh models to the Monte Carlo code GEANT4 and to evaluate the influence of the number of vertices in the simulations used to obtain absorbed fractions of energy (AFEs). Validation of the coupling was performed for internal photon sources with energies between 10 keV and 1 MeV, for spherical geometries described by GEANT4 and for three-dimensional models with different numbers of vertices and triangular or quadrilateral faces modeled using the Blender program. As a result, it was found that there were no significant differences between AFEs for objects described by mesh models and objects described using the solid volumes of GEANT4. Provided that the shape and volume are maintained, decreasing the number of vertices used to describe an object does not significantly influence the dosimetric data, but it significantly decreases the time required to perform the dosimetric calculations, especially for energies below 100 keV.

  16. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
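    A toy numerical illustration of the vector operation described above (illustrative only; this is not the authors' algorithm for choosing the coefficients): an intermediate node multiplies each incoming length-L packet by an L x L matrix over GF(2) and forwards the modulo-2 sum, and a sink that collects enough such combinations recovers the sources by solving the stacked linear system.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L = 4                                       # packet length

    def combine(packets, rng):
        """Intermediate-node operation: y = sum_i M_i @ x_i (mod 2),
        with a random L x L GF(2) coding matrix M_i per incoming packet."""
        out, mats = np.zeros(L, dtype=int), []
        for p in packets:
            M = rng.integers(0, 2, size=(L, L))
            mats.append(M)
            out = (out + M @ p) % 2
        return out, mats

    x1, x2 = (rng.integers(0, 2, size=L) for _ in range(2))
    y, (M1, M2) = combine([x1, x2], rng)
    # a sink recovers x1, x2 from enough outputs by solving
    # [M1 M2] [x1; x2] = y over GF(2) (Gaussian elimination mod 2)
    ```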

  17. The Feasibility of Multidimensional CFD Applied to Calandria System in the Moderator of CANDU-6 PHWR Using Commercial and Open-Source Codes

    Directory of Open Access Journals (Sweden)

    Hyoung Tae Kim

    2016-01-01

    Full Text Available The moderator system of CANDU, a prototype PHWR (pressurized heavy-water reactor), has been modeled in multidimension for computation based on the CFD (computational fluid dynamics) technique. Three CFD codes are tested on modeled hydrothermal systems of heavy-water reactors. Two commercial codes, COMSOL Multiphysics and ANSYS-CFX, together with OpenFOAM, an open-source code, are applied to various simplified and practical problems. All the implemented computational codes are tested on a benchmark problem from the STERN laboratory experiment with precise modeling of the tubes, and compared with each other, with the measured data, and with a porous model based on the experimental correlation of pressure drop. The effect of the turbulence model on these low-Reynolds-number flows is also discussed. As a result, the codes are shown to be successful for the analysis of three-dimensional numerical models related to the calandria system of CANDU reactors.

  18. Influência de alguns parâmetros experimentais nos resultados de análises calorimétricas diferenciais - DSC Influence of some experimental parameters on the results of differential scanning calorimetry - DSC.

    OpenAIRE

    Cláudia Bernal; Andréa Boldarini Couto; Susete Trazzi Breviglieri; Éder Tadeu Gomes Cavalheiro

    2002-01-01

    A series of experiments was performed in order to demonstrate to undergraduate students, and other users of differential scanning calorimetry (DSC), that several factors can influence the qualitative and quantitative aspects of DSC results. Saccharin, an artificial sweetener, was used as a probe, and its thermal behavior is also discussed on the basis of thermogravimetric (TG) and DSC curves.

  19. OpenSWPC: an open-source integrated parallel simulation code for modeling seismic wave propagation in 3D heterogeneous viscoelastic media

    Science.gov (United States)

    Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi

    2017-07-01

    We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media based on the finite difference method at local to regional scales. This code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer as an absorbing boundary condition. A hybrid-style programming using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations, such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documents in a public repository.

  20. Determination of melting point of vegetable oils and fats by differential scanning calorimetry (DSC) technique.

    Directory of Open Access Journals (Sweden)

    Nassu, Renata Tieko

    1999-02-01

    Full Text Available The melting point of fats is used to characterize oils and fats and is related to their physical properties, such as hardness and thermal behaviour. The present work shows the use of the DSC technique for the determination of the melting point of fats. In a comparison with the softening point (AOCS method Cc 3-25), DSC values were higher than those obtained by the AOCS method. This occurred because the values obtained by the DSC technique were taken when the fat had melted completely. DSC was also useful for determining the melting point of liquid oils, such as soybean and cottonseed oils.


  1. RheoDSC: A hyphenated technique for the simultaneous measurement of calorimetric and rheological evolutions

    OpenAIRE

    Kiewiet, S; Janssens, V; Miltner, H. E; Van Assche, Gert; Van Puyvelde, Peter; Van Mele, B

    2008-01-01

    A newly developed hyphenated technique is presented combining an existing rheometer and differential scanning calorimeter into a single experimental setup. Through the development of a fixation accessory for differential scanning calorimeter (DSC) crucibles and a novel rotor, the simultaneous measurement is performed inside the well-controlled thermal environment of a Tzero (TM) DSC cell. Hence, the evolution of thermal and flow properties of a material can be simultaneously measured using st...

  2. TOPEM, a new temperature modulated DSC technique - Application to the glass transition of polymers

    OpenAIRE

    Fraga Rivas, Iria; Montserrat Ribas, Salvador; Hutchinson, John M.

    2007-01-01

    TOPEM is a new temperature modulated DSC technique, introduced by Mettler-Toledo in late 2005, in which stochastic temperature modulations are superimposed on the underlying rate of a conventional DSC scan. These modulations consist of temperature pulses, of fixed magnitude and alternating sign, with random durations within limits specified by the user. The resulting heat flow signal is analysed by a parameter estimation method which yields a so-called ‘quasi-static’ specific heat capac...

  3. Recent advances and potential applications of modulated differential scanning calorimetry (mDSC) in drug development.

    Science.gov (United States)

    Knopp, Matthias Manne; Löbmann, Korbinian; Elder, David P; Rades, Thomas; Holm, René

    2016-05-25

    Differential scanning calorimetry (DSC) is frequently the thermal analysis technique of choice within preformulation and formulation sciences because of its ability to provide detailed information about both the physical and energetic properties of a substance and/or formulation. However, conventional DSC has shortcomings with respect to weak transitions and overlapping events, which can be overcome by the use of the more sophisticated modulated DSC (mDSC). mDSC has multiple potential applications within the pharmaceutical field, and the present review provides an up-to-date overview of these applications. It aims to serve as a broad introduction for newcomers and also as a valuable reference for those already practising in the field. Complex mDSC was introduced more than two decades ago and has been an important tool for the quantification of amorphous materials and the development of freeze-dried formulations. However, as discussed in the present review, a number of other potential applications could also be relevant for the pharmaceutical scientist. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Validation of the MCNP-DSP Monte Carlo code for calculating source-driven noise parameters of subcritical systems

    International Nuclear Information System (INIS)

    Valentine, T.E.; Mihalczo, J.T.

    1995-01-01

    This paper describes calculations performed to validate MCNP-DSP, a modified version of the MCNP code, with respect to: the neutron and photon spectra of the spontaneous fission of californium-252; the representation of the detection processes for scattering detectors; the timing of the detection process; and the calculation of the frequency-analysis parameters of the MCNP-DSP code.

  5. A new open-source pin power reconstruction capability in DRAGON5 and DONJON5 neutronic codes

    Energy Technology Data Exchange (ETDEWEB)

    Chambon, R., E-mail: richard-pierre.chambon@polymtl.ca; Hébert, A., E-mail: alain.hebert@polymtl.ca

    2015-08-15

    In order to better optimize fuel energy efficiency in PWRs, the burnup distribution has to be known as accurately as possible, ideally in each pin. However, this level of detail is lost when core calculations are performed with homogenized cross-sections. The pin power reconstruction (PPR) method can be used to recover those details as accurately as possible within a small additional computing time compared to classical core calculations. Such a de-homogenization technique for core calculations using arbitrarily homogenized fuel assembly geometries was presented originally by Fliscounakis et al. In our work, the same methodology was implemented in the open-source neutronic codes DRAGON5 and DONJON5. The new type of Selengut homogenization, called macro-calculation water gap, also proposed by Fliscounakis et al., was implemented. Some important details of the methodology were emphasized in order to obtain precise results. Validation tests were performed on 12 configurations of 3×3 clusters, in which simulations in transport theory were compared with simulations in diffusion theory followed by pin power reconstruction. The results show that the pin power reconstruction and Selengut macro-calculation water gap methods were correctly implemented. The accuracy of the simulations depends on the SPH method and on the choice of homogenization geometry. Results show that heterogeneous homogenization is highly recommended. SPH techniques were investigated with flux-volume and Selengut normalization, but the former leads to inaccurate results. Even though the new Selengut macro-calculation water gap method gives promising results regarding flux continuity at assembly interfaces, the classical Selengut approach is more reliable in terms of maximum and average errors over the whole range of configurations.
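    The basic de-homogenization idea behind any PPR scheme can be sketched in a few lines. The following is a generic illustration under assumed names, not the DRAGON5/DONJON5 implementation (which adds the SPH and Selengut refinements discussed above): the smooth homogenized flux shape from the core calculation is modulated by the heterogeneous form function from the lattice calculation and renormalized to the nodal power.

    ```python
    import numpy as np

    def reconstruct_pin_power(node_power, homog_flux, form_function):
        """Toy pin power reconstruction: modulate the smooth homogenized
        flux shape by the lattice-code form function, then renormalize so
        the pin powers sum back to the nodal power."""
        raw = homog_flux * form_function            # per-pin, element-wise
        return node_power * raw / raw.sum()

    homog_flux = np.array([[0.98, 1.00, 1.02],      # smooth nodal flux shape
                           [1.00, 1.03, 1.00],
                           [1.02, 1.00, 0.98]])
    form_function = np.array([[1.10, 0.95, 1.10],   # heterogeneous form
                              [0.95, 0.80, 0.95],   # factors (e.g. a water
                              [1.10, 0.95, 1.10]])  # hole in the center)
    pin_powers = reconstruct_pin_power(9.0, homog_flux, form_function)
    ```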

  6. A study on Prediction of Radioactive Source-term from the Decommissioning of Domestic NPPs by using CRUDTRAN Code

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jong Soon; Lee, Sang Heon; Cho, Hoon Jo [Department of Nuclear Engineering Chosun University, Gwangju (Korea, Republic of)

    2016-10-15

    For this study, the behavior mechanism of corrosion products in the primary system of Kori NPP unit 1 was analyzed, and the inventory of activated corrosion products in the primary system was assessed based on domestic plant data, with the CRUDTRAN code used for the prediction. The results of the prediction of the radionuclide inventory in the primary system are expected to serve as baseline data for estimating the volume of radioactive waste when decommissioning a nuclear power plant, an important criterion in classifying the level of radioactive wastes used to compute waste quantities. They are also expected to be utilized in reducing the radiation exposure of workers performing maintenance and repairs in high-radiation areas and in selecting decontamination and decommissioning processes for the primary system. In future research, it is planned to conduct the source term assessment for other NPP types, such as CANDU and OPR-1000, in addition to Westinghouse-type nuclear plants.

  7. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    International Nuclear Information System (INIS)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult
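    The quantity tabulated by SFACTOR follows the familiar MIRD-type construction. As a minimal sketch (in the simplified units of rad/μCi-h rather than the code's rem/μCi-day, and with made-up emission data, not SFACTOR output), the S value is the yield-weighted emitted energy times the absorbed fraction, summed over radiation types and divided by the target mass:

    ```python
    def s_factor(emissions, target_mass_g):
        """Toy MIRD-style S value in rad/(uCi h):
        S = sum_i 2.13 * y_i * E_i * phi_i / m_target,
        with yield y_i per decay, mean energy E_i in MeV, absorbed
        fraction phi_i, and target mass in grams; the 2.13 factor
        converts MeV per decay into g*rad per uCi*h."""
        return sum(2.13 * y * E * phi for (y, E, phi) in emissions) / target_mass_g

    # hypothetical nuclide: one beta (y=1.0, E=0.25 MeV, absorbed locally)
    # and one gamma (y=0.9, E=0.60 MeV, 30% absorbed in a 310 g organ)
    S = s_factor([(1.0, 0.25, 1.0), (0.9, 0.60, 0.3)], target_mass_g=310.0)
    ```

    The real code additionally sums the alpha, electron, gamma and spontaneous-fission components listed in the abstract, one term per decay type.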

  8. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult.

  9. The IAEA code of conduct on the safety of radiation sources and the security of radioactive materials. A step forwards or backwards?

    International Nuclear Information System (INIS)

    Boustany, K.

    2001-01-01

    During the finalization of the Code of Conduct on the Safety and Security of Radioactive Sources, it appeared that two distinct but interrelated subject areas had been identified: the prevention of accidents involving radiation sources and the prevention of theft or any other unauthorized use of radioactive materials. What analysis reveals is rather that there are gaps in both the content of the Code and the processes relating to it. Nevertheless, new standards have been introduced as a result of this exercise and have thus, as an enactment of what constitutes appropriate behaviour in the field of the safety and security of radioactive sources, emerged into the arena of international relations. (N.C.)

  10. An Assessment of Some Design Constraints on Heat Production of a 3D Conceptual EGS Model Using an Open-Source Geothermal Reservoir Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Yidong Xia; Mitch Plummer; Robert Podgorney; Ahmad Ghassemi

    2016-02-01

    The performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km of depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desired output electric power rate and lifespan can be obtained under suitable material properties and system parameters. A sensitivity analysis of some design constraints and operation parameters indicates that 1) the horizontal fracture spacing has a profound effect on the long-term performance of heat production, 2) the downward deviation angle of the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan depend closely on the water mass flow rate. The results also indicate that heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite element based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes in this area, which are either closed-source or commercially available, this new open-source code demonstrates a code development strategy that aims to provide unparalleled ease of user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.

  11. Nano-DTA and nano-DSC with cantilever-type calorimeter

    International Nuclear Information System (INIS)

    Nakabeppu, Osamu; Deno, Kohei

    2016-01-01

    Highlights: • Nanocalorimetry with original cantilever-type calorimeters. • The calorimeters showed an enthalpy resolution at the 200 nJ level. • Nano-DTA of a binary alloy captured a probabilistic peak after solidification. • Power-compensation DSC of a microgram-level sample was demonstrated. • The DSC and DTA behavior were explained with a lumped model. - Abstract: Differential thermal analysis (DTA) and differential scanning calorimetry (DSC) of minute samples in the microgram-to-nanogram range were studied using original cantilever-type calorimeters. The micro-fabricated calorimeter, with a heater and thermal sensors, was able to perform fast temperature scans above 1000 K/s and high-resolution heat measurements. The DTA of minuscule metal samples demonstrated advances such as the thermal analysis of a 20 ng indium sample and the observation of an unusual phase transition of a binary alloy. Power-compensation DSC using a thermal feedback system was also performed. Thermal information from a microgram-level sample was observed to split into the DSC and DTA signals because of a mismatch between the sample and the calorimeter. Although there remains room for improvement in terms of heat flow detection, the behavior of the compensation system in the DSC was theoretically understood through a lumped model. These experiments also produced findings such as a fin effect upon sample loading, a measurable weight range, a calibration of the calorimeter, and a product design concept. The development of nano-DTA and nano-DSC will enable breakthroughs in the fast calorimetry of microscopic samples.
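    The kind of lumped model invoked in the abstract can be reproduced qualitatively in a few lines. In this sketch (illustrative parameters and controller, not the authors' model), sample and reference cells with heat capacities C_s and C_r follow a furnace ramp through a thermal resistance R, and a proportional controller supplies the compensation power that constitutes the DSC signal; at steady state it tends to beta*(C_s - C_r), the heat-capacity signal.

    ```python
    import numpy as np

    def lumped_dsc(C_s, C_r, R, beta, t_end, dt=1e-3, gain=50.0):
        """Toy lumped model of a power-compensation DSC: two cells coupled
        to a furnace ramped at rate beta through thermal resistance R; a
        proportional controller trims the sample heater so T_s tracks T_r,
        and the trim power is recorded as the DSC signal."""
        n = int(t_end / dt)
        T_s = T_r = 0.0
        signal = np.empty(n)
        for i in range(n):
            T_f = beta * i * dt                    # furnace temperature
            p_comp = gain * (T_r - T_s)            # compensation power
            T_s += dt * ((T_f - T_s) / R + p_comp) / C_s
            T_r += dt * ((T_f - T_r) / R) / C_r
            signal[i] = p_comp
        return signal

    # a heavier sample cell (C_s > C_r) needs extra power during the ramp;
    # the signal settles near beta * (C_s - C_r) = 0.1 in these toy units
    sig = lumped_dsc(C_s=1.2, C_r=1.0, R=10.0, beta=0.5, t_end=20.0)
    ```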

  12. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  13. Evaluation of the methodology for dose calculation in microdosimetry with electrons sources using the MCNP5 Code

    International Nuclear Information System (INIS)

    Cintra, Felipe Belonsi de

    2010-01-01

    This study compared some of the major transport codes that employ the stochastic Monte Carlo approach for dosimetric calculations in nuclear medicine. We analyzed in detail the various physical and numerical models used by the MCNP5 code in relation to codes such as EGS and Penelope. Its potential and limitations for solving microdosimetry problems were highlighted. The condensed-history methodology used by MCNP resulted in lower values in the energy deposition calculations. This reflects a known feature of condensed histories: they underestimate both the number of collisions along the trajectory of the electron and the number of secondary particles created. The use of transport codes such as MCNP and Penelope at micrometer scales received special attention in this work. Class I and class II codes were studied and their main resources were exploited in order to transport electrons, which are of particular importance in dosimetry. It is expected that the evaluation of the methodologies presented here contributes to a better understanding of the behavior of these codes, especially for this class of problems, common in microdosimetry. (author)

  14. Validation of the Open Source Code_Aster Software Used in the Modal Analysis of the Fluid-filled Cylindrical Shell

    Directory of Open Access Journals (Sweden)

    B D. Kashfutdinov

    2017-01-01

    Full Text Available The paper deals with a modal analysis of an elastic cylindrical shell with a clamped bottom, partially filled with fluid, in the open-source Code_Aster software using the finite element method. Natural frequencies and modes obtained in Code_Aster are compared to experimental and theoretical data. The aim of this paper is to show that Code_Aster has all the necessary tools for solving fluid-structure interaction problems and that Code_Aster can be used in industrial projects as an alternative to commercial software. The available free pre- and post-processors with a graphical user interface that are compatible with Code_Aster allow creating complex models and processing the results. The paper presents new validation results for the open-source Code_Aster software used to calculate the small natural modes of a cylindrical shell partially filled with a non-viscous compressible barotropic fluid in a gravity field. The displacement of the middle surface of the thin shell and the displacement of the fluid relative to the equilibrium position are described by a coupled hydro-elasticity problem. The fluid flow is considered to be potential. The finite element method (FEM) is used. The features of the computational model are described. The resolution equation has symmetric block matrices. To compare the results, the well-known modal analysis problem of a cylindrical shell with a flat non-deformable bottom, filled with a compressible fluid, is discussed. The numerical parameters of the scheme were chosen in accordance with well-known experimental and analytical data. Three cases were taken into account: an empty, a partially filled and a fully filled cylindrical shell. The frequencies obtained with Code_Aster are in good agreement with experiment, with the analytical solution, and with FEM results obtained in other software; the deviation from experiment and from the analytical solution is approximately the same for each software package. The obtained results extend the set of validation tests for

  15. Comparative study by TG and DSC Of membranes polyamide66/bentonite clay nanocomposite; Estudo comparativo por TG e DSC de membranas de nanocompositos poliamida66/argila bentonitica

    Energy Technology Data Exchange (ETDEWEB)

    Medeiros, K.M. de; Kojuch, L R; Araujo, E M; Lira, H.L., E-mail: keilamm@ig.com.b [Universidade Federal de Campina Grande (UFCG), PB (Brazil). Unidade Academica de Engenharia de Materiais; Lima, F [Universidade Estadual da Paraiba (UEPB), Campina Grande, PB (Brazil). Dept. de Quimica

    2010-07-01

    In this study, membranes of polyamide66 nanocomposites with 3 and 5% bentonite clay, composed of layered silicates from the interior of Paraíba, were obtained. The clay was treated with a quaternary ammonium salt in order to make it organophilic. The membranes were prepared by the phase-inversion technique from the nanocomposites in solution. The clays were characterized by X-ray diffraction (XRD) and thermogravimetry (TG), and the membranes by differential scanning calorimetry (DSC) and TG. XRD and TG confirmed the presence of the salt in the clay and the thermal stability of the treated clay. By DSC, it was observed that there was no change in the melting temperature of the nanocomposite membranes compared with the pure polyamide66 membrane. By TG, it was found that the decomposition temperatures of the polyamide66 membranes with treated clay were higher than those with untreated clay. (author)

  16. Fast-Scan DSC and its role in pharmaceutical physical form characterisation and selection.

    Science.gov (United States)

    Ford, James L; Mann, Timothy E

    2012-04-01

    Conventional-rate Differential Scanning Calorimetry (DSC) has been used for many years as a tool in the analysis of pharmaceutical materials. In recent years, an extension of the technique to include fast heating and cooling rates has become more prevalent. Broadly termed Fast-Scan DSC, this review examines the current applications of the technique to the characterisation and selection of pharmaceutical materials. Its increasing use encompasses the characterisation of amorphous content in crystalline materials, the characterisation of polymorphs and polymorphic transitions, the solubility of drugs in polymers, and the characterisation of dosage forms. Notwithstanding the advantage of analytical speed for sample turnover, the review emphasises the sensitivity of Fast-Scan DSC, which allows the separation of overlapping thermal events, the reduction it provides in degradation during the scanning process, and its role in determining solubility in waxy and polymer-based systems. A comparison of the uses of Fast-Scan DSC with modulated DSC techniques and localised thermal analysis is also given. © 2011 Elsevier B.V. All rights reserved.

  17. DSC: software tool for simulation-based design of control strategies applied to wastewater treatment plants.

    Science.gov (United States)

    Ruano, M V; Ribes, J; Seco, A; Ferrer, J

    2011-01-01

    This paper presents a computer tool called DSC (Simulation based Controllers Design) that enables an easy design of control systems and strategies applied to wastewater treatment plants (WWTPs). Although the control systems are developed and evaluated by simulation, this tool aims to facilitate the direct implementation of the designed control system on the PC of the full-scale WWTP. The designed control system can be programmed in a dedicated control application and can be connected to either the simulation software or the SCADA of the plant. To this end, the developed DSC incorporates an OPC (OLE for Process Control) server, which provides an open-standard communication protocol for different industrial process applications. The potential capabilities of the DSC tool are illustrated through the example of a full-scale application. An aeration control system applied to a nutrient-removing WWTP was designed, tuned and evaluated with the DSC tool before its implementation in the full-scale plant. The control parameters obtained by simulation were suitable for the full-scale plant with only a few modifications to improve the control performance. With the DSC tool, the performance of control systems can be easily evaluated by simulation. Once developed and tuned by simulation, the control systems can be directly applied to the full-scale WWTP.
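    The control strategies designed with such a tool are typically standard feedback loops. A generic discrete PI controller for a dissolved-oxygen-based aeration loop is sketched below; the gains, setpoint and sampling time are hypothetical, and this is neither the DSC tool's API nor the plant's actual control law.

    ```python
    def pi_controller(kp, ki, setpoint, dt):
        """Generic discrete PI controller (the kind of DO-based aeration
        loop described above might compute this each sample); returns a
        step function that maps a measurement to an actuator command."""
        integral = 0.0
        def step(measurement):
            nonlocal integral
            error = setpoint - measurement
            integral += error * dt
            return kp * error + ki * integral   # e.g. airflow valve command
        return step

    # hypothetical loop: hold dissolved oxygen at 2.0 mg/L, 1 s sampling
    do_loop = pi_controller(kp=5.0, ki=0.1, setpoint=2.0, dt=1.0)
    airflow = do_loop(1.6)  # DO below setpoint -> positive airflow command
    ```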

  18. Estimation of hydrogen bonding in coal utilizing FTIR and differential scanning calorimetry (DSC); FTir to DSC wo mochiita sekitannai suiso ketsugo no teiryoteki hyoka no kokoromi

    Energy Technology Data Exchange (ETDEWEB)

    Mae, K.; Miura, K. [Kyoto University, Kyoto (Japan). Faculty of Engineering

    1996-10-28

    With the objective of understanding the condensed structure of coal, which influences coal conversion reactions, an attempt was made to quantitatively evaluate hydrogen bonding in coal. DSC measurements and Fourier transform infrared (FTIR) spectroscopy were performed on vacuum-dried coal (VDC) prepared from Taiheiyo coal swollen with tetralin, and on its pyrolyzed char. A comparison of FTIR spectra revealed that, in the VDC swollen at 220°C, the hydrogen bonding is partly relaxed relative to the original coal. However, since the change occurs within the constraining space of the huge coal macromolecular structure, it stops at a relaxation of the bonding energy without going as far as separation into free radicals. On the other hand, the DSC curve shows that the VDC has a slower endothermic rate than the original coal. In other words, the difference in the heat absorbed by the two materials is equivalent to the difference in their enthalpy (ΔH), which corresponds to the relaxation of the hydrogen bonding. Therefore, ΔH was related to the wavenumber shift of the FTIR spectra (which corresponds to the change in the hydrogen bonding condition). Using this relationship, a method was proposed for evaluating the hydrogen bonding distribution from the change in the O-H stretching vibration, measurable from the FTIR spectra, and the thermal change, measurable by DSC. 3 refs., 7 figs.

  19. A discussion of the principles and applications of Modulated Temperature DSC (MTDSC).

    Science.gov (United States)

    Verdonck, E; Schaap, K; Thomas, L C

    1999-12-01

    The benefits of Modulated Temperature DSC (MTDSC) over conventional differential scanning calorimetry (DSC) for studying thermal transitions in materials are reviewed by means of examples. These include the separation of overlapping phenomena such as melting/recrystallization in semi-crystalline materials, the heat capacity variation and enthalpic relaxation at the glass transition, and transitions from the different components of a blend. In addition, examples are presented demonstrating the ability of MTDSC to detect subtle transitions more readily and without loss of resolution. The possibility of measuring heat capacity in quasi-isothermal conditions and the evaluation of the thermal conductivity of a material are explained.
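    The deconvolution underlying MTDSC can be illustrated with synthetic data. In this sketch (all numbers illustrative, not instrument data), the measured heat flow is the sum of a reversing part, Cp·dT/dt, and a slow non-reversing contribution; projecting the signal onto the modulation recovers the reversing heat capacity as the ratio of the heat-flow amplitude to the heating-rate amplitude A·ω.

    ```python
    import numpy as np

    # synthetic MTDSC run: T(t) = T0 + beta*t + A*sin(w*t); the heat flow
    # contains a reversing part Cp*dT/dt plus a slow non-reversing term
    beta, A, period, Cp = 2.0 / 60, 0.5, 60.0, 1.8   # K/s, K, s, J/(g K)
    w = 2 * np.pi / period
    t = np.arange(0.0, 600.0, 0.1)                   # ten whole periods
    dTdt = beta + A * w * np.cos(w * t)
    heat_flow = Cp * dTdt + 0.02 * np.exp(-t / 300.0)

    # Fourier projection over the record estimates the modulation amplitude
    hf_amp = np.hypot(2 * np.mean(heat_flow * np.cos(w * t)),
                      2 * np.mean(heat_flow * np.sin(w * t)))
    cp_reversing = hf_amp / (A * w)                  # recovers ~1.8 J/(g K)
    ```

    Averaging the same record over one period gives the conventional DSC curve, and the difference between the total and the reversing contribution isolates the non-reversing (kinetic) events the review discusses.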

  20. Calorimetric sensitivity and thermal resolution of a novel miniaturized ceramic DSC chip in LTCC technology

    Energy Technology Data Exchange (ETDEWEB)

    Missal, Wjatscheslaw, E-mail: wmissal@gmx.net [Department of Functional Materials, University of Bayreuth, 95440 Bayreuth (Germany); Kita, Jaroslaw [Department of Functional Materials, University of Bayreuth, 95440 Bayreuth (Germany); Wappler, Eberhard [wsk Mess- und Datentechnik GmbH, Gueterbahnhofstr. 1, 63450 Hanau (Germany); Bechtold, Franz [VIA electronic GmbH, Robert-Friese-Str. 3, 07629 Hermsdorf (Germany); Moos, Ralf [Department of Functional Materials, University of Bayreuth, 95440 Bayreuth (Germany)

    2012-09-10

    Highlights: • Unique vertical design of a DSC device manufactured in the low-cost LTCC technology and therefore capable of one-way use. • Fully functional DSC device with a size of only 1.5 mm × 11 mm × 39 mm, enabling very low power consumption. • Comparable measurement performance to conventional DSC whilst also suitable for mobile thermal analysis. • Thermal resolution is 0.12 (TAWN test). Repeatability of the peak area is within 0.3% for indium samples. • Calorimetric sensitivity: linear with regard to temperature and independent of sample mass and heating rate over wide ranges. - Abstract: The calorimetric properties of a novel miniaturized ceramic differential scanning calorimeter device (MC-DSC) with integrated heater and crucible are presented. All features of a conventional DSC apparatus (including the oven) are integrated into this DSC device of size 11 mm × 39 mm × 1.5 mm. The MC-DSC device is suitable for one-way use, since it is fully manufactured in the low-cost planar low temperature co-fired ceramics technology. First characterization of the device is performed using indium, tin and zinc samples. The calorimetric sensitivity at 156.6 °C is 0.24 J/(°C s). It depends linearly on temperature in the range of at least 150 °C to 420 °C. The calorimetric sensitivity is constant up to an enthalpy of fusion of at least ΔH = 750 mJ (at 156.6 °C). The thermal analysis of indium in direct contact with the crucible of the chip even reveals a constant calorimetric sensitivity up to an enthalpy of fusion of at least ΔH = 1000 mJ. The repeatability of the peak area is within ±0.3% (11 mg indium, 10 measurements). The thermal resolution determined using 4,4′-azoxyanisole under TAWN test

  1. Matlab Source Code for Species Transport through Nafion Membranes in Direct Ethanol, Direct Methanol, and Direct Glucose Fuel Cells

    OpenAIRE

    JH, Summerfield; MW, Manley

    2016-01-01

    A simple simulation of chemical species movement is presented. The species traverse a Nafion membrane in a fuel cell. Three cells are examined: direct methanol, direct ethanol, and direct glucose. The species are tracked using excess proton concentration, electric field strength, and voltage. The Matlab computer code is provided.
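    Species transport through a membrane of this kind is commonly reduced to a 1D transport problem. The following sketch is in Python rather than the authors' Matlab, uses illustrative numbers, and models pure Fickian diffusion only, ignoring the electric-field drift and voltage the paper tracks; it integrates crossover through a 50 μm membrane with an explicit finite-difference scheme.

    ```python
    import numpy as np

    # toy 1D diffusion of a species (e.g. fuel crossover) through a membrane
    D, Lm, nx = 1.0e-10, 50e-6, 101          # m^2/s, thickness m, grid points
    dx = Lm / (nx - 1)
    dt = 0.4 * dx * dx / D                    # stable explicit time step
    c = np.zeros(nx)
    c[0] = 1.0                                # fixed feed-side concentration

    for _ in range(20000):                    # ~20 s, close to steady state
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
        c[0], c[-1] = 1.0, 0.0                # Dirichlet boundaries

    flux = -D * (c[-1] - c[-2]) / dx          # crossover flux at exit face
    ```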

  2. Study of cold neutron sources: Implementation and validation of a complete computation scheme for research reactor using Monte Carlo codes TRIPOLI-4.4 and McStas

    International Nuclear Information System (INIS)

    Campioni, Guillaume; Mounier, Claude

    2006-01-01

    The main goal of this thesis on the study of cold neutron sources (CNS) in research reactors was to create a complete set of tools for designing CNS efficiently. The work addresses the problem of running accurate simulations of experimental devices inside the reactor reflector that remain valid for parametric studies. On one hand, deterministic codes have reasonable computation times but pose problems for the geometrical description. On the other hand, Monte Carlo codes make it possible to compute on a precise geometry, but need computation times so long that parametric studies are impossible. To decrease this computation time, several developments were made in the Monte Carlo code TRIPOLI-4.4. An uncoupling technique is used to isolate a study zone within the complete reactor geometry. By recording boundary conditions (incoming flux), further simulations can be launched for parametric studies with a computation time reduced by a factor of 60 (in the case of the cold neutron source of the Orphee reactor). The short response time allows parametric studies to be carried out with a Monte Carlo code. Moreover, using biasing methods, the flux can be recorded on the surface of the neutron guide entries (low solid angle) with a further gain in running time. Finally, the implementation of a coupling module between TRIPOLI-4.4 and the Monte Carlo code McStas, used for research in the condensed matter field, makes it possible to obtain fluxes after transmission through the neutron guides, and thus the neutron flux received by the samples studied by condensed matter scientists. This set of developments, involving TRIPOLI-4.4 and McStas, represents a complete computation scheme for research reactors: from the nuclear core, where neutrons are created, to the exit of the neutron guides and the samples of matter. This complete calculation scheme is tested against ILL4 measurements of flux in cold neutron guides. (authors)

  3. Proof of Concept Coded Aperture Miniature Mass Spectrometer Using a Cycloidal Sector Mass Analyzer, a Carbon Nanotube (CNT) Field Emission Electron Ionization Source, and an Array Detector

    Science.gov (United States)

    Amsden, Jason J.; Herr, Philip J.; Landry, David M. W.; Kim, William; Vyas, Raul; Parker, Charles B.; Kirley, Matthew P.; Keil, Adam D.; Gilchrist, Kristin H.; Radauscher, Erich J.; Hall, Stephen D.; Carlson, James B.; Baldasaro, Nicholas; Stokes, David; Di Dona, Shane T.; Russell, Zachary E.; Grego, Sonia; Edwards, Steven J.; Sperline, Roger P.; Denton, M. Bonner; Stoner, Brian R.; Gehm, Michael E.; Glass, Jeffrey T.

    2018-02-01

    Despite many potential applications, miniature mass spectrometers have had limited adoption in the field due to the tradeoff between throughput and resolution that limits their performance relative to laboratory instruments. Recently, a solution to this tradeoff has been demonstrated by using spatially coded apertures in magnetic sector mass spectrometers, enabling throughput and signal-to-background improvements of greater than an order of magnitude with no loss of resolution. This paper describes a proof of concept demonstration of a cycloidal coded aperture miniature mass spectrometer (C-CAMMS) demonstrating use of spatially coded apertures in a cycloidal sector mass analyzer for the first time. C-CAMMS also incorporates a miniature carbon nanotube (CNT) field emission electron ionization source and a capacitive transimpedance amplifier (CTIA) ion array detector. Results confirm the cycloidal mass analyzer's compatibility with aperture coding. A >10× increase in throughput was achieved without loss of resolution compared with a single slit instrument. Several areas where additional improvement can be realized are identified.
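    Coded aperture imaging of the kind used here recovers a scene by correlation decoding. The 1D toy below uses a random mask with a balanced decoder on a circular domain (a real instrument would use a URA/MURA-type mask, whose decoding sidelobes are exactly flat); it shows how each open mask element projects a shifted copy of the scene onto the detector and how correlating with the decoding pattern refocuses the point sources.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 31
    mask = rng.integers(0, 2, size=n)        # 1D coded aperture (1 = open)
    decoder = 2 * mask - 1                   # balanced correlation decoder

    scene = np.zeros(n)
    scene[[5, 20]] = [1.0, 0.5]              # two point sources

    # each open element projects a shifted copy of the scene (circular toy)
    detector = sum(np.roll(scene, s) for s, open_ in enumerate(mask) if open_)

    # cross-correlating with the decoder refocuses the sources:
    # recon takes its largest values near indices 5 and 20
    recon = np.array([detector @ np.roll(decoder, s) for s in range(n)])
    ```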

  4. A Mode Propagation Database Suitable for Code Validation Utilizing the NASA Glenn Advanced Noise Control Fan and Artificial Sources

    Science.gov (United States)

    Sutliff, Daniel L.

    2014-01-01

    The NASA Glenn Research Center's Advanced Noise Control Fan (ANCF) was developed in the early 1990s to provide a convenient test bed to measure and understand fan-generated acoustics, duct propagation, and radiation to the farfield. A series of tests was performed primarily for code validation and tool validation. Rotating Rake mode measurements were acquired for parametric sets of: (i) mode blockage, (ii) liner insertion loss, (iii) short ducts, and (iv) mode reflection.

  5. MPEG-compliant joint source/channel coding using discrete cosine transform and substream scheduling for visual communication over packet networks

    Science.gov (United States)

    Kim, Seong-Whan; Suthaharan, Shan; Lee, Heung-Kyu; Rao, K. R.

    2001-01-01

    Quality of Service (QoS) guarantees in real-time communication are significantly important for multimedia applications. An architectural framework for multimedia networks based on substreams or flows is effectively exploited for combining source and channel coding for multimedia data. However, the existing frame-by-frame approach, which includes the Moving Picture Experts Group (MPEG) standards, cannot be neglected because it is a standard. In this paper, first, we designed an MPEG transcoder which converts an MPEG-coded stream into variable-rate packet sequences to be used in our joint source/channel coding (JSCC) scheme. Second, we designed a classification scheme to partition the packet stream into multiple substreams which have their own QoS requirements. Finally, we designed a management (reservation and scheduling) scheme for substreams to support better perceptual video quality, such as a bound on the end-to-end jitter. We have shown by simulation and by real video experiments in a TCP/IP environment that our JSCC scheme is better than two other popular techniques.

  6. Thermal analysis on parchments I: DSC and TGA combined approach for heat damage assessment

    DEFF Research Database (Denmark)

    Fessas, D.; Signorelli, M.; Schiraldi, A.

    2006-01-01

    Ancient, new and artificially aged parchments were investigated with both differential scanning calorimetry (DSC) and thermogravimetry (TGA). Criteria to define a quantitative ranking of the damage experienced by the bulk collagen of historical parchments were assessed. A damage-related correlation...

  7. Morphology evaluation of biodegradable copolyesters based on dimerized fatty acid studied by DSC, SAXS and WAXS

    Czech Academy of Sciences Publication Activity Database

    Kozlowska, A.; Gromadzki, Daniel; El Fray, M.; Štěpánek, Petr

    2008-01-01

    Vol. 16, No. 6 (2008), pp. 85-88. ISSN 1230-3666. Institutional research plan: CEZ:AV0Z40500505. Keywords: multiblock copolymers * DSC * WAXS. Subject RIV: CD - Macromolecular Chemistry. Impact factor: 0.439, year: 2008

  8. In Situ Stability of Substrate-Associated Cellulases Studied by DSC

    DEFF Research Database (Denmark)

    Borch, Kim; Cruys-Bagger, Nicolaj; Badino, Silke Flindt

    2014-01-01

    This work shows that differential scanning calorimetry (DSC) can be used to monitor the stability of substrate-adsorbed cellulases during long-term hydrolysis of insoluble cellulose. Thermal transitions of adsorbed enzyme were measured regularly in subsets of a progressing hydrolysis, and the size...

  9. Rapid examination of the kinetic process of intramolecular lactamization of gabapentin using DSC-FTIR

    International Nuclear Information System (INIS)

    Hsu, C.-H.; Lin, S.-Y.

    2009-01-01

    The thermal stability and thermodynamics of gabapentin (GBP) in the solid state were investigated by DSC and TG techniques and by FTIR microspectroscopy. The detailed intramolecular lactamization process of GBP to form gabapentin-lactam (GBP-L) was also determined by thermal FTIR microspectroscopy. GBP exhibited a DSC endothermic peak at 169 °C. The weight loss in the TG curve of GBP suggested that the evaporation of water liberated via intramolecular lactamization overlapped with the evaporation of GBP-L, which has a DSC endothermic peak at 91 °C. Thermal FTIR microspectroscopy clearly evidenced the IR bands at 3350 cm-1 for the liberated water and at 1701 cm-1 for the lactam structure formed by the lactamization of GBP. This study indicates that the activation energy for the combined processes of intramolecular lactamization of GBP and evaporation of GBP-L was about 114.3 ± 23.3 kJ/mol, whereas that for the evaporation of GBP-L alone was 76.2 ± 1.5 kJ/mol. The simultaneous DSC-FTIR combined technique proved to be a powerful way to quickly examine the detailed kinetic processes of intramolecular cyclization of GBP and evaporation of GBP-L in the solid state

  10. Investigation of some possible changes in Am-Be neutron source configuration in order to increase the thermal neutron flux using Monte Carlo code

    Science.gov (United States)

    Basiri, H.; Tavakoli-Anbaran, H.

    2018-01-01

    The Am-Be neutron source is based on the (α, n) reaction and generates neutrons in the energy range of 0-11 MeV. Since thermal neutrons are widely used in different fields, in this work we investigate how to improve the source configuration in order to increase the thermal flux. The suggested changes include a spherical moderator instead of the common cylindrical geometry, a reflector layer, and an appropriate selection of materials in order to achieve the maximum thermal flux. All calculations were done using the MCNP Monte Carlo code. Our final results indicated that a spherical paraffin moderator and a beryllium reflector layer can efficiently increase the thermal neutron flux of the Am-Be source.

  11. An improvement of estimation method of source term to the environment for interfacing system LOCA for typical PWR using MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Han, Seok Jung; Kim, Tae Woon; Ahn, Kwang Il [Risk and Environmental Safety Research Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2017-06-15

    An interfacing-system loss-of-coolant accident (ISLOCA) has been identified as the most hazardous accident scenario in typical PWR plants. The present study, an effort to improve knowledge of the source term to the environment during an ISLOCA, focuses on improving the estimation method. The improvement takes into account the effects of the broken pipeline and of the auxiliary building structures relevant to an ISLOCA. The source term to the environment was estimated for the OPR-1000 plants with the MELCOR code, version 1.8.6. The key features of the source term showed that a massive amount of fission products departs from the beginning of core degradation until the vessel breach. The amount of fission products released may be affected by the broken pipeline and the auxiliary building structures associated with the release pathway.

  12. A study on the application of CRUDTRAN code in primary systems of domestic pressurized heavy-water reactors for prediction of radiation source term

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jong Soon; Cho, Hoon Jo; Jung, Min Young; Lee, Sang Heon [Dept. of Nuclear Engineering, Chosun University, Gwangju (Korea, Republic of)

    2017-04-15

    The importance of developing a source-term assessment technology has been emphasized owing to the decommissioning of Kori nuclear power plant (NPP) Unit 1 and the increasing number of aging NPPs. We analyzed the behavioral mechanism of corrosion products in the primary system of a pressurized heavy-water reactor-type NPP. In addition, to check the possibility of applying the CRUDTRAN code to a Canada Deuterium Uranium (CANDU)-type NPP, the code was assessed using collected domestic on-site data. With the assessment results, it was possible to predict trends according to operating cycles. Values estimated using the code were similar to the measured values. The results of this study are expected to be used to manage the radiation exposure of operators in high-radiation areas and to predict decommissioning processes in the primary system.

  13. Calculations of fuel burn-up and radionuclide inventory in the syrian miniature neutron source reactor using the WIMSD4 code

    International Nuclear Information System (INIS)

    Khattab, K.

    2005-01-01

    Calculations of the fuel burnup and radionuclide inventory in the Miniature Neutron Source Reactor after 10 years (the expected life of the reactor core) of operation are presented in this paper. The WIMSD4 code is used to generate the fuel group constants and the infinite multiplication factor versus reactor operating time for 10, 20, and 30 kW operating power levels. The amounts of uranium burnt up and plutonium produced in the reactor core, the concentrations and radioactivities of the most important fission-product and actinide radionuclides accumulated in the reactor core, and the total radioactivity of the reactor core are calculated using the WIMSD4 code as well.
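    The bookkeeping behind such a burnup calculation reduces, in its simplest one-group form, to depletion equations of the type dN/dt = -σφN. The toy below uses one-group cross sections and a flux chosen purely for illustration (not WIMSD4 data or the MNSR operating history) to track U-235 burnup and Pu-239 build-up over ten years of continuous irradiation.

    ```python
    phi = 1.0e13                                  # flux, n/(cm^2 s)
    sa5, sc8, sa9 = 600e-24, 2.7e-24, 1000e-24    # one-group sigmas, cm^2
    N5, N8, N9 = 1.0e20, 5.0e21, 0.0              # atom densities, /cm^3
    dt = 86400.0                                  # one-day Euler steps

    for _ in range(3650):                         # ten years of irradiation
        dN5 = -sa5 * phi * N5                     # U-235 absorption
        dN8 = -sc8 * phi * N8                     # U-238 capture feeds Pu-239
        dN9 = sc8 * phi * N8 - sa9 * phi * N9     # Pu-239 build-up and burnup
        N5, N8, N9 = N5 + dN5 * dt, N8 + dN8 * dt, N9 + dN9 * dt

    burnup_fraction = 1.0 - N5 / 1.0e20           # fraction of U-235 consumed
    ```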

  14. European inter-comparison of Monte Carlo code users for the uncertainty calculation of the kerma in air beside a caesium-137 source; Intercomparaison europeenne d'utilisateurs de codes monte carlo pour le calcul d'incertitudes sur le kerma dans l'air aupres d'une source de cesium-137

    Energy Technology Data Exchange (ETDEWEB)

    De Carlan, L.; Bordy, J.M.; Gouriou, J. [CEA Saclay, LIST, Laboratoire National Henri Becquerel, Laboratoire de Metrologie de la Dose 91 - Gif-sur-Yvette (France)

    2010-07-01

    Within the framework of the CONRAD European project (Coordination Network for Radiation Dosimetry), and more precisely within a working group devoted to uncertainty assessment in computational dosimetry and to comparing different approaches, the authors report the simulation of an irradiator containing a caesium-137 source to calculate the kerma in air as well as its uncertainty due to different parameters. They present the problem geometry and recall the issues studied (kerma uncertainty, influence of the source capsule, influence of the collimator, influence of the air volume surrounding the source). They indicate the codes which have been used (MCNP, Fluka, Penelope, etc.) and discuss the results obtained for the first issue

  15. TU-AB-BRC-10: Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison of GPU and MIC Computing Accelerators

    International Nuclear Information System (INIS)

    Liu, T; Lin, H; Xu, X; Su, L; Shi, C; Tang, X; Bednarz, B

    2016-01-01

    Purpose: (1) To perform phase space (PS) based source modeling for Tomotherapy and Varian TrueBeam 6 MV Linacs, (2) to examine the accuracy and performance of the ARCHER Monte Carlo code on a heterogeneous computing platform with Many Integrated Core coprocessors (MIC, aka Xeon Phi) and GPUs, and (3) to explore software micro-optimization methods. Methods: The patient-specific source of the Tomotherapy and Varian TrueBeam Linacs was modeled using the PS approach. For the helical Tomotherapy case, the PS data were calculated in our previous study (Su et al. 2014 41(7) Medical Physics). For the single-view Varian TrueBeam case, we analytically derived them from the raw patient-independent PS data in IAEA's database, partial geometry information of the jaw and MLC, and the fluence map. The phantom was generated from DICOM images. The Monte Carlo simulation was performed by the ARCHER-MIC and GPU codes, which were benchmarked against a modified parallel DPM code. Software micro-optimization was systematically conducted, focusing on SIMD vectorization of tight for-loops and data prefetch, with the ultimate goal of increasing 512-bit register utilization and reducing memory access latency. Results: Dose calculation was performed for two clinical cases, a Tomotherapy-based prostate cancer treatment and a TrueBeam-based left breast treatment. ARCHER was verified against the DPM code. The statistical uncertainty of the dose to the PTV was less than 1%. Using double precision, the total wall time of the multithreaded CPU code on a X5650 CPU was 339 seconds for the Tomotherapy case and 131 seconds for the TrueBeam case, while on three 5110P MICs it was reduced to 79 and 59 seconds, respectively. The single-precision GPU code on a K40 GPU took 45 seconds for the Tomotherapy dose calculation. Conclusion: We have extended ARCHER, the MIC- and GPU-based Monte Carlo dose engine, to Tomotherapy and TrueBeam dose calculations.

  16. TU-AB-BRC-10: Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison of GPU and MIC Computing Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Liu, T; Lin, H; Xu, X [Rensselaer Polytechnic Institute, Troy, NY (United States); Su, L [John Hopkins University, Baltimore, MD (United States); Shi, C [Saint Vincent Medical Center, Bridgeport, CT (United States); Tang, X [Memorial Sloan Kettering Cancer Center, West Harrison, NY (United States); Bednarz, B [University of Wisconsin, Madison, WI (United States)

    2016-06-15

    Purpose: (1) To perform phase space (PS) based source modeling for Tomotherapy and Varian TrueBeam 6 MV Linacs, (2) to examine the accuracy and performance of the ARCHER Monte Carlo code on a heterogeneous computing platform with Many Integrated Core coprocessors (MIC, aka Xeon Phi) and GPUs, and (3) to explore software micro-optimization methods. Methods: The patient-specific source of the Tomotherapy and Varian TrueBeam Linacs was modeled using the PS approach. For the helical Tomotherapy case, the PS data were calculated in our previous study (Su et al. 2014 41(7) Medical Physics). For the single-view Varian TrueBeam case, we analytically derived them from the raw patient-independent PS data in IAEA's database, partial geometry information of the jaw and MLC, and the fluence map. The phantom was generated from DICOM images. The Monte Carlo simulation was performed by the ARCHER-MIC and GPU codes, which were benchmarked against a modified parallel DPM code. Software micro-optimization was systematically conducted, focusing on SIMD vectorization of tight for-loops and data prefetch, with the ultimate goal of increasing 512-bit register utilization and reducing memory access latency. Results: Dose calculation was performed for two clinical cases, a Tomotherapy-based prostate cancer treatment and a TrueBeam-based left breast treatment. ARCHER was verified against the DPM code. The statistical uncertainty of the dose to the PTV was less than 1%. Using double precision, the total wall time of the multithreaded CPU code on a X5650 CPU was 339 seconds for the Tomotherapy case and 131 seconds for the TrueBeam case, while on three 5110P MICs it was reduced to 79 and 59 seconds, respectively. The single-precision GPU code on a K40 GPU took 45 seconds for the Tomotherapy dose calculation. Conclusion: We have extended ARCHER, the MIC- and GPU-based Monte Carlo dose engine, to Tomotherapy and TrueBeam dose calculations.

  17. OFF, Open source Finite volume Fluid dynamics code: A free, high-order solver based on parallel, modular, object-oriented Fortran API

    Science.gov (United States)

    Zaghi, S.

    2014-07-01

    OFF, an open source (free software) code for performing fluid dynamics simulations, is presented. The aim of OFF is to solve, numerically, the unsteady (and steady) compressible Navier-Stokes equations of fluid dynamics by means of finite volume techniques: the research background is mainly focused on high-order (WENO) schemes for multi-fluid, multi-phase flows over complex geometries. To this purpose a highly modular, object-oriented application program interface (API) has been developed. In particular, the concepts of data encapsulation and inheritance available within the Fortran language (from standard 2003) have been stressed in order to represent each fluid dynamics "entity" (e.g. the conservative variables of a finite volume, its geometry, etc.) by a single object, so that a large variety of computational libraries can be easily (and efficiently) developed upon these objects. The main features of OFF can be summarized as follows. Programming language: OFF is written in standard (compliant) Fortran 2003; its design is highly modular in order to enhance simplicity of use and maintenance without compromising efficiency. Parallel frameworks supported: the development of OFF has also been targeted to maximize computational efficiency; the code is designed to run on shared-memory multi-core workstations and distributed-memory clusters of shared-memory nodes (supercomputers); the code's parallelization is based on the Open Multiprocessing (OpenMP) and Message Passing Interface (MPI) paradigms. Usability, maintenance and enhancement: in order to improve the usability, maintenance and enhancement of the code, the documentation has also been carefully taken into account; the documentation is built upon comprehensive comments placed directly into the source files (no external documentation files needed); these comments are parsed by means of the doxygen free software, producing high-quality html and latex documentation pages; the distributed versioning system referred to as git

  18. Dosimetric comparison between the microSelectron HDR 192Ir v2 source and the BEBIG 60Co source for HDR brachytherapy using the EGSnrc Monte Carlo transport code

    International Nuclear Information System (INIS)

    Anwarul Islam, M.; Akramuzzaman, M.M.; Zakaria, G.A.

    2012-01-01

    Manufacturing of miniaturized high-activity 192Ir sources has become a market preference in modern brachytherapy. The smaller dimensions of the sources are compatible with smaller-diameter applicators and are also suitable for interstitial implants. Presently, miniaturized 60Co HDR sources have been made available with dimensions identical to those of 192Ir sources. 60Co sources have the advantage of a longer half-life compared with 192Ir sources. High dose rate brachytherapy sources with a longer half-life are a pragmatic solution for developing countries from an economic point of view. This study is aimed at comparing the TG-43U1 dosimetric parameters of the new BEBIG 60Co HDR and the new microSelectron 192Ir HDR sources. Dosimetric parameters are calculated using the EGSnrc-based Monte Carlo simulation code in accordance with the AAPM TG-43 formalism for the microSelectron HDR 192Ir v2 and the new BEBIG 60Co HDR sources. The air-kerma strengths per unit source activity, calculated in dry air, are 9.698×10^-8 ± 0.55% U Bq^-1 and 3.039×10^-7 ± 0.41% U Bq^-1 for the above-mentioned two sources, respectively. The calculated dose rate constants per unit air-kerma strength in water medium are 1.116 ± 0.12% cGy h^-1 U^-1 and 1.097 ± 0.12% cGy h^-1 U^-1, respectively, for the two sources. The values of the radial dose function at distances up to 1 cm and beyond 22 cm are higher for the BEBIG 60Co HDR source than for the other source. The anisotropy function values rise sharply toward the longitudinal sides of the BEBIG 60Co source, and the rise is comparatively sharper than for the other source. The tissue dependence of the absorbed dose has been investigated with a vacuum phantom for breast, compact bone, blood, lung, thyroid, soft tissue, testis, and muscle. No significant variation is noted at 5 cm radial distance in this regard while comparing the two sources, except for lung tissue. The true dose rates are calculated considering photon as well as electron transport using

  19. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    Science.gov (United States)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO), with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile, truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5×5×2 in³ each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24×2.5×3 in³, called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton-scattered events and coded aperture events. In this thesis, the coded aperture, Compton and hybrid imaging algorithms developed will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as Global Positioning System (GPS) and Inertial Navigation System (INS) data, must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with the detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of the image reconstruction algorithms at various speeds and distances will be presented as well as

  20. Distinct roles of the DmNav and DSC1 channels in the action of DDT and pyrethroids.

    Science.gov (United States)

    Rinkevich, Frank D; Du, Yuzhe; Tolinski, Josh; Ueda, Atsushi; Wu, Chun-Fang; Zhorov, Boris S; Dong, Ke

    2015-03-01

    Voltage-gated sodium channels (Nav channels) are critical for electrical signaling in the nervous system and are the primary targets of the insecticides DDT and pyrethroids. In Drosophila melanogaster, besides the canonical Nav channel, Para (also called DmNav), there is a sodium channel-like cation channel called DSC1 (Drosophila sodium channel 1). Temperature-sensitive paralytic mutations in DmNav (para(ts)) confer resistance to DDT and pyrethroids, whereas DSC1 knockout flies exhibit enhanced sensitivity to pyrethroids. To further define the roles and interaction of the DmNav and DSC1 channels in DDT and pyrethroid neurotoxicology, we generated a DmNav/DSC1 double mutant line by introducing a para(ts1) allele (carrying the I265N mutation) into a DSC1 knockout line. We confirmed that the I265N mutation reduced the sensitivity of a DmNav variant expressed in Xenopus oocytes to two pyrethroids, permethrin and deltamethrin. Computer modeling predicts that the I265N mutation confers pyrethroid resistance by allosterically altering the second pyrethroid receptor site on the DmNav channel. Furthermore, we found that I265N-mediated pyrethroid resistance in para(ts1) mutant flies was almost completely abolished in para(ts1);DSC1(-/-) double mutant flies. Unexpectedly, however, the DSC1 knockout flies were less sensitive to DDT compared to the control flies (w(1118A)), and the para(ts1);DSC1(-/-) double mutant flies were even more resistant to DDT than the DSC1 knockout or para(ts1) mutant. Our findings reveal distinct roles of the DmNav and DSC1 channels in the neurotoxicology of DDT vs. pyrethroids and implicate the exciting possibility of using DSC1 channel blockers or modifiers in the management of pyrethroid resistance. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Use of the DSC Technique to Characterize Water-in-Crude Oil Emulsion Stability

    Directory of Open Access Journals (Sweden)

    Dalmazzone C.

    2006-12-01

    Full Text Available The DSC (Differential Scanning Calorimetry) technique was applied to the study of water-in-crude oil emulsions, which form naturally after an oil spill at sea. These emulsions, also called "chocolate mousses", can contain 50 to 80% water and often take the form of a viscous product that is difficult to recover mechanically, to treat or to burn. It is therefore important to be able to assess their stability in order to optimize the choice of treatment. A great variety of techniques, generally based on the analysis of the droplet size distribution, can be used to assess the stability of an emulsion. Unfortunately, most of them are not suited to the study of opaque water-in-oil emulsions. The method most widely used to characterize the stability of this type of emulsion is the bottle test, which consists in measuring the phase separation as a function of time. This test yields a considerable amount of information on the stability of the emulsion and on the quality of the separated aqueous phase, but it remains very empirical. The DSC technique is generally used to determine the composition of water-in-oil emulsions, since it distinguishes free water from emulsified water. This study showed that it is a very useful technique that allows both the study of the evolution of the droplet size in the emulsion and a precise determination of the water content.

  2. A contribution to the analysis of the activity distribution of a radioactive source trapped inside a cylindrical volume, using the M.C.N.P.X. code

    International Nuclear Information System (INIS)

    Portugal, L.; Oliveira, C.; Trindade, R.; Paiva, I.

    2006-01-01

    Orphan sources, activated materials or contaminated materials with natural or artificial radionuclides have been detected in scrap metal products destined for recycling. The melting of a source during the process could result in economic, environmental and social impacts. From the point of view of radioactive waste management, a scenario of 100 tons of contaminated steel in one piece is a major problem. It is therefore of great importance to develop a methodology that allows the activity distribution inside a volume of steel to be predicted. In previous work we were able to distinguish between the cases where the source is disseminated over the entire cylinder and the cases where it is concentrated in different volumes. Now the main goal is to distinguish between different radii of spherical source geometries trapped inside the cylinder. For this, a methodology was proposed based on the ratio of the counts of two regions of the gamma spectrum, obtained with a sodium iodide detector, using the M.C.N.P.X. Monte Carlo simulation code. These calculated ratios allow us to determine a function r = aR^2 + bR + c, where R is the ratio between the counts of the two regions of the gamma spectrum and r is the radius of the source. For simulation purposes six 60Co sources were used (a point source, four spheres of 5 cm, 10 cm, 15 cm and 20 cm radius, and the overall contaminated cylinder) trapped inside two types of matrix, concrete and stainless steel. The methodology applied has been shown to predict and distinguish accurately the distribution of a source inside a material, roughly independently of the matrix and density considered. (authors)
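
    The quadratic calibration described above is easy to illustrate. The sketch below, a minimal Python example with hypothetical (ratio, radius) pairs rather than the paper's data, fits r = aR^2 + bR + c by least squares and then inverts it for a measured ratio.

```python
# Minimal sketch of the ratio-to-radius calibration; all numbers are
# hypothetical placeholders, not values from the paper.
import numpy as np

# Hypothetical simulated data: spectral-region count ratio R for sources of
# known radius r (cm); the point source is treated here as r = 0.
R = np.array([0.42, 0.48, 0.55, 0.63, 0.72])  # ratio of counts in two spectral regions
r = np.array([0.0, 5.0, 10.0, 15.0, 20.0])    # source radius (cm)

# Least-squares fit of the quadratic r = a*R^2 + b*R + c
a, b, c = np.polyfit(R, r, deg=2)

def radius_from_ratio(ratio: float) -> float:
    """Predict the radius of the trapped spherical source from a measured ratio."""
    return a * ratio**2 + b * ratio + c

print(radius_from_ratio(0.58))  # interpolated radius estimate (cm)
```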

  3. A contribution to the analysis of the activity distribution of a radioactive source trapped inside a cylindrical volume, using the M.C.N.P.X. code

    Energy Technology Data Exchange (ETDEWEB)

    Portugal, L.; Oliveira, C.; Trindade, R.; Paiva, I. [Instituto Tecnologico e Nuclear, Dpto. Proteccao Radiologica e Seguranca Nuclear, Sacavem (Portugal)

    2006-07-01

    Orphan sources, activated materials or contaminated materials with natural or artificial radionuclides have been detected in scrap metal products destined for recycling. The melting of a source during the process could result in economic, environmental and social impacts. From the point of view of radioactive waste management, a scenario of 100 tons of contaminated steel in one piece is a major problem. It is therefore of great importance to develop a methodology that allows the activity distribution inside a volume of steel to be predicted. In previous work we were able to distinguish between the cases where the source is disseminated over the entire cylinder and the cases where it is concentrated in different volumes. Now the main goal is to distinguish between different radii of spherical source geometries trapped inside the cylinder. For this, a methodology was proposed based on the ratio of the counts of two regions of the gamma spectrum, obtained with a sodium iodide detector, using the M.C.N.P.X. Monte Carlo simulation code. These calculated ratios allow us to determine a function r = aR^2 + bR + c, where R is the ratio between the counts of the two regions of the gamma spectrum and r is the radius of the source. For simulation purposes six 60Co sources were used (a point source, four spheres of 5 cm, 10 cm, 15 cm and 20 cm radius, and the overall contaminated cylinder) trapped inside two types of matrix, concrete and stainless steel. The methodology applied has been shown to predict and distinguish accurately the distribution of a source inside a material, roughly independently of the matrix and density considered. (authors)

  4. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    Science.gov (United States)

    Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George

    2017-09-01

    Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase space-based source modelling has been implemented. Good agreement was found in a Tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate plan and the breast plan took about 173 s and 73 s, respectively, with 1% statistical error.

  5. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    Directory of Open Access Journals (Sweden)

    Lin Hui

    2017-01-01

    Full Text Available Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase space-based source modelling has been implemented. Good agreement was found in a Tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate plan and the breast plan took about 173 s and 73 s, respectively, with 1% statistical error.

  6. Parallelization of the AliRoot event reconstruction by performing a semi- automatic source-code transformation

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    side bus or processor interconnections. Parallelism can only result in performance gain if the memory usage is optimized, memory locality is improved and the communication between threads is minimized. But the domain of concurrent programming has become a field for highly skilled experts, as the implementation of multithreading is difficult, error-prone and labor-intensive. A full re-implementation for parallel execution of existing offline frameworks, like AliRoot in ALICE, is thus unaffordable. An alternative method is to use a semi-automatic source-to-source transformation to obtain a simple parallel design with almost no interference between threads. This reduces the need of rewriting the develop...

  7. An elegant access to formation and vaporization enthalpies of ionic liquids by indirect DSC experiment and "in silico" calculations.

    Science.gov (United States)

    Verevkin, Sergey P; Zaitsau, Dzmitry H; Emel'yanenko, Vladimir N; Schick, Christoph; Jayaraman, Saivenkataraman; Maginn, Edward J

    2012-07-14

    We used DSC to determine the reaction enthalpy of the synthesis of the ionic liquid [C(4)mim][Cl]. The combination of DSC and quantum chemical calculations presents a new, indirect way to study the thermodynamics of ionic liquids. The new procedure was validated with two direct experimental measurements and MD simulations.

  8. A study of physics of sub-critical multiplicative systems driven by sources and the utilization of deterministic codes in calculation of this systems

    International Nuclear Information System (INIS)

    Antunes, Alberi

    2008-01-01

    This work presents the physics of source-driven systems (ADS). It presents some static and kinetic reactor physics parameters that are important in the evaluation and definition of these systems when the reactor is subcritical. The objective is to demonstrate that these parameters differ from those of a critical reactor. Moreover, the work shows the differences observed in the parameters for different calculation models. Two calculation methodologies are presented in this dissertation, that of Gandini and Salvatores and that of Dulla, and some parameters are calculated. The deterministic transport code ANISN is used in the calculations in order to compare these parameters. Some parameters are calculated for a subcritical configuration of the IPEN-MB-01 reactor driven by an external source. The conclusions about the calculations performed are presented at the end of the work. (author)

  9. Calculation of gamma ray dose buildup factors in water for isotropic point, plane mono directional and line sources using MCNP code

    International Nuclear Information System (INIS)

    Atak, H.; Celikten, O. S.; Tombakoglu, M.

    2009-01-01

    Gamma ray dose buildup factors in water for isotropic point, plane monodirectional and infinite/finite line sources were calculated using the MCNP code. The buildup factors were determined for gamma ray energies of 1, 2, 3 and 4 MeV and for shield thicknesses of 1, 2, 4 and 7 mean free paths. The calculated buildup factors were then fitted in the Taylor and Berger forms. For the line sources, a buildup factor table was also constructed using the Sievert function and the constants in Taylor form derived in this study, to compare with the Monte Carlo results. All buildup factors were compared with the tabulated data given in the literature. In order to reduce the statistical errors on the buildup factors, the 'forced collision' option was used in the MCNP calculations.
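
    To make the fitted forms concrete, here is a minimal sketch of how a Taylor-form buildup factor modifies the unscattered point-source kernel; the fit constants are illustrative placeholders, not the values derived in the study.

```python
# Taylor form: B(mu*x) = A*exp(-alpha1*mu*x) + (1 - A)*exp(-alpha2*mu*x)
import math

def taylor_buildup(mu_x: float, A: float, alpha1: float, alpha2: float) -> float:
    """Taylor-form dose buildup factor at optical depth mu_x (mean free paths)."""
    return A * math.exp(-alpha1 * mu_x) + (1.0 - A) * math.exp(-alpha2 * mu_x)

def point_source_dose_kernel(r_cm: float, mu_per_cm: float, A: float,
                             alpha1: float, alpha2: float) -> float:
    """Relative dose from an isotropic point source in water: the attenuated
    1/(4*pi*r^2) kernel multiplied by the buildup factor."""
    mu_x = mu_per_cm * r_cm
    uncollided = math.exp(-mu_x) / (4.0 * math.pi * r_cm**2)
    return uncollided * taylor_buildup(mu_x, A, alpha1, alpha2)

# Hypothetical constants of roughly the right magnitude for ~1 MeV photons in water
print(point_source_dose_kernel(r_cm=20.0, mu_per_cm=0.0707, A=11.0,
                               alpha1=-0.104, alpha2=0.030))
```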

  10. Source convergence diagnostics using Boltzmann entropy criterion application to different OECD/NEA criticality benchmarks with the 3-D Monte Carlo code Tripoli-4

    International Nuclear Information System (INIS)

    Dumonteil, E.; Le Peillet, A.; Lee, Y. K.; Petit, O.; Jouanne, C.; Mazzolo, A.

    2006-01-01

    The measurement of the stationarity of Monte Carlo fission source distributions in keff calculations plays a central role in the ability to discriminate between fake and 'true' convergence (in the case of a high dominance ratio or of loosely coupled systems). Recent theoretical developments have been made in the study of source convergence diagnostics using Shannon entropy. We first recall those results, and we then generalize them using the expression of the Boltzmann entropy, highlighting the gain in terms of the variety of physical problems that can be treated. Finally, we present the results of several OECD/NEA benchmarks using the Tripoli-4 Monte Carlo code enhanced with this new criterion. (authors)
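
    For readers unfamiliar with the diagnostic, the sketch below shows the Shannon-entropy version of the idea on toy data: fission-source sites are binned on a mesh each cycle and the entropy of the binned distribution is tracked, a plateau suggesting (though not proving) stationarity. This illustrates the general concept only, not the Tripoli-4 implementation.

```python
import numpy as np

def shannon_entropy(site_positions: np.ndarray, bins: int = 16) -> float:
    """Shannon entropy (bits) of fission-source sites binned on a 1D mesh."""
    counts, _ = np.histogram(site_positions, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]  # the 0*log(0) terms are taken as 0
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# Toy "cycles": the source narrows from a broad guess toward a peaked distribution.
for cycle in range(5):
    sites = rng.normal(loc=0.0, scale=3.0 - 0.5 * cycle, size=10_000)
    print(cycle, round(shannon_entropy(sites), 3))
```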

  11. A DSC study of zinc binding to bovine serum albumin (BSA

    Directory of Open Access Journals (Sweden)

    SANJA OSTOJIC

    2007-04-01

    Full Text Available The thermal denaturation of bovine serum albumin (BSA) is a kinetically and thermodynamically controlled process. The effects of zinc binding to bovine serum albumin (BSA), followed by differential scanning calorimetry (DSC), were investigated in this work, with the purpose of obtaining a better understanding of the albumin/zinc interaction. From the DSC curves, the thermodynamic parameters of protein denaturation were obtained, i.e., the temperature of the thermal transition maximum (Tm), the calorimetric enthalpy (ΔHcal), the van't Hoff enthalpy (ΔHvH), the number of binding sites (I, II), the binding constants for each binding site (KbI, KbII) and the average number of ligands bound per mole of native protein (XN). The thermodynamic data of protein unfolding showed that zinc binding to bovine serum albumin increases the stability of the protein (higher values of ΔHcal), and the ΔHcal/ΔHvH ratio indicates the perturbation of the protein during thermal denaturation.

  12. Melting temperature and enthalpy variations of phase change materials (PCMs): a differential scanning calorimetry (DSC) analysis

    Science.gov (United States)

    Sun, Xiaoqin; Lee, Kyoung Ok; Medina, Mario A.; Chu, Youhong; Li, Chuanchang

    2018-06-01

    Differential scanning calorimetry (DSC) analysis is a standard thermal analysis technique used to determine the phase transition temperature, enthalpy, heat of fusion, specific heat and activation energy of phase change materials (PCMs). To determine the appropriate heating rate and sample mass, various DSC measurements were carried out using two kinds of PCMs, namely N-octadecane paraffin and calcium chloride hexahydrate. Variations in phase transition temperature, enthalpy, heat of fusion, specific heat and activation energy were observed within the applicable heating rates and sample masses. It was found that the phase transition temperature range increased with increasing heating rate and sample mass, while the heat of fusion varied without any established pattern. The specific heat decreased with increasing heating rate and sample mass. For accuracy purposes, it is recommended that for PCMs with high thermal conductivity (e.g. hydrated salts) the focus be on heating rate rather than sample mass.
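
    The abstract does not say how the activation energy was extracted, so as an assumption the sketch below uses the Kissinger method, a common choice when DSC peak temperatures Tp are available at several heating rates beta: ln(beta/Tp^2) = -Ea/(R*Tp) + const. All numbers are hypothetical.

```python
import numpy as np

R_GAS = 8.314  # gas constant, J/(mol K)

beta = np.array([2.0, 5.0, 10.0, 20.0])      # heating rates (K/min), hypothetical
Tp = np.array([301.2, 303.8, 306.5, 309.4])  # DSC peak temperatures (K), hypothetical

# Kissinger plot: ln(beta/Tp^2) versus 1/Tp has slope -Ea/R
slope, _ = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), deg=1)
Ea = -slope * R_GAS
print(f"apparent activation energy: {Ea / 1000:.1f} kJ/mol")
```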

  13. In-situ study of the thermal properties of hydrate slurry by high pressure DSC

    Energy Technology Data Exchange (ETDEWEB)

    Sari, O.; Hu, J.; Brun, F.; Erbeau, N. [Institute of Thermal Engineering, University of Applied Sciences of Western Switzerland, Yverdon-les-Bains (Switzerland); Homsy, P. [Nestec, Vevey (Switzerland); Logel, J.-C. [Axima Refrigeration, Bischheim (France)

    2008-07-01

    Knowing the enthalpy of hydrate slurry is essential for energy balances and industrial applications. No direct measurement process had previously been developed in this field. A new experimental method with a special device has been developed to carry out on-line measurement of the thermal properties of hydrate slurry under dynamic conditions. With this special device, it is possible to deliver the hydrate slurry to the high pressure DSC (Differential Scanning Calorimetry) directly from the production tank or pipes. Thermal data acquisition is then performed by DSC. The investigated conditions were a pressure of 30 bar and a temperature of approximately +7 °C. The dissociation enthalpy of the CO2 hydrate slurry was about 54 kJ/kg, corresponding to a solid fraction of 10.8%. The on-line measurement results for the CO2 hydrate slurry indicate good potential for applying this phase change slurry to industrial refrigeration processes. (author)

  14. Comparing Single-Point and Multi-point Calibration Methods in Modulated DSC

    Energy Technology Data Exchange (ETDEWEB)

    Van Buskirk, Caleb Griffith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-06-14

    Heat capacity measurements for High Density Polyethylene (HDPE) and Ultra-high Molecular Weight Polyethylene (UHMWPE) were performed using Modulated Differential Scanning Calorimetry (mDSC) over a wide temperature range, -70 to 115 °C, with a TA Instruments Q2000 mDSC. The default calibration method for this instrument involves measuring the heat capacity of a sapphire standard at a single temperature near the middle of the temperature range of interest. However, this method often fails for temperature ranges that exceed a 50 °C interval, likely because of drift or non-linearity in the instrument's heat capacity readings over time or over the temperature range. Therefore, in this study a method was developed to calibrate the instrument using multiple temperatures and the same sapphire standard.
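
    A minimal sketch of the multi-point idea, with placeholder numbers rather than the report's data: measure the sapphire standard at several anchor temperatures, form a temperature-dependent calibration factor K(T) = Cp_reference(T)/Cp_measured(T), and interpolate it across the scan range.

```python
import numpy as np

cal_T = np.array([-70.0, -25.0, 20.0, 65.0, 115.0])      # anchor temperatures (deg C)
cp_ref = np.array([0.508, 0.620, 0.722, 0.799, 0.864])   # sapphire reference Cp, placeholders
cp_meas = np.array([0.530, 0.635, 0.722, 0.790, 0.845])  # instrument readings, placeholders

K = cp_ref / cp_meas  # calibration factor at each anchor temperature

def calibrate(T: np.ndarray, cp_raw: np.ndarray) -> np.ndarray:
    """Apply a linearly interpolated multi-point calibration to raw Cp data."""
    return cp_raw * np.interp(T, cal_T, K)

sample_T = np.linspace(-60.0, 110.0, 5)
print(calibrate(sample_T, np.full_like(sample_T, 1.9)))  # raw Cp ~ 1.9 J/(g K), hypothetical
```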

  15. Thermal degradation of ligno-cellulosic fuels. DSC and TGA studies

    Energy Technology Data Exchange (ETDEWEB)

    Leroy, V.; Cancellieri, D.; Leoni, E. [SPE-CNRS UMR 6134, University of Corsica, Campus Grossetti, BP 52, 20250 Corti (France)

    2006-12-01

    The aim of this work was to show the utility of thermal analysis and calorimetric experiments for studying the thermal oxidative degradation of Mediterranean scrubs. We investigated the thermal degradation of four species; DSC and TGA were used under air sweeping to record oxidative reactions under dynamic conditions. Heat released and mass loss are important data to be measured for wildland fire modelling purposes and fire hazard studies on ligno-cellulosic fuels. Around 638 and 778 K, two dominant, overlapping exothermic peaks were recorded in DSC and individualized using an experimental and numerical separation. This stage yielded the enthalpy variation of each exothermic phenomenon. As an application, we propose to classify the fuels according to the heat released and the rate constant of each reaction. TGA experiments under air showed two successive mass losses around 638 and 778 K. Both techniques are useful for measuring the ignitability, combustibility and sustainability of forest fuels. (author)

  16. Transmission from theory to practice: Experiences using open-source code development and a virtual short course to increase the adoption of new theoretical approaches

    Science.gov (United States)

    Harman, C. J.

    2015-12-01

    Even amongst the academic community, new theoretical tools can remain underutilized due to the investment of time and resources required to understand and implement them. This surely limits the frequency that new theory is rigorously tested against data by scientists outside the group that developed it, and limits the impact that new tools could have on the advancement of science. Reducing the barriers to adoption through online education and open-source code can bridge the gap between theory and data, forging new collaborations, and advancing science. A pilot venture aimed at increasing the adoption of a new theory of time-variable transit time distributions was begun in July 2015 as a collaboration between Johns Hopkins University and The Consortium of Universities for the Advancement of Hydrologic Science (CUAHSI). There were four main components to the venture: a public online seminar covering the theory, an open source code repository, a virtual short course designed to help participants apply the theory to their data, and an online forum to maintain discussion and build a community of users. 18 participants were selected for the non-public components based on their responses in an application, and were asked to fill out a course evaluation at the end of the short course, and again several months later. These evaluations, along with participation in the forum and on-going contact with the organizer suggest strengths and weaknesses in this combination of components to assist participants in adopting new tools.

  17. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources

    Energy Technology Data Exchange (ETDEWEB)

    Gaufridy de Dortan, F. de

    2006-07-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling, or even supra-configuration modelling, for mid-Z plasmas. But in some cases configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow for a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on precise HULLAC energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated, and the radiative transfer equation is solved for extended inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data. It is therefore possible to use complex hydrodynamic files, even on personal computers, in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts satellite lines and must be included in the description of these sources to enhance their efficiency. (author)

  18. Recent advances and potential applications of modulated differential scanning calorimetry (mDSC) in drug development

    DEFF Research Database (Denmark)

    Knopp, Matthias Manne; Löbmann, Korbinian; Elder, David P.

    2016-01-01

    Differential scanning calorimetry (DSC) is frequently the thermal analysis technique of choice within preformulation and formulation sciences because of its ability to provide detailed information about both the physical and energetic properties of a substance and/or formulation. However, convent......-dried formulations. However, as discussed in the present review, a number of other potential applications could also be relevant for the pharmaceutical scientist....

  19. Temperature-modulated DSC provides new insight about nickel-titanium wire transformations.

    Science.gov (United States)

    Brantley, William A; Iijima, Masahiro; Grentzer, Thomas H

    2003-10-01

    Differential scanning calorimetry (DSC) is a well-known method for investigating phase transformations in nickel-titanium orthodontic wires; the microstructural phases and phase transformations in these wires have central importance for their clinical performance. The purpose of this study was to use the more recently developed technique of temperature-modulated DSC (TMDSC) to gain insight into transformations in 3 nickel-titanium orthodontic wires: Neo Sentalloy (GAC International, Islandia, NY), 35 degrees C Copper Ni-Ti (Ormco, Glendora, Calif) and Nitinol SE (3M Unitek, Monrovia, Calif). In the oral environment, the first 2 superelastic wires have shape memory, and the third wire has superelastic behavior but not shape memory. All wires had cross-section dimensions of 0.016 x 0.022 in. Archwires in the as-received condition and after bending 135 degrees were cut into 5 or 6 segments for test specimens. TMDSC analyses (Model 2910 DSC, TA Instruments, Wilmington, Del) were conducted between -125 degrees C and 100 degrees C, using a linear heating and cooling rate of 2 degrees C per min, an oscillation amplitude of 0.318 degrees C with a period of 60 seconds, and helium as the purge gas. For all 3 wire alloys, strong low-temperature martensitic transformations, resolved on the nonreversing heat-flow curves, were not present on the reversing heat-flow curves, and bending appeared to increase the enthalpy change for these peaks in some cases. For Neo Sentalloy, TMDSC showed that transformation between martensitic and austenitic nickel-titanium, suggested as occurring directly in the forward and reverse directions by conventional DSC, was instead a 2-step process involving the R-phase. Two-step transformations in the forward and reverse directions were also found for 35 degrees C Copper Ni-Ti and Nitinol SE. The TMDSC results show that structural transformations in these wires are complex. Some possible clinical implications of these observations are discussed.

  20. Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient conditions; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flow distribution among parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure-drop or flowrate conditions, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code; its complement, FLID, is a one-channel, two-dimensional code. (authors)

  1. Raman and DSC studies of fragility in tellurium-zinc oxide glass formers

    International Nuclear Information System (INIS)

    Stavrou, Elissaios; Kripotou, Sotiria; Raptis, Constantine; Turrell, Sylvia; Syassen, Karl

    2011-01-01

    Raman scattering and differential scanning calorimetry (DSC) measurements have been carried out on four mixed (TeO2)1-x(ZnO)x (x = 0.1, 0.2, 0.3, 0.4) glasses at high temperatures (Raman and DSC through the glass transition) and high pressures (Raman), with the aim of determining the fragility of these glass-forming oxides. Four different criteria, corresponding to four parameters, were applied to assess the fragility of the glasses. From the DSC studies, we obtained the fragility parameter m, which corresponds to the slope of the Arrhenius plot (lnQ vs. 1/Tg, where Q is the heating rate), and the glass transition width ΔTg. Also, from the low-frequency Raman scattering, and in particular the boson peak intensity of the glasses at Tg, we estimated the fragility ratio rR(Tg) = Imin/Imax, whose value serves as another (empirical) fragility criterion. Finally, from high-pressure Raman measurements on the glasses, we estimated the Grüneisen parameter γT for each glass, which constitutes the fourth fragility parameter adopted in this work. Considering the four parameters ΔTg, m, rR(Tg) and γT and the generally accepted (empirical) fragility criteria, we conclude that the mixed tellurium-zinc oxides constitute strong-to-intermediate glass formers. (copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
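
    As an illustration of the first criterion, the sketch below estimates m from hypothetical heating-rate data: the Arrhenius slope of lnQ versus 1/Tg gives an apparent activation energy, from which m = Ea/(ln10 * R * Tg).

```python
import numpy as np

Q = np.array([5.0, 10.0, 20.0, 40.0])        # heating rates (K/min), hypothetical
Tg = np.array([598.0, 601.5, 605.2, 609.1])  # glass transition temperatures (K), hypothetical

slope, _ = np.polyfit(1.0 / Tg, np.log(Q), deg=1)  # slope = -Ea/R
Ea = -slope * 8.314                                # apparent activation energy (J/mol)

m = Ea / (np.log(10.0) * 8.314 * Tg[0])            # fragility index at the reference Tg
print(f"fragility index m ~ {m:.0f}")
```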

   2. LIQUID COAL CHARACTERISTIC ANALYSIS WITH FOURIER TRANSFORM INFRA-RED (FTIR) AND DIFFERENTIAL SCANNING CALORIMETER (DSC)

    Directory of Open Access Journals (Sweden)

    ATUS BUKU

    2017-02-01

    Full Text Available The aim of this study is to identify the compounds contained in liquid coal by using Fourier Transform Infra-Red (FTIR) and Differential Scanning Calorimetry (DSC). FTIR was used to analyse the components contained in the liquid coal, while DSC was used to observe the heat of reaction exchanged with the environment. The FTIR test results show that the liquid coal consists of alkanes, alkenes and alkynes, which are chemically similar compound classes. Alkanes, alkenes and alkynes undergo a complete combustion reaction with oxygen, producing CO2 and water vapour [H2O(g)]. If incomplete combustion occurs, the reaction instead produces carbon monoxide (CO) gas or solid carbon and H2O. The combustion reaction of all three compound classes also releases a considerable amount of energy, and a higher carbon content corresponds to a higher boiling point. From the DSC test results, some of the factors that affect the reaction rate were obtained, namely the temperature, the composition of the reaction mixture, and the pressure. Temperature has a profound influence on coal liquefaction, because when liquid coal is heated under high pressure the carbon chains break down into smaller chains that are aromatic, hydro-aromatic or aliphatic. This then triggers competition between oil formation and polymerization reactions that form solids (char).

   3. Contribution of modulated DSC to the study of the thermal behaviour of PET films drawn in hot water

    International Nuclear Information System (INIS)

    Zumalian, Abubaker

    2003-01-01

    PET films uniaxially drawn in hot water are studied by means of conventional DSC and modulated DSC. The glass transition is studied by modulated DSC, which gives access to the values of the glass transition temperature Tg and the variations of ΔCp = Cp,l − Cp,g (the difference between the heat capacities in the liquid-like and glassy states at T = Tg). The variations of Tg with the water content (which acts as a plasticizer) and with the drawing (which rigidifies the amorphous phase) are discussed with regard to the structure of these materials. The variations of ΔCp are also interpreted with the help of a three-phase model and the strong/fragile glass-forming liquid concept. We show that the fragility of the medium increases through the combined effects of deformation and water as soon as a strain-induced crystalline phase is obtained, and that it decreases drastically when the rigid amorphous phase appears. (author)

  4. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN......, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms’ interfaces. These are important...... to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  5. Sources

    International Nuclear Information System (INIS)

    Duffy, L.P.

    1991-01-01

    This paper discusses the sources of radiation in the narrow perspective of radioactivity, and the even narrower perspective of those sources that concern environmental management and restoration activities at DOE facilities, as well as a few related sources: sources of irritation, sources of inflammatory jingoism, and sources of information. First, the sources of irritation fall into three categories: no reliable scientific ombudsman exists to speak without bias and prejudice for the public good; technical jargon with unclear definitions exists within the radioactive nomenclature; and the scientific community keeps a low profile with regard to public information. The next area of personal concern is the sources of inflammation. These include such things as: plutonium being described as the most dangerous substance known to man; the amount of plutonium required to make a bomb; talk of transuranic waste containing plutonium and its health effects; TMI-2 and Chernobyl being described as Siamese twins; inadequate information on low-level disposal sites and current regulatory requirements under 10 CFR 61; and enhanced engineered waste disposal not being presented to the public accurately. Finally, there are numerous sources of disinformation regarding low-level and high-level radiation, the elusive nature of the scientific community, the Federal and State health agencies' resources to address comparative risk, and regulatory agencies speaking out without the support of the scientific community

  6. The IPEM code of practice for determination of the reference air kerma rate for HDR 192Ir brachytherapy sources based on the NPL air kerma standard

    International Nuclear Information System (INIS)

    Bidmead, A M; Sander, T; Nutbrown, R F; Locks, S M; Lee, C D; Aird, E G A; Flynn, A

    2010-01-01

    This paper contains the recommendations of the high dose rate (HDR) brachytherapy working party of the UK Institute of Physics and Engineering in Medicine (IPEM). The recommendations consist of a Code of Practice (COP) for the UK for measuring the reference air kerma rate (RAKR) of HDR 192Ir brachytherapy sources. In 2004, the National Physical Laboratory (NPL) commissioned a primary standard for the realization of RAKR of HDR 192Ir brachytherapy sources. This has meant that it is now possible to calibrate ionization chambers directly traceable to an air kerma standard using an 192Ir source (Sander and Nutbrown 2006 NPL Report DQL-RD 004 (Teddington: NPL) http://publications.npl.co.uk). In order to use the source specification in terms of either RAKR, K̇R (ICRU 1985 ICRU Report No 38 (Washington, DC: ICRU); ICRU 1997 ICRU Report No 58 (Bethesda, MD: ICRU)), or air kerma strength, SK (Nath et al 1995 Med. Phys. 22 209-34), it has been necessary to develop algorithms that can calculate the dose at any point around brachytherapy sources within the patient tissues. The AAPM TG-43 protocol (Nath et al 1995 Med. Phys. 22 209-34) and the 2004 update TG-43U1 (Rivard et al 2004 Med. Phys. 31 633-74) have been developed more fully than any other protocol and are widely used in commercial treatment planning systems. Since the TG-43 formalism uses the quantity air kerma strength, whereas this COP uses RAKR, a unit conversion from RAKR to air kerma strength was included in the appendix to this COP. It is recommended that the measured RAKR determined with a calibrated well chamber traceable to the NPL 192Ir primary standard is used in the treatment planning system. The measurement uncertainty in the source calibration based on the system described in this COP has been reduced considerably compared to other methods based on interpolation techniques.
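
    The unit conversion mentioned above is simple enough to state in a few lines. With RAKR expressed in uGy/h at 1 m and air kerma strength in U (1 U = 1 uGy m^2/h), S_K = RAKR × (1 m)^2, so the two are numerically equal in these units; a minimal sketch:

```python
def air_kerma_strength_from_rakr(rakr_uGy_per_h_at_1m: float) -> float:
    """Convert RAKR (uGy/h at 1 m) to air kerma strength S_K (U = uGy m^2/h)."""
    reference_distance_m = 1.0
    return rakr_uGy_per_h_at_1m * reference_distance_m**2

# Hypothetical HDR 192Ir source of roughly 10 Ci:
print(air_kerma_strength_from_rakr(41000.0))  # S_K in U
```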

  7. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named "XSOR". The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena and their uncertainties which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms

  8. SU-E-T-212: Comparison of TG-43 Dosimetric Parameters of Low and High Energy Brachytherapy Sources Obtained by MCNP Code Versions of 4C, X and 5

    Energy Technology Data Exchange (ETDEWEB)

    Zehtabian, M; Zaker, N; Sina, S [Shiraz University, Shiraz, Fars (Iran, Islamic Republic of); Meigooni, A Soleimani [Comprehensive Cancer Center of Nevada, Las Vegas, Nevada (United States)

    2015-06-15

    Purpose: Different versions of MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP codes in dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters such as dose rate constant, radial dose function, and anisotropy function of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137 were calculated in water phantom. The results obtained by three versions of Monte Carlo codes (MCNP4C, MCNPX, MCNP5) were compared for low and high energy brachytherapy sources. Then the cross section library of MCNP4C code was changed to ENDF/B-VI release 8 which is used in MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code, were compared with other codes. Results: The results of these investigations indicate that for high energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However for low energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at the distance of 6cm were found to be about 17% and 28% for I-125 and Pd-103 respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6cm. Conclusion: The results indicate that using MCNP4C code for dosimetry of low energy brachytherapy sources can cause large errors in the results. Therefore it is recommended not to use this code for low energy sources, unless its cross section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX is their cross section libraries.
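
    For context, the sketch below shows how the compared TG-43 quantities combine into a dose rate, using the 1D (point-source) approximation Ddot(r) = S_K * Lambda * (r0/r)^2 * g(r) * phi_an(r); all parameter values are illustrative placeholders, not results of the study.

```python
import numpy as np

S_K = 40000.0  # air kerma strength (U), hypothetical
Lam = 1.109    # dose rate constant (cGy/(h U)), placeholder of 192Ir magnitude
r0 = 1.0       # TG-43 reference distance (cm)

# Placeholder tables for the radial dose function g(r) and 1D anisotropy factor
r_tab = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 7.0])
g_tab = np.array([0.994, 1.000, 1.005, 1.004, 0.990, 0.960])
phi_tab = np.array([0.973, 0.970, 0.967, 0.965, 0.960, 0.955])

def dose_rate(r_cm: float) -> float:
    """TG-43 1D dose rate (cGy/h) at radial distance r_cm (linear interpolation)."""
    g = np.interp(r_cm, r_tab, g_tab)
    phi = np.interp(r_cm, r_tab, phi_tab)
    return S_K * Lam * (r0 / r_cm) ** 2 * g * phi

print(dose_rate(2.0))
```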

  9. sources

    Directory of Open Access Journals (Sweden)

    Shu-Yin Chiang

    2002-01-01

    Full Text Available In this paper, we study simplified models of the ATM (Asynchronous Transfer Mode) multiplexer network with Bernoulli random traffic sources. Based on the model, the performance measures are analyzed under the different output service schemes.

  10. Development of Level-2 PSA Technology: A Development of the Database of the Parametric Source Term for Kori Unit 1 Using the MAAP4 Code

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Chang Soon; Mun, Ju Hyun; Yun, Jeong Ick; Cho, Young Hoo; Kim, Chong Uk [Seoul National University, Seoul (Korea, Republic of)

    1997-07-15

    To quantify the severe accident source term of the parametric model method, the uncertainty of the parameters should be analyzed. Generally, to analyze the uncertainties, the cumulative distribution functions (CDFs) of the parameters are derived. This report introduces a method for deriving the CDFs of the basic parameters FCOR, FVES and FDCH. The source term calculation tool is the MAAP code, version 4.0. In the MAAP code, there are model parameters to account for uncertain physical and/or chemical phenomena. In general, the parameters do not have a point value but a range. In this paper, considering this point, the input values of the model parameters influencing each parameter are sampled using LHS. Then, the calculation results are shown in cumulative distribution form. For a case study, the CDFs of FCOR, FVES and FDCH of KORI unit 1 are derived. The target scenarios for the calculation are the ones whose initiating events are a large LOCA, a small LOCA and a transient, respectively. It is found that the distributions of this study are consistent with those of NUREG-1150 and are proven to be adequate in assessing the uncertainties in the severe accident source term of KORI Unit 1. 15 refs., 27 tabs., 4 figs. (author)
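
    The sampling step lends itself to a small sketch: draw the model-parameter inputs by Latin hypercube sampling, run the model (replaced here by a stand-in function, since MAAP4 itself cannot be reproduced here), and read percentiles off the empirical CDF of the release fraction. The ranges and the toy model are hypothetical.

```python
import numpy as np
from scipy.stats import qmc

n_runs = 100
lower = np.array([0.1, 0.01])  # hypothetical lower bounds of two model parameters
upper = np.array([0.9, 0.30])  # hypothetical upper bounds

sampler = qmc.LatinHypercube(d=2, seed=42)
samples = qmc.scale(sampler.random(n=n_runs), lower, upper)

def toy_source_term(p1: float, p2: float) -> float:
    """Stand-in for a MAAP4 run returning a release fraction such as FCOR."""
    return p1 * (1.0 - np.exp(-5.0 * p2))

fcor = np.array([toy_source_term(*row) for row in samples])

# Empirical CDF: sorted outcomes paired with cumulative probabilities
fcor_sorted = np.sort(fcor)
cdf = np.arange(1, n_runs + 1) / n_runs
print(fcor_sorted[cdf >= 0.95][0])  # e.g. the ~95th-percentile release fraction
```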

  11. Computer code determination of tolerable accel current and voltage limits during startup of an 80 kV MFTF sustaining neutral beam source

    International Nuclear Information System (INIS)

    Mayhall, D.J.; Eckard, R.D.

    1979-01-01

    We have used a Lawrence Livermore Laboratory (LLL) version of the WOLF ion source extractor design computer code to determine tolerable accel current and voltage limits during startup of a prototype 80 kV Mirror Fusion Test Facility (MFTF) sustaining neutral beam source. Arc current limits are also estimated. The source extractor has gaps of 0.236, 0.721, and 0.155 cm. The effective ion mass is 2.77 AMU. The measured optimum accel current density is 0.266 A/cm². The gradient grid electrode runs at 5/6 Va (accel voltage). The suppressor electrode voltage is zero for Va < 3 kV and -3 kV for Va ≥ 3 kV. The accel current density for optimum beam divergence is obtained for 1 kV ≤ Va ≤ 80 kV, as are the beam divergence and emittance

  12. CONTAIN code calculations of the effects on the source term of CsI to I2 conversion due to severe hydrogen burns

    International Nuclear Information System (INIS)

    Valdez, G.D.; Williams, D.C.

    1986-01-01

    In experiments conducted at Sandia National Laboratories, large amounts of elemental iodine were produced when CsI-Al2O3 aerosol was exposed to hydrogen/air combustion. To evaluate some of the implications of the iodide conversion (observed to occur with up to 75% efficiency) for the severe accident source term, computational simulations of representative accident sequences were conducted with the CONTAIN code. The following conclusions can be drawn from this preliminary source term assessment: (1) If the containment sprays are inoperative during the accident, or are failed by the hydrogen burn, the late-time source term is almost tripled when the iodide is converted to I2. (2) With the sprays active, the amount released without conversion of the CsI aerosol is 63% higher than for the case when conversion occurs. (3) For the case where CsI is converted to I2, continued operation of the sprays reduces the release by a factor of 40, relative to the case in which the sprays fail at the time of the hydrogen burn. When there is no conversion, the reduction factor for continued spray operation is about a factor of 9, relative to the failed-spray case

  13. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ - supplementary report

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, Jr, D E; Pleasant, J C; Killough, G G

    1980-05-01

    The purpose of this report is to describe revisions in the SFACTOR computer code and to provide useful documentation for that program. The SFACTOR computer code has been developed to implement current methodologies for computing the average dose equivalent rate S(X ← Y) to specified target organs in man due to 1 μCi of a given radionuclide uniformly distributed in designated source organs. The SFACTOR methodology is largely based upon that of Snyder; however, it has been expanded to include components of S from alpha and spontaneous fission decay, in addition to electron and photon radiations. With this methodology, S-factors can be computed for any radionuclide for which decay data are available. The tabulations in Appendix II provide a reference compilation of S-factors for several dosimetrically important radionuclides which are not available elsewhere in the literature. These S-factors are calculated for an adult with characteristics similar to those of the International Commission on Radiological Protection's Reference Man. Corrections to tabulations from Dunning are presented in Appendix III, based upon the methods described in Section 2.3. 10 refs.
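
    The underlying definition is compact: S(X ← Y) sums, over radiation types i, the mean energy emitted per decay times the fraction of that energy absorbed in the target, divided by the target mass. A minimal sketch with hypothetical numbers (left in MeV/g per decay rather than the report's working units):

```python
def s_factor(delta_MeV_per_decay, absorbed_fractions, target_mass_g):
    """Mean energy deposited in the target per decay, per gram (MeV/g/decay)."""
    energy = sum(d * af for d, af in zip(delta_MeV_per_decay, absorbed_fractions))
    return energy / target_mass_g

# Hypothetical two-component emission (one photon, one electron):
delta = [0.36, 0.05]  # mean emitted energy per decay (MeV)
phi = [0.03, 0.0]     # absorbed fractions in the target organ
print(s_factor(delta, phi, target_mass_g=310.0))
```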

  14. Generation of point isotropic source dose buildup factor data for the PFBR special concretes in a form compatible for usage in point kernel computer code QAD-CGGP

    International Nuclear Information System (INIS)

    Radhakrishnan, G.

    2003-01-01

    Full text: Around the PFBR (Prototype Fast Breeder Reactor) reactor assembly, special concretes of density 2.4 g/cm³ and 3.6 g/cm³ are to be used in complex geometrical shapes in the peripheral shields. A point-kernel computer code like QAD-CGGP, written for complex shield geometries, comes in handy for the shield design optimization of the peripheral shields. QAD-CGGP requires a database of buildup factor data, which contains only ordinary concrete of density 2.3 g/cm³. In order to extend the database to the PFBR special concretes, point isotropic source dose buildup factors have been generated by the Monte Carlo method using the computer code MCNP-4A. For the above-mentioned special concretes, buildup factor data have been generated in the energy range 0.5 MeV to 10.0 MeV, with thicknesses ranging from 1 mean free path (mfp) to 40 mfp. A fit of the buildup factor data to Capo's formula, compatible with QAD-CGGP, has been attempted

  15. Calculations of the thermal and fast neutron fluxes in the Syrian miniature neutron source reactor using the MCNP-4C code.

    Science.gov (United States)

    Khattab, K; Sulieman, I

    2009-04-01

    The MCNP-4C code, based on the probabilistic approach, was used to model the 3D configuration of the core of the Syrian miniature neutron source reactor (MNSR). The continuous energy neutron cross sections from the ENDF/B-VI library were used to calculate the thermal and fast neutron fluxes in the inner and outer irradiation sites of the MNSR. The thermal fluxes in the MNSR inner irradiation sites were also measured experimentally by the multiple foil activation method (197Au(n,γ)198Au and 59Co(n,γ)60Co). The foils were irradiated simultaneously in each of the five MNSR inner irradiation sites to measure the thermal neutron flux and the epithermal index in each site. The calculated and measured results agree well.
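
    The foil measurement rests on a standard relation; in the simplest (1/v, no self-shielding) approximation, the flux follows from the measured activity as sketched below, with placeholder values rather than the paper's data.

```python
import math

def thermal_flux(activity_Bq: float, n_atoms: float, sigma_cm2: float,
                 decay_lambda_per_s: float, t_irr_s: float) -> float:
    """Thermal neutron flux (n/cm^2/s) inferred from a single activated foil:
    phi = A / (N * sigma * (1 - exp(-lambda * t_irr)))."""
    saturation = 1.0 - math.exp(-decay_lambda_per_s * t_irr_s)
    return activity_Bq / (n_atoms * sigma_cm2 * saturation)

# Hypothetical gold foil: 10 mg of 197Au, 98.65 b capture cross section,
# 198Au half-life 2.695 d, irradiated for 1 h.
n_au = 0.010 / 196.97 * 6.022e23
lam = math.log(2.0) / (2.695 * 24 * 3600)
print(thermal_flux(activity_Bq=2.0e5, n_atoms=n_au,
                   sigma_cm2=98.65e-24, decay_lambda_per_s=lam, t_irr_s=3600.0))
```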

  16. Hydrogen concentration determination in pressure tube samples using differential scanning calorimetry (DSC)

    International Nuclear Information System (INIS)

    Marinescu, R.; Mincu, M.

    2015-01-01

    Zirconium alloys are widely used as structural materials in nuclear reactors. It is known that zirconium-based cladding alloys absorb hydrogen as a result of service in a pressurized water reactor. Hydrogen absorbed during reactor operation in the zirconium alloy from which the pressure tube is made is one of the major factors determining the lifetime of the pressure tube. For monitoring the hydrides, samples of the pressure tube are periodically taken and analyzed. At normal reactor operating temperature, hydrogen has limited solubility in the zirconium lattice and precipitates out of solid solution as zirconium hydride when the solid solubility is exceeded. As a consequence, material characterization of Zr-2.5Nb CANDU pressure tubes is required after manufacturing, but also during operation, to assess their structural integrity and to predict their behavior until the next in-service inspection. Hydrogen and deuterium concentration determination is one of the most important parameters to be evaluated during the experimental tests. Hydrogen present in zirconium alloys has a strong weakening effect. Following the zirconium-hydrogen reaction, the resulting zirconium hydride precipitates in the bulk of the material. Weakening of the material due to the presence of 10 ppm of precipitated hydrogen significantly affects some of its properties. The concentration of hydrogen in a sample can be determined by several methods, one of them being differential scanning calorimetry (DSC). The principle of the method consists in measuring the difference between the amounts of heat required to raise the temperature of a sample and of a reference to a certain value. The experiments were performed using a TA Instruments DSC Q2000 calorimeter. This paper contains experimental work on hydrogen concentration determination by the DSC method. The reproducibility and accuracy of the method used at INR Pitesti are also presented. (authors)

  17. Space and Terrestrial Power System Integration Optimization Code BRMAPS for Gas Turbine Space Power Plants With Nuclear Reactor Heat Sources

    Science.gov (United States)

    Juhasz, Albert J.

    2007-01-01

    In view of the difficult times the US and global economies are experiencing today, funds for the development of advanced fission reactor nuclear power systems for space propulsion and planetary surface applications are currently not available. However, according to the Energy Policy Act of 2005, the U.S. needs to invest in developing fission reactor technology for ground-based terrestrial power plants. Such plants would make a significant contribution toward a drastic reduction of worldwide greenhouse gas emissions and associated global warming. To accomplish this goal the Next Generation Nuclear Plant Project (NGNP) has been established by DOE under the Generation IV Nuclear Systems Initiative. Idaho National Laboratory (INL) was designated as the lead in the development of VHTR (Very High Temperature Reactor) and HTGR (High Temperature Gas Reactor) technology to be integrated with MMW (multi-megawatt) helium gas turbine driven electric AC generators. The advantages of transmitting power in high-voltage DC form over large distances are also explored in the seminar lecture series. As an attractive alternative heat source, the Liquid Fluoride Reactor (LFR), pioneered at ORNL (Oak Ridge National Laboratory) in the mid-1960s, would offer much higher energy yields than current nuclear plants by using an inherently safe energy conversion scheme based on the thorium → 233U fuel cycle and a fission process with a negative temperature coefficient of reactivity. The power plants are to be sized to meet electric power demand during peak periods and also to provide thermal energy for hydrogen (H2) production during off-peak periods. This approach will both supply electric power using environmentally clean nuclear heat, which does not generate greenhouse gases, and provide a clean fuel, H2, for the future, when, due to increased global demand and the decline in discovery of new deposits, our supply of liquid fossil fuels will have been used up. This is

  18. DTA and DSC study on the effect of mechanical dispersion on poly(tetrafluorethylene) properties

    Directory of Open Access Journals (Sweden)

    Dumitraşa Mihai

    2014-12-01

    Full Text Available Poly(tetrafluorethylene) particles were obtained by mechanical processing of the formed polymer (a Teflon bar). In order to assess the effect of mechanical wear on polymer properties, their melting and crystallization behaviour was investigated by DSC and DTA, and the results were compared to those obtained for the native polymer. An increase of the crystallinity degree and a marked decrease of the average molecular weight were found for the samples subjected to mechanical wear, as a result of mechanical degradation of the polymer.
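    The crystallinity degree discussed above is conventionally derived from the melting endotherm as Xc = ΔHm / ΔHm0, where ΔHm0 is the melting enthalpy of a fully crystalline sample. A minimal sketch; the measured enthalpies and the ΔHm0 ≈ 82 J/g value often quoted for 100% crystalline PTFE are illustrative assumptions here.

```python
# Degree of crystallinity from a DSC melting endotherm: Xc = dH_m / dH_m0.
dH_m0 = 82.0  # J/g, assumed enthalpy of fully crystalline PTFE (literature-style value)

samples = {
    "native bar": 30.0,              # J/g, hypothetical measured melting enthalpy
    "mechanically dispersed": 45.0,  # J/g, hypothetical value after wear
}
for label, dh_m in samples.items():
    print("%s: Xc = %.0f%%" % (label, 100 * dh_m / dH_m0))
```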

  19. Stochastic temperature modulation: A new technique in temperature-modulated DSC

    International Nuclear Information System (INIS)

    Schawe, J.E.K.; Huetter, T.; Heitz, C.; Alig, I.; Lellinger, D.

    2006-01-01

    A new temperature-modulated differential scanning calorimetry (TMDSC) technique is introduced. The technique is based on stochastic temperature modulation and has been developed as a consequence of a generalized theory of temperature-modulated DSC. The quasi-static heat capacity and the frequency-dependent complex heat capacity can be determined over a wide frequency range in a single measurement without further calibration. Furthermore, the reversing and non-reversing heat flows are determined directly from the measured data. Examples show the frequency dependence of the glass transition, the isothermal curing of thermosets and a solid-solid transition.
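    Conceptually, the frequency-dependent complex heat capacity in such a stochastic scheme can be estimated as a transfer function: the cross-spectrum between the stochastic heating-rate perturbation and the heat-flow response, divided by the power spectrum of the perturbation. A self-contained sketch on synthetic data; the one-pole (Debye-type) sample response and all numerical values are assumptions, not the instrument's actual algorithm.

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(0)
fs = 1.0                       # sampling rate, Hz
n = 4096
q = rng.standard_normal(n)     # stochastic heating-rate perturbation (K/s)

# Synthetic sample: heat flow responds like a one-pole (Debye-type) system,
# HF = Cp* . q with Cp*(w) = Cp_inf + dCp / (1 + i*w*tau).  Placeholder values.
cp_inf, d_cp, tau = 1.0, 0.5, 20.0
freqs = np.fft.rfftfreq(n, d=1 / fs)
H = cp_inf + d_cp / (1 + 1j * 2 * np.pi * freqs * tau)
hf = np.fft.irfft(np.fft.rfft(q) * H, n=n)

# Transfer-function estimate: Cp*(f) = S_q,HF(f) / S_q,q(f)
f, S_qhf = csd(q, hf, fs=fs, nperseg=1024)
_, S_qq = welch(q, fs=fs, nperseg=1024)
cp_star = S_qhf / S_qq
print(cp_star[:5])   # complex Cp at the lowest resolved frequencies
```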

  20. STRUCTURAL AND MECHANICAL CHARACTERIZATION OF DEFORMED POLYMER USING CONFOCAL RAMAN MICROSCOPY AND DSC

    Directory of Open Access Journals (Sweden)

    Birgit Neitzel

    2016-02-01

    Full Text Available Polymers have various interesting properties, which depend largely on their inner structure. One way to influence the macroscopic behaviour is deformation of the polymer chains, which changes the microstructure. To analyze the microstructure of non-deformed and deformed polymer materials, Raman spectroscopy as well as differential scanning calorimetry (DSC) were used. In the present study we compare the results of crystallinity measurements on deformed polymers using both methods, in order to characterize the differences in microstructure due to deformation. The study is ongoing, and we present the results of the first tests.

  1. Study of thermal transitions in polymers by a multifrequency modulated DSC technique

    OpenAIRE

    Fraga Rivas, Iria

    2010-01-01

    Extraordinary doctorate award, 2009-2010 academic year, Sciences field. Differential Scanning Calorimetry (DSC) is one of the most widely used thermal analysis techniques for the study of transitions and relaxation processes in polymers and also in other materials. It measures the heat flow as a function of time and/or temperature, and determines the energy released or absorbed by a sample when it is heated (cooled) or maintained at a constant temperature. Its advantages are that it is fast a...

  2. Influência de alguns parâmetros experimentais nos resultados de análises calorimétricas diferenciais - DSC

    OpenAIRE

    Bernal, Cláudia; Couto, Andréa Boldarini; Breviglieri, Susete Trazzi; Cavalheiro, Éder Tadeu Gomes

    2002-01-01

    A series of experiments were performed in order to demonstrate to undergraduate students or users of differential scanning calorimetry (DSC) that several factors can influence the qualitative and quantitative aspects of DSC results. Saccharin, an artificial sweetener, was used as a probe, and its thermal behavior is also discussed on the basis of thermogravimetric (TG) and DSC curves.

  3. Influência de alguns parâmetros experimentais nos resultados de análises calorimétricas diferenciais - DSC

    Directory of Open Access Journals (Sweden)

    Bernal Cláudia

    2002-01-01

    Full Text Available A series of experiments were performed in order to demonstrate to undergraduate students or users of differential scanning calorimetry (DSC) that several factors can influence the qualitative and quantitative aspects of DSC results. Saccharin, an artificial sweetener, was used as a probe, and its thermal behavior is also discussed on the basis of thermogravimetric (TG) and DSC curves.

  4. Thermal stability of the DSC ruthenium dye C106 in robust electrolytes

    DEFF Research Database (Denmark)

    Lund, Torben; Phuong, Nguyen Tuyet; Pechy, Peter

    2014-01-01

    We have investigated the thermal stability of the heteroleptic ruthenium complex C106 employed as a sensitizer in dye-sensitized solar cells. The C106 was adsorbed on TiO2 particles and exposed to 2 different iodide/triiodide based redox electrolytes A and B at 80 °C for up to 1500 h in sealed glass … substitution products 3 and 4 formed by replacement of the thiocyanate ligand by NBB after 1500 h of heating at 80 °C. Samples prepared under ambient conditions gave a steady state C106 concentration of 60% of the initial value and 40% substitution products. The C106 degradation was found to be independent … of the degree of dye loading of the TiO2 particles and the ratio between the amount of dyed TiO2 particles and electrolyte volume. Assuming that this substitution is the predominant loss mechanism in a DSC during thermal stress, we estimate the reduction in the DSC efficiency after long-term heating to be 12...

  5. Vapor pressure data for fatty acids obtained using an adaptation of the DSC technique

    Energy Technology Data Exchange (ETDEWEB)

    Matricarde Falleiro, Rafael M. [LPT, Departamento de Processos Quimicos (DPQ), Faculdade de Engenharia Quimica, Universidade de Campinas (UNICAMP), 13083-852 Campinas - SP (Brazil); Akisawa Silva, Luciana Y. [Departamento de Ciencias Exatas e da Terra, Universidade Federal de Sao Paulo (UNIFESP), 09972-270 Diadema - SP (Brazil); Meirelles, Antonio J.A. [EXTRAE, Departamento de Engenharia de Alimentos (DEA), Faculdade de Engenharia de Alimentos, Universidade de Campinas (UNICAMP), 13083-862 Campinas - SP (Brazil); Kraehenbuehl, Maria A., E-mail: mak@feq.unicamp.br [LPT, Departamento de Processos Quimicos (DPQ), Faculdade de Engenharia Quimica, Universidade de Campinas (UNICAMP), 13083-852 Campinas - SP (Brazil)

    2012-11-10

    Highlights: ► Vapor pressure data of fatty acids were measured by Differential Scanning Calorimetry. ► The DSC technique is especially advantageous for expensive chemicals. ► High heating rate was used for measuring the vapor pressure data. ► Antoine constants were obtained for the selected fatty acids. - Abstract: The vapor pressure data for lauric (C12:0), myristic (C14:0), palmitic (C16:0), stearic (C18:0) and oleic (C18:1) acids were obtained using Differential Scanning Calorimetry (DSC). The adjustments made in the experimental procedure included the use of a small sphere (tungsten carbide) placed over the pinhole of the crucible (diameter of 0.8 mm), making it possible to use a faster heating rate than that of the standard method and reducing the experimental time. The measurements were made in the pressure range from 1333 to 9333 Pa, using small sample quantities of fatty acids (3-5 mg) at a heating rate of 25 K min−1. The results showed the effectiveness of the technique under study, as evidenced by the low temperature deviations in relation to the data reported in the literature. The Antoine constants were fitted to the experimental data, whose values are shown in Table 5.
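    The Antoine fitting mentioned in the highlights is a standard nonlinear regression of log10 P = A − B/(T + C). A minimal sketch on hypothetical (T, P) pairs spanning roughly the same pressure range; these are not the paper's measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Antoine equation: log10(P) = A - B / (T + C).
# Illustrative (T, P) pairs standing in for DSC boiling-point data in the
# 1333-9333 Pa range; they are not the paper's measurements.
T = np.array([420.0, 440.0, 460.0, 480.0, 500.0])        # K
P = np.array([1400.0, 2600.0, 4500.0, 7300.0, 11000.0])  # Pa

def antoine(T, A, B, C):
    return A - B / (T + C)

params, _ = curve_fit(antoine, T, np.log10(P), p0=(10.0, 3000.0, -50.0))
A, B, C = params
print("A=%.3f  B=%.1f  C=%.1f" % (A, B, C))
print("P(450 K) ~ %.0f Pa" % 10 ** antoine(450.0, A, B, C))
```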

  6. DSC analyses of static and dynamic precipitation of an Al–Mg–Si–Cu aluminum alloy

    Directory of Open Access Journals (Sweden)

    Manping Liu

    2015-04-01

    Full Text Available In the present investigation, both static and dynamic precipitation of an Al–Mg–Si–Cu aluminum alloy after solid-solution treatment (SST) were comparatively analyzed using differential scanning calorimetry (DSC). Dynamic aging was performed in the SST alloy through equal channel angular pressing (ECAP) at room temperature, 110, 170, 191 and 300 °C. For comparison, static artificial aging was conducted in the SST alloy at 191 °C with two aging times of 4 and 10 h. The DSC analyses reveal that dynamic precipitation occurred in the ECAPed samples, while the activation energies associated with the strengthening precipitates in the dynamically aged samples are considerably higher than those in the SST and statically aged samples. The higher activation energies are probably attributable to the smaller grains and higher dislocation density developed after ECAP. The results of the present investigation allow prediction of the type of dynamic precipitates that influence the strength of the ultrafine-grained alloy during ECAP at various temperatures.

  7. Vapor pressure data for fatty acids obtained using an adaptation of the DSC technique

    International Nuclear Information System (INIS)

    Matricarde Falleiro, Rafael M.; Akisawa Silva, Luciana Y.; Meirelles, Antonio J.A.; Krähenbühl, Maria A.

    2012-01-01

    Highlights: ► Vapor pressure data of fatty acids were measured by Differential Scanning Calorimetry. ► The DSC technique is especially advantageous for expensive chemicals. ► High heating rate was used for measuring the vapor pressure data. ► Antoine constants were obtained for the selected fatty acids. - Abstract: The vapor pressure data for lauric (C12:0), myristic (C14:0), palmitic (C16:0), stearic (C18:0) and oleic (C18:1) acids were obtained using Differential Scanning Calorimetry (DSC). The adjustments made in the experimental procedure included the use of a small sphere (tungsten carbide) placed over the pinhole of the crucible (diameter of 0.8 mm), making it possible to use a faster heating rate than that of the standard method and reducing the experimental time. The measurements were made in the pressure range from 1333 to 9333 Pa, using small sample quantities of fatty acids (3–5 mg) at a heating rate of 25 K min−1. The results showed the effectiveness of the technique under study, as evidenced by the low temperature deviations in relation to the data reported in the literature. The Antoine constants were fitted to the experimental data, whose values are shown in Table 5.

  8. DSC and TMA studies on freezing and thawing gelation of galactomannan polysaccharide

    International Nuclear Information System (INIS)

    Iijima, Mika; Hatakeyama, Tatsuko; Hatakeyama, Hyoe

    2012-01-01

    Research highlights: ► Locust bean gum forms hydrogels by freezing and thawing. ► Syneresis was observed as the number of freezing and thawing cycles (n) increased. ► The dynamic Young's modulus increased with increasing n. ► The non-freezing water content restrained by the hydrogels decreased with increasing n. ► A strong gel with a densely packed network structure formed with increasing n. - Abstract: Among the various polysaccharides known to form hydrogels, locust bean gum (LBG), consisting of a mannose backbone and galactose side chains, has unique characteristics, since LBG forms hydrogels by freezing and thawing. In this study, the effect of thermal history on gelation was investigated by differential scanning calorimetry (DSC) and thermomechanical analysis (TMA). The gel/sol ratio, calculated by the weighing method, was found to be affected by the sol concentration, the freezing rate and the number of freezing and thawing cycles (n). Once LBG hydrogels are formed they are thermally stable, although syneresis was observed as n increased. The dynamic Young's modulus (E′) of the hydrogels, measured by TMA in water, increased with increasing n and decreasing freezing rate. The non-freezing water content, calculated from the DSC melting peak of ice in the gel, decreased with increasing n and decreasing freezing rate. Morphological observation of freeze-dried gels was carried out by scanning electron microscopy (SEM). The above results indicate that a weak hydrogel having a large molecular network structure transformed into a strong gel with a densely packed network structure with increasing n and decreasing freezing rate.

  9. Hydriding properties of amorphous Ni-B alloy studied by DSC and thermogravimetry

    International Nuclear Information System (INIS)

    Spassov, T.; Rangelova, V.

    1999-01-01

    The hydrogenation behaviour of a melt-spun Ni81.5B18.5 amorphous alloy was studied by means of differential scanning calorimetry (DSC) and thermogravimetry (TG) and compared with the hydriding properties of an Fe-B-Si glass. It was found that the amorphous Ni-B alloy absorbs larger amounts of hydrogen than the Fe-B-Si glass, while the initial hydrogen absorption and desorption kinetics of the two alloys are comparable. Hydrogen absorption and desorption reactions in Ni-B were observed to proceed at similar rates at ca. 300 K. The hydrogen desorption appears in DSC as an endothermic peak in the 350-450 K range, preceding the crystallization peak of the amorphous alloy. The enthalpy of hydrogen desorption (ΔHdes = 22 kJ/mol H2) for Ni-B was found to be smaller than that for the Fe-B-Si glass, a finding in contrast to the results on hydrogen diffusion in crystalline α-Fe and Fe-based alloys and in Ni and Ni-based alloys. The hydrogen desorption temperature and enthalpy for Ni81.5B18.5 were found to be independent of the amount of hydrogen absorbed. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  10. Comparison between DSC and TMDSC in the investigation into frozen aqueous cryoprotectants solutions.

    Science.gov (United States)

    Santoveña, A; Piñero, M J; Llabrés, M

    2010-12-01

    The influence of thermal parameters on the observation of thermal events, and on the calculation of the heats of transformation, in aqueous cryoprotectant solutions after freezing was investigated using conventional differential scanning calorimetry (DSC) and temperature-modulated DSC (TMDSC), respectively. The systems under study were formed by pure water and diluted aqueous solutions of mannitol, trehalose, sucrose, sorbitol, and glycine. The influence of different combinations of frequency and amplitude was analyzed in heating-cooling and heating-iso TMDSC scans. Trehalose, sucrose, and sorbitol present a lower critical temperature of primary drying than the other cryoprotectants studied. The selection of the calorimetric variables is crucial: it determines whether thermal events are detected at all, and the numerical values obtained for them. Accordingly, the values of the calorimetric parameters differ depending on whether they are measured in heating-cooling or heating-iso mode. The TMDSC method-1 used in this study employs a higher number of cycles in each thermal event. The use of Lissajous figures and the study of the evolution of the C(p in-phase) signal allow us to understand the complexity of the events detected. The comparative study of both techniques points to selecting the conventional or the modulated technique depending on the type of system and the nature of the events studied.

  11. The non-isothermal DSC kinetics of polyethylene tereftalate–epoxy compatible blends

    International Nuclear Information System (INIS)

    Zvetkov, V.L.; Djoumaliisky, S.; Simeonova-Ivanova, E.

    2013-01-01

    Highlights: ► The non-isothermal DSC kinetics of the reaction of DGEBA with DDS, in particular in the presence of phase-separating PET, has been studied. ► The specific features of the kinetics of the PET formulations in comparison to the pure system have been discussed. ► The fast pre-curing of the epoxy phase suggests sub-micro phase separation of PET and efficient toughening of the epoxy matrix. - Abstract: Polyethylene terephthalate has been dissolved in an epoxy resin based on diglycidyl ether of bisphenol-A, DGEBA, and the epoxy component has been cross-linked with the aid of two diamine hardeners. Two series of samples have been tested at the epoxy-amine stoichiometry applying differential scanning calorimetry, DSC, in scanning mode. One series of samples was pre-cured at low temperatures with the aid of an aliphatic diamine hardener near the gel point and post-cured with diaminodiphenyl sulfone, DDS. The other series of samples contained the higher-temperature hardener only. Consequently, the experimental data obtained in this study on both systems relate to the non-isothermal curing of DGEBA with DDS. The kinetics has been estimated applying preferably isoconversional (model-free) methods. It has been established that the fast pre-curing allows a sub-micro phase separation and efficient toughening of the epoxy matrix.

  12. The non-isothermal DSC kinetics of polyethylene tereftalate–epoxy compatible blends

    Energy Technology Data Exchange (ETDEWEB)

    Zvetkov, V.L., E-mail: zvetval@yahoo.com [Institute of Mechanics, Bulgarian Academy of Sciences, bl. I, Sofia 1113 (Bulgaria); Djoumaliisky, S.; Simeonova-Ivanova, E. [Institute of Mechanics, Bulgarian Academy of Sciences, bl. I, Sofia 1113 (Bulgaria)

    2013-02-10

    Highlights: ► The non-isothermal DSC kinetics of the reaction of DGEBA with DDS, in particular in the presence of phase-separating PET, has been studied. ► The specific features of the kinetics of the PET formulations in comparison to the pure system have been discussed. ► The fast pre-curing of the epoxy phase suggests sub-micro phase separation of PET and efficient toughening of the epoxy matrix. - Abstract: Polyethylene terephthalate has been dissolved in an epoxy resin based on diglycidyl ether of bisphenol-A, DGEBA, and the epoxy component has been cross-linked with the aid of two diamine hardeners. Two series of samples have been tested at the epoxy-amine stoichiometry applying differential scanning calorimetry, DSC, in scanning mode. One series of samples was pre-cured at low temperatures with the aid of an aliphatic diamine hardener near the gel point and post-cured with diaminodiphenyl sulfone, DDS. The other series of samples contained the higher-temperature hardener only. Consequently, the experimental data obtained in this study on both systems relate to the non-isothermal curing of DGEBA with DDS. The kinetics has been estimated applying preferably isoconversional (model-free) methods. It has been established that the fast pre-curing allows a sub-micro phase separation and efficient toughening of the epoxy matrix.

  13. Evaluation of the scale dependent dynamic SGS model in the open source code caffa3d.MBRi in wall-bounded flows

    Science.gov (United States)

    Draper, Martin; Usera, Gabriel

    2015-04-01

    The Scale Dependent Dynamic Model (SDDM) has been widely validated in large-eddy simulations using pseudo-spectral codes [1][2][3]. The scale dependency, particularly the power law, has also been proved in a priori studies [4][5]. To the authors' knowledge there have been only a few attempts to use the SDDM in finite difference (FD) and finite volume (FV) codes [6][7], finding some improvements with the dynamic procedures (scale-independent or scale-dependent approach), but not showing the behavior of the scale-dependence parameter when using the SDDM. The aim of the present paper is to evaluate the SDDM in the open source code caffa3d.MBRi, an updated version of the code presented in [8]. caffa3d.MBRi is a FV code, second-order accurate, parallelized with MPI, in which the domain is divided into unstructured blocks of structured grids. To accomplish this, 2 cases are considered: flow between flat plates and flow over a rough surface with the presence of a model wind turbine, taking for this case the experimental data presented in [9]. In both cases the standard Smagorinsky Model (SM), the Scale Independent Dynamic Model (SIDM) and the SDDM are tested. As presented in [6][7], slight improvements are obtained with the SDDM. Nevertheless, the behavior of the scale-dependence parameter supports the generalization of the dynamic procedure proposed in the SDDM, particularly taking into account that no explicit filter is used (the implicit filter is unknown). [1] F. Porté-Agel, C. Meneveau, M.B. Parlange. "A scale-dependent dynamic model for large-eddy simulation: application to a neutral atmospheric boundary layer". Journal of Fluid Mechanics, 2000, 415, 261-284. [2] E. Bou-Zeid, C. Meneveau, M.B. Parlange. "A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows". Physics of Fluids, 2005, 17, 025105 (18p). [3] R. Stoll, F. Porté-Agel. "Dynamic subgrid-scale models for momentum and scalar fluxes in large-eddy simulations of

  14. LIGHT CURVES OF CORE-COLLAPSE SUPERNOVAE WITH SUBSTANTIAL MASS LOSS USING THE NEW OPEN-SOURCE SUPERNOVA EXPLOSION CODE (SNEC)

    International Nuclear Information System (INIS)

    Morozova, Viktoriya; Renzo, Mathieu; Ott, Christian D.; Clausen, Drew; Couch, Sean M.; Ellis, Justin; Roberts, Luke F.; Piro, Anthony L.

    2015-01-01

    We present the SuperNova Explosion Code (SNEC), an open-source Lagrangian code for the hydrodynamics and equilibrium-diffusion radiation transport in the expanding envelopes of supernovae. Given a model of a progenitor star, an explosion energy, and an amount and distribution of radioactive nickel, SNEC generates the bolometric light curve, as well as the light curves in different broad bands assuming blackbody emission. As a first application of SNEC, we consider the explosions of a grid of 15 M⊙ (at zero-age main sequence, ZAMS) stars whose hydrogen envelopes are stripped to different extents and at different points in their evolution. The resulting light curves exhibit plateaus with durations of ∼20–100 days if ≳1.5–2 M⊙ of hydrogen-rich material is left and no plateau if less hydrogen-rich material is left. If these shorter plateau lengths are not seen for SNe IIP in nature, it suggests that, at least for ZAMS masses ≲20 M⊙, hydrogen mass loss occurs as an all or nothing process. This perhaps points to the important role binary interactions play in generating the observed mass-stripped supernovae (i.e., Type Ib/c events). These light curves are also unlike what is typically seen for SNe IIL, arguing that simply varying the amount of mass loss cannot explain these events. The most stripped models begin to show double-peaked light curves similar to what is often seen for SNe IIb, confirming previous work that these supernovae can come from progenitors that have a small amount of hydrogen and a radius of ∼500 R⊙.

  15. LIGHT CURVES OF CORE-COLLAPSE SUPERNOVAE WITH SUBSTANTIAL MASS LOSS USING THE NEW OPEN-SOURCE SUPERNOVA EXPLOSION CODE (SNEC)

    Energy Technology Data Exchange (ETDEWEB)

    Morozova, Viktoriya; Renzo, Mathieu; Ott, Christian D.; Clausen, Drew; Couch, Sean M.; Ellis, Justin; Roberts, Luke F. [TAPIR, Walter Burke Institute for Theoretical Physics, MC 350-17, California Institute of Technology, Pasadena, CA 91125 (United States); Piro, Anthony L., E-mail: morozvs@tapir.caltech.edu [Carnegie Observatories, 813 Santa Barbara Street, Pasadena, CA 91101 (United States)

    2015-11-20

    We present the SuperNova Explosion Code (SNEC), an open-source Lagrangian code for the hydrodynamics and equilibrium-diffusion radiation transport in the expanding envelopes of supernovae. Given a model of a progenitor star, an explosion energy, and an amount and distribution of radioactive nickel, SNEC generates the bolometric light curve, as well as the light curves in different broad bands assuming blackbody emission. As a first application of SNEC, we consider the explosions of a grid of 15 M⊙ (at zero-age main sequence, ZAMS) stars whose hydrogen envelopes are stripped to different extents and at different points in their evolution. The resulting light curves exhibit plateaus with durations of ∼20–100 days if ≳1.5–2 M⊙ of hydrogen-rich material is left and no plateau if less hydrogen-rich material is left. If these shorter plateau lengths are not seen for SNe IIP in nature, it suggests that, at least for ZAMS masses ≲20 M⊙, hydrogen mass loss occurs as an all or nothing process. This perhaps points to the important role binary interactions play in generating the observed mass-stripped supernovae (i.e., Type Ib/c events). These light curves are also unlike what is typically seen for SNe IIL, arguing that simply varying the amount of mass loss cannot explain these events. The most stripped models begin to show double-peaked light curves similar to what is often seen for SNe IIb, confirming previous work that these supernovae can come from progenitors that have a small amount of hydrogen and a radius of ∼500 R⊙.

  16. Studying the co-evolution of production and test code in open source and industrial developer test processes through repository mining

    NARCIS (Netherlands)

    Zaidman, A.; Van Rompaey, B.; Van Deursen, A.; Demeyer, S.

    2010-01-01

    Many software production processes advocate rigorous development testing alongside functional code writing, which implies that both test code and production code should co-evolve. To gain insight into the nature of this co-evolution, this paper proposes three views (realized by a tool called TeMo)

  17. Source Code Analysis Laboratory (SCALe)

    Science.gov (United States)

    2012-04-01

    products (including services) and processes. The agency has also published ISO/IEC 17025:2005, General Requirements for the Competence of Testing… SCALe undertakes. Testing and calibration laboratories that comply with ISO/IEC 17025 also operate in accordance with ISO 9001. • NIST National… assessed by the accreditation body against all of the requirements of ISO/IEC 17025:2005, General requirements for the competence of testing and

  18. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated with the example of the WIMSD code, which belongs to the most popular tools for reactor calculations. Most of the approaches discussed here can easily be adapted to any other lattice code. The description of the code assumes basic knowledge of the reactor lattice, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  19. Multiple-specimen absolute paleointensity determination with the MSP-DSC protocol: Advantages and drawbacks.

    Science.gov (United States)

    Camps, P.; Fanjat, G.; Poidras, T.; Carvallo, C.; Nicol, P.

    2012-04-01

    The MSP-DSC protocol (Dekkers & Bohnel, 2006, EPSL; Fabian & Leonhardt, 2010, EPSL) is a recent development in the methodology for documenting the intensity of the ancient Earth magnetic field. Applicable to both rocks and archaeological artifacts, it allows us to use samples that until now were not measured because their magnetic properties do not meet the selection criteria required by conventional methods. However, this new experimental protocol requires that samples be heated and cooled in a field parallel to their natural remanent magnetization (NRM). Currently, standard paleointensity furnaces do not match this constraint precisely. Yet such a measurement protocol seems very promising, since it could potentially double the number of available data. We are developing in Montpellier (France) a very fast infrared heating oven dedicated to this protocol. Two key points determine its characteristics. The first is to heat a rock sample of standard 10 cc volume as uniformly and as fast as possible. The second is to apply to the sample, during heating and cooling, a precise magnetic induction field, perfectly controlled in 3D. We tested and calibrated a preliminary version of this oven along with the MSP-DSC protocol on 3 historical lava flows, 2 from Reunion Island (erupted in 2002 and 2007) and one from Etna (erupted in 1983). These lava flows were selected because they have different magnetic behaviors: Reunion 2002 is rather SD-PSD-like, Reunion 2007 is PSD-MD-like, and Etna 1983 is MD-like. The paleointensity determinations obtained with the original protocol of Dekkers and Bohnel (2006, EPSL) are within ±1 μT of the known field for the three lava flows. The same precision is obtained when we apply the fraction correction (MSP-FC protocol). However, we systematically observed a loss of linearity in the MSP-FC plots. In addition, like Muxworthy and Taylor (2011, GJI), we found that the domain-state correction is difficult to apply since alpha
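    For orientation, the multispecimen estimate reduces to a linear regression: each specimen's overprint ratio Q = (m1 − m0)/m0, obtained after a single heating-cooling step in a lab field applied parallel to the NRM, is plotted against that field, and the zero crossing estimates the ancient field. A sketch on synthetic numbers:

```python
import numpy as np

# Multispecimen (Dekkers-Bohnel) estimate: each specimen is heated once and
# cooled in a lab field H applied parallel to its NRM; the overprint ratio
# Q = (m1 - m0) / m0 crosses zero when H equals the ancient field.
# The values below are synthetic, not the Reunion/Etna measurements.
H_lab = np.array([10.0, 20.0, 30.0, 40.0, 50.0])    # applied lab field, uT
Q = np.array([-0.42, -0.19, 0.02, 0.21, 0.44])      # hypothetical ratios

slope, intercept = np.polyfit(H_lab, Q, 1)
H_anc = -intercept / slope          # zero crossing of the linear fit
print("paleointensity ~ %.1f uT" % H_anc)
```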

  20. Improvement in Performance of ZnO based DSC Prepared by Spraying Method

    Directory of Open Access Journals (Sweden)

    Rangga Winantyo

    2013-09-01

    Full Text Available This paper reports the effect of TiCl4 on the performance of ZnO-based DSC. ZnO was used due to its stability against photo-corrosion and photochemical properties similar to those of TiO2. Thin films of nanocrystalline ZnO were deposited on transparent conducting oxide glass using the spray method. The ZnO films were treated with TiCl4. The cell's efficiency was found to be 2.5% with TiCl4 post-treatment and 1.9% without it.

  1. DSC and curing kinetics study of epoxy grouting diluted with furfural-acetone slurry

    Science.gov (United States)

    Yin, H.; Sun, D. W.; Li, B.; Liu, Y. T.; Ran, Q. P.; Liu, J. P.

    2016-07-01

    The use of furfural-acetone slurry as an active diluent of bisphenol-A epoxy resin (DGEBA) groutings has been studied by dynamic, non-isothermal DSC for the first time. The curing kinetics were investigated by non-isothermal differential scanning calorimetry at different heating rates. The activation energy (Ea) was calculated using the Kissinger and Ozawa methods, and the results showed that Ea increased from 58.87 to 71.13 kJ/mol after the diluents were added. The furfural-acetone epoxy matrix could cure completely at the theoretical curing temperature of 365.8 K with a curing time of 139 min, as determined from the kinetic model parameters.
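    The Kissinger analysis cited above extracts Ea from the shift of the exotherm peak temperature Tp with heating rate β via ln(β/Tp²) = const − Ea/(R·Tp). A minimal sketch on illustrative data (not the paper's measurements):

```python
import numpy as np

# Kissinger analysis: ln(beta / Tp^2) = const - Ea / (R * Tp), so a linear
# fit of ln(beta / Tp^2) against 1/Tp gives Ea from the slope.
# Heating rates and peak temperatures below are hypothetical.
R = 8.314                                    # J/(mol K)
beta = np.array([5.0, 10.0, 15.0, 20.0])     # heating rates, K/min
Tp = np.array([395.0, 403.0, 408.0, 412.0])  # exotherm peak temperatures, K

y = np.log(beta / Tp**2)   # beta units only shift the intercept, not the slope
x = 1.0 / Tp
slope, _ = np.polyfit(x, y, 1)
Ea = -slope * R
print("Ea ~ %.1f kJ/mol" % (Ea / 1000))
```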

  2. Synthesis, characterization and TG-DSC study of cadmium halides adducts with caffeine

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Robson F. de; Silva, Ademir O. da; Silva, Umberto G. da

    2003-11-28

    The synthesis, characterization and TG-DSC study of the compounds CdX2·ncaff, where X = Cl, Br and I, n = 1 and 2, and caff = caffeine, is reported. It is verified that caffeine coordinates through more than one coordination site, although the nitrogen of the imidazole ring is the main coordination site. The following thermal stability trend is observed: Cl > Br > I, and monoadducts are more stable than bisadducts. The thermal degradation (td) enthalpies have the values (kJ mol−1): 58.2 and 71.5; 74.9 and 91.4; 31.1 and 47.5 for the Cl, Br and I mono- and bisadducts, respectively.

  3. Evaluation of the interaction of surfactants with stratum corneum model membrane from Bothrops jararaca by DSC.

    Science.gov (United States)

    Baby, André Rolim; Lacerda, Aurea Cristina Lemos; Velasco, Maria Valéria Robles; Lopes, Patrícia Santos; Kawano, Yoshio; Kaneko, Telma Mary

    2006-07-06

    The interaction of the surfactants sodium dodecyl sulfate (SDS), cetyl trimethyl ammonium chloride (CTAC) and lauryl alcohol ethoxylated (12 mol ethylene oxide) (LAE-12OE) with the stratum corneum (SC) of shed snake skins from Bothrops jararaca, used as a model membrane, was evaluated and thermally characterized by differential scanning calorimetry (DSC). Surfactant solutions were employed above the critical micellar concentration (CMC) with a treatment time of 8 h. The SDS interaction with the SC model membrane increased the characteristic transition temperature of 130 °C by approximately 10 °C for water loss and keratin denaturation, indicating an increase in water content. Samples treated with CTAC showed a decrease in the water-loss temperature, while for the LAE-12OE-treated samples no changes in the transition temperature were observed.

  4. Complex Heat Capacity of Lithium Borate Glasses Studied by Modulated DSC

    Science.gov (United States)

    Matsuda, Yu; Matsui, Chihiro; Ike, Yuji; Kodama, Masao; Kojima, Seiji

    2006-05-01

    Complex heat capacity, Cp* = Cp′ − iCp″, of lithium borate glasses xLi2O·(1−x)B2O3 (x = 0.00 − 0.33) has been investigated by Modulated DSC (MDSC). We have successfully observed the frequency-dependent Cp* by MDSC in the frequency range 0.01 to 0.1 Hz, and the average relaxation time of the glass transition has been determined as a function of temperature. Moreover, the composition dependence of the thermal properties has been investigated. The calorimetric glass transition temperatures become higher with increasing Li2O concentration and show a broad maximum around x = 0.26-0.28. The width of the glass transition region becomes narrower as Li2O increases. These results relate to the change in the fragility of the system. It has been proven that complex heat capacity spectroscopy by MDSC is a powerful tool for investigating glass transition phenomena.

  5. The Study of Phase Transformations of AlSi9Cu3 Alloy by DSC Method

    Directory of Open Access Journals (Sweden)

    Piątkowski J.

    2016-12-01

    Full Text Available With the use of differential scanning calorimetry (DSC), the characteristic temperatures and enthalpies of phase transformations were determined for the commercial AlSi9Cu3 cast alloy (EN AC-46000), which is used, for example, for pressure die castings in the automotive industry. During heating at a rate of 10 °C·min−1, two endothermic effects were observed: the first between 495 °C and 534 °C, the second between 555 °C and 631 °C. The associated transformation enthalpies are +6 J g−1 and +327 J g−1. During cooling at the same rate, three exothermic reactions were observed at temperatures between 584 °C and 471 °C. The total enthalpy of these transitions is −348 J g−1.

  6. Study of the source-detector system geometry using the MCNP-X code in the flowrate measurement with radioactive tracers

    Energy Technology Data Exchange (ETDEWEB)

    Avilan Puertas, Eddie, E-mail: epuertas@nuclear.ufrj.br [Universidad Central de Venezuela (UCV), Facultad de Ingenieria, Departamento de Fisica Aplicada, Caracas (Venezuela, Bolivarian Republic of); Braz, Delson, E-mail: delson@lin.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Brandao, Luis E.; Salgado, Cesar M., E-mail: brandao@ien.gov.br, E-mail: otero@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2015-07-01

    The use of radioactive tracers for flow rate measurement applies to a great variety of situations; however, the accuracy of the technique is highly dependent on an adequate choice of the experimental measurement conditions. To measure the flow rate of fluids in partially filled ducts, it is necessary to measure both the flow velocity and the fluid height. The flow velocity can be measured with the cross-correlation function, and the fluid level with a fluid-level meter system. One of the error factors when measuring flow rate is the correct setting of the source-detector geometry of the fluid-level meter system. The goal of the present work is to establish, by means of MCNP-X code simulations, the experimental parameters for measuring the fluid level. The experimental tests will be carried out in a flow system consisting of a 10 mm diameter acrylic tube, with water and oil as fluids. The radioactive tracer to be used is 82Br, and for the detection two 1″ NaI(Tl) scintillation detectors will be employed, shielded with collimators of 0.5 cm and 1 cm circular aperture diameter. (author)
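    The cross-correlation technique mentioned above finds the tracer transit time between two detectors as the lag that maximizes their cross-correlation; dividing the detector spacing by that lag gives the velocity. A self-contained sketch on synthetic signals (spacing, sampling rate and noise level are assumptions):

```python
import numpy as np

# Transit-time velocimetry: the tracer pulse seen by the upstream detector
# reappears at the downstream detector after a lag; the peak of the
# cross-correlation function gives that lag.  All values are synthetic.
fs = 100.0                      # sampling rate, Hz (assumed)
L = 0.50                        # detector spacing, m (assumed)
t = np.arange(0, 10, 1 / fs)

pulse = np.exp(-((t - 2.0) / 0.3) ** 2)   # tracer cloud at detector 1
true_lag = 0.8                            # s, built into the synthetic data
rng = np.random.default_rng(1)
s1 = pulse + 0.05 * rng.standard_normal(t.size)
s2 = np.interp(t - true_lag, t, pulse) + 0.05 * rng.standard_normal(t.size)

# Peak of the cross-correlation of the (mean-removed) signals gives the lag.
xcorr = np.correlate(s2 - s2.mean(), s1 - s1.mean(), mode="full")
lags = np.arange(-t.size + 1, t.size) / fs
lag = lags[np.argmax(xcorr)]
print("lag = %.2f s  ->  velocity = %.2f m/s" % (lag, L / lag))
```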

  7. Neutron and photon measurements through concrete from a 15 GeV electron beam on a target-comparison with models and calculations. [Intermediate energy source term, Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Jenkins, T M [Stanford Linear Accelerator Center, CA (USA)

    1979-02-15

    Measurements of neutron and photon dose equivalents from a 15 GeV electron beam striking an iron target inside a scale model of a PEP IR hall are described and compared with analytic-empirical calculations and with the Monte Carlo code MORSE. The MORSE code is able to predict both absolute neutron and photon dose equivalents for geometries where the shield is relatively thin, but fails as the shield thickness is increased. An intermediate-energy source term is postulated for analytic-empirical neutron shielding calculations, to go along with the giant resonance and high-energy terms, and a new source term due to neutron capture is postulated for analytic-empirical photon shielding calculations. The source strengths for each energy source term, and each type, are given from analysis of the measurements.
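    The analytic-empirical approach referred to here models the dose outside a thick shield as a sum of exponentially attenuated source terms, one per energy component. A minimal sketch with placeholder source strengths and attenuation lengths (not the fitted values from these measurements):

```python
import math

# Dose outside the shield as a sum of attenuated source terms:
#   D(d) = (1/r^2) * sum_i S_i * exp(-d / lambda_i)
# Source strengths S_i and attenuation lengths lambda_i are placeholders.
terms = [
    ("giant resonance", 1.0e3, 30.0),   # label, S_i (arb.), lambda_i (g/cm^2)
    ("intermediate energy", 4.0e2, 55.0),
    ("high energy", 1.0e2, 120.0),
]
r = 5.0      # source-detector distance, m (assumed)
d = 200.0    # shield thickness, g/cm^2 (assumed)

dose = sum(S * math.exp(-d / lam) for _, S, lam in terms) / r**2
print("dose equivalent ~ %.3e (arbitrary units)" % dose)
```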

  8. Estimation of water-coal surface interaction during heat treatment of coal by use of FTir and DSC; FTir to DSC wo mochiita sekitan-mizu kan sogo sayo no teiryoteki hyoka

    Energy Technology Data Exchange (ETDEWEB)

    Miura, K.; Mae, K.; Morozumi, F.; Kusakawa, T. [Kyoto University, Kyoto (Japan)

    1997-10-30

    The authors have recently presented a method to estimate the strength distribution of hydrogen bonds in coal using FTir and DSC. The method was applied to estimate the strength of the coal-water interaction in two different coals and to estimate the enthalpy change arising from the change in hydrogen bonding during the desorption of water. The estimated enthalpy change was compared with the total enthalpy change measured by DSC to examine the importance of hydrogen bonding during the desorption of water. 1 ref., 6 figs.

  9. X-ray and DSC studies on the melt-recrystallization process of poly(butylene naphthalate)

    International Nuclear Information System (INIS)

    Yasuniwa, Munehisa; Tsubakihara, Shinsuke; Fujioka, Takashi

    2003-01-01

    Melt-recrystallization during the heating of poly(butylene naphthalate) (PBN) was studied by X-ray analysis and differential scanning calorimetry (DSC). The DSC melting curve of an isothermally crystallized sample showed double endothermic peaks. With increasing temperature, wide-angle X-ray diffraction (WAXD) patterns of the sample were obtained successively. The crystal structure did not change during the double melting process. The X-ray diffraction intensity decreased gradually in the temperature region up to about 200 °C, and then increased distinctly before the steep decrease due to final melting. This increase is interpreted as evidence of recrystallization. The temperature derivative of the diffraction intensity was similar to the DSC melting curve.

  10. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies this hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
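    Unique decipherability itself, the baseline notion this work weakens, is decidable for finite codes by the classical Sardinas-Patterson test. A compact sketch:

```python
def is_ud(code):
    """Sardinas-Patterson test: a finite code is uniquely decipherable
    iff no dangling suffix ever equals a codeword."""
    code = set(code)

    def suffixes(a_set, b_set):
        # All w such that some word in one set equals a word in the other
        # set followed by w (excluding the trivial equal-word case).
        out = set()
        for a in a_set:
            for b in b_set:
                if a != b and a.startswith(b):
                    out.add(a[len(b):])
        return out

    seen = set()
    current = suffixes(code, code)
    while current:
        if current & code:
            return False      # a dangling suffix is a codeword: not UD
        if current <= seen:
            return True       # no new suffixes can appear: UD
        seen |= current
        current = suffixes(code, current) | suffixes(current, code)
    return True

print(is_ud({"0", "01", "11"}))   # True: uniquely decipherable
print(is_ud({"0", "01", "10"}))   # False: "010" parses as 0+10 or 01+0
```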

  11. HELIOS–RETRIEVAL: An Open-source, Nested Sampling Atmospheric Retrieval Code; Application to the HR 8799 Exoplanets and Inferred Constraints for Planet Formation

    Energy Technology Data Exchange (ETDEWEB)

    Lavie, Baptiste; Mendonça, João M.; Malik, Matej; Demory, Brice-Olivier; Grimm, Simon L. [University of Bern, Space Research and Planetary Sciences, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Mordasini, Christoph; Oreshenko, Maria; Heng, Kevin [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Bonnefoy, Mickaël [Université Grenoble Alpes, IPAG, F-38000, Grenoble (France); Ehrenreich, David, E-mail: baptiste.lavie@space.unibe.ch, E-mail: kevin.heng@csh.unibe.ch [Observatoire de l’Université de Genève, 51 chemin des Maillettes, 1290, Sauverny (Switzerland)

    2017-09-01

    We present an open-source retrieval code named HELIOS–RETRIEVAL, designed to obtain chemical abundances and temperature–pressure profiles by inverting the measured spectra of exoplanetary atmospheres. In our forward model, we use an exact solution of the radiative transfer equation, in the pure absorption limit, which allows us to analytically integrate over all of the outgoing rays. Two chemistry models are considered: unconstrained chemistry and equilibrium chemistry (enforced via analytical formulae). The nested sampling algorithm allows us to formally implement Occam’s Razor based on a comparison of the Bayesian evidence between models. We perform a retrieval analysis on the measured spectra of the four HR 8799 directly imaged exoplanets. Chemical equilibrium is disfavored for HR 8799b and c. We find supersolar C/H and O/H values for the outer HR 8799b and c exoplanets, while the inner HR 8799d and e exoplanets have a range of C/H and O/H values. The C/O values range from being superstellar for HR 8799b to being consistent with stellar for HR 8799c and being substellar for HR 8799d and e. If these retrieved properties are representative of the bulk compositions of the exoplanets, then they are inconsistent with formation via gravitational instability (without late-time accretion) and consistent with a core accretion scenario in which late-time accretion of ices occurred differently for the inner and outer exoplanets. For HR 8799e, we find that spectroscopy in the K band is crucial for constraining C/O and C/H. HELIOS–RETRIEVAL is publicly available as part of the Exoclimes Simulation Platform (http://www.exoclime.org).

  12. Simulation of equivalent dose due to accidental electron beam loss in Indus-1 and Indus-2 synchrotron radiation sources using FLUKA code

    International Nuclear Information System (INIS)

    Sahani, P.K.; Dev, Vipin; Singh, Gurnam; Haridas, G.; Thakkar, K.K.; Sarkar, P.K.; Sharma, D.N.

    2008-01-01

    Indus-1 and Indus-2 are two synchrotron radiation sources at the Raja Ramanna Centre for Advanced Technology (RRCAT), India. The stored electron energies in Indus-1 and Indus-2 are 450 MeV and 2.5 GeV, respectively. During operation of the storage ring, accidental electron beam loss may occur in addition to normal beam losses. The bremsstrahlung radiation produced by these beam losses creates a major radiation hazard in these high-energy electron accelerators. FLUKA, the Monte Carlo radiation transport code, was used to simulate the accidental beam loss. The simulation was carried out to estimate the equivalent dose likely to be received by a person trapped close to the storage ring. Depth-dose profiles in a water phantom for 450 MeV and 2.5 GeV electron beams were generated, from which the percentage of energy absorbed in a 30 cm water phantom (analogous to the human body) was calculated. The simulation showed that the percentage energy deposition in the phantom is about 19% for 450 MeV electrons and 4.3% for 2.5 GeV electrons. The dose buildup factors in the 30 cm water phantom for the 450 MeV and 2.5 GeV electron beams were found to be 1.85 and 2.94, respectively. Based on the depth-dose profiles, dose equivalent indices of 0.026 Sv and 1.08 Sv are likely to be received by a trapped person near the storage ring in Indus-1 and Indus-2, respectively. (author)

  13. Comparative study of two methods of crystallinity analysis, X-ray and DSC, using an injected linear low density polyethylene (LLDPE) irradiated by gamma radiation

    International Nuclear Information System (INIS)

    Oliveira, Ana C.F. de; Ferreto, Helio F.R.; Parra, Duclerc F.; Lugao, Ademar B.

    2015-01-01

    Linear low density polyethylene (LLDPE) is a linear polymer with short chain branching. In this work, LLDPE was irradiated in a 60Co gamma source with 2000 kCi of activity, in the presence of air, with doses of 5, 10, 20, 50 or 100 kGy, at a dose rate of about 5 kGy h−1, at room temperature. After irradiation, the samples were heated for 60 min at 100 °C to promote recombination and annihilation of residual radicals. Injected and irradiated LLDPE samples were characterized to identify the effects of degradation, chain scission and crosslinking at each dose. Since the radiation process changes the crystallization, it is possible to compare the degree of crystallinity of LLDPE obtained by DSC with that obtained by X-ray diffraction. (author)
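    On the X-ray side of such a comparison, a common crystallinity index is the ratio of the crystalline peak area to the total scattered intensity once the amorphous halo has been separated out. A deliberately simplified sketch on synthetic data; the peak positions are loosely inspired by the (110)/(200) reflections of polyethylene, and the halo separation is assumed known rather than deconvolved:

```python
import numpy as np

# Crude XRD crystallinity index:  Xc = A_cryst / (A_cryst + A_amorph),
# from separating sharp crystalline reflections from the amorphous halo.
two_theta = np.linspace(10, 35, 500)
amorph = 40 * np.exp(-((two_theta - 20) / 5) ** 2)        # broad halo
cryst = 100 * np.exp(-((two_theta - 21.5) / 0.3) ** 2) \
      + 30 * np.exp(-((two_theta - 23.9) / 0.3) ** 2)     # PE-like sharp peaks
intensity = amorph + cryst                                 # synthetic pattern

step = two_theta[1] - two_theta[0]
A_total = intensity.sum() * step
A_amorph = amorph.sum() * step          # assumed-known halo contribution
Xc = (A_total - A_amorph) / A_total
print("XRD crystallinity index ~ %.0f%%" % (100 * Xc))
```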

  14. DSC, FT-IR, NIR, NIR-PCA and NIR-ANOVA for determination of chemical stability of diuretic drugs: impact of excipients

    Directory of Open Access Journals (Sweden)

    Gumieniczek Anna

    2018-03-01

    Full Text Available It is well known that drugs can react directly with excipients. In addition, excipients can be a source of impurities that either react directly with drugs or catalyze their degradation. Thus, binary mixtures of three diuretics, torasemide, furosemide and amiloride, with different excipients, i.e. citric acid anhydrous, povidone K25 (PVP), magnesium stearate (Mg stearate), lactose, D-mannitol, glycine, calcium hydrogen phosphate anhydrous (CaHPO4) and starch, were examined to detect interactions. High temperature and humidity or UV/VIS irradiation were applied as stressing conditions. Differential scanning calorimetry (DSC), FT-IR and NIR were used to collect adequate information. In addition, chemometric assessments of the NIR signals with principal component analysis (PCA) and ANOVA were applied.

  15. Efficient Coding of Information: Huffman Coding

    Indian Academy of Sciences (India)

    to a stream of equally-likely symbols so as to recover the original stream in the event of errors. The for- ... The source-coding problem is one of finding a mapping from U to a ... probability that the random variable X takes the value x written as ...
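    Since this record is heavily elided, a compact illustration of the Huffman construction itself may help: repeatedly merge the two least probable subtrees, so that frequent symbols end up with short codewords. A minimal sketch:

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code (shorter codewords for more frequent symbols)."""
    freq = Counter(text)
    # Heap of (weight, tiebreak, tree); a tree is a symbol or [left, right].
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:                        # degenerate single-symbol source
        return {heap[0][2]: "0"}
    while len(heap) > 1:                  # merge the two lightest subtrees
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, [t1, t2]))
        count += 1
    codes = {}
    def walk(tree, prefix):               # read codewords off the tree
        if isinstance(tree, list):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_code("abracadabra")
print(codes)                              # 'a' gets the shortest codeword
print("".join(codes[c] for c in "abracadabra"))
```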

  16. Calculations of fuel burn up and radionuclide inventories in the Syrian miniature neutron source reactor using the WIMSD4 and CITATION codes

    International Nuclear Information System (INIS)

    Khattab, K.

    2005-01-01

    The WIMSD4 code is used to generate the fuel group constants and the infinite multiplication factor as functions of the reactor operating time for 10, 20, and 30 kW operating power levels. The uranium burnup rate and burnup percentage, the amounts of the plutonium isotopes, the concentrations and radioactivities of the fission products and actinide radionuclides accumulated in the reactor core, and the total radioactivity of the reactor core are also calculated using the WIMSD4 code. The CITATION code is used to calculate the changes in the effective multiplication factor of the reactor. (author)
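    As a sanity check on burnup figures of this kind, the uranium consumption follows directly from the thermal power, at roughly 200 MeV of recoverable energy per fission. A back-of-envelope sketch; the operating time and the ~990 g 235U loading are illustrative assumptions:

```python
# Back-of-envelope 235U consumption for an MNSR-type core.
E_FISSION = 200e6 * 1.602e-19      # J per fission (~200 MeV recoverable)
N_A = 6.022e23

power_w = 30e3                     # 30 kW operating level (from the record)
hours = 250.0                      # assumed cumulative operating time

fissions = power_w * hours * 3600 / E_FISSION
grams_burnt = fissions / N_A * 235.0
loading_g = 990.0                  # hypothetical initial 235U inventory
print("burnt: %.3f g (%.4f%% of loading)"
      % (grams_burnt, 100 * grams_burnt / loading_g))
```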

  17. Qualitative and kinetic analysis of torrefaction of lignocellulosic biomass using DSC-TGA-FTIR

    Directory of Open Access Journals (Sweden)

    Bimal Acharya

    2015-11-01

    Full Text Available Torrefaction is a thermochemical conversion technique for improving the fuel properties of lignocellulosic biomass by treating it at 200 °C-300 °C in a minimum-oxygen environment for a reasonable residence time. In this study, the thermal decomposition and thermal activity of miscanthus and wheat straw during torrefaction at 200 °C, 275 °C and 300 °C in a nitrogen environment for 45 minutes of residence time were analyzed in a simultaneous thermogravimetric analyzer (micro-TGA) equipped with differential scanning calorimetry (DSC), and in a macro-TGA. The output of the micro-TGA was fed into a Fourier transform infrared spectrometer (FTIR) and a qualitative analysis of the gaseous products was carried out. The compositions of the different gas products during torrefaction of the biomass were compared critically and the kinetics analyzed. It was found that the weight loss due to degradation of the initial biomass in the second stage (the torrefaction process) is a much faster conversion process than the weight loss in the first stage (the drying process). The weight loss of biomass increases with increasing residence time and torrefaction treatment temperature. The yield after torrefaction is a solid bio-coal product. The torrefied product was less reactive and had a nearly 25% higher heating value than the raw biomass. Between the two feedstocks studied, torrefied miscanthus proved to be a more stable fuel than torrefied wheat straw. The major gaseous components observed during torrefaction were water, carbon dioxide, carbon monoxide and 1,2-Dibromethylene.

  18. Thermodynamic optimization of individual steel database by means of systematic DSC measurements according the CALPHAD approach

    International Nuclear Information System (INIS)

    Presoly, P; Bernhard, C; Six, J

    2016-01-01

    Reliable thermodynamic data are essential information required for the design of new steel types and are a prerequisite to effective process optimization and simulation. Moreover, it is important to know the exact temperatures at which the high-temperature phase transformations (TLiquid, TSolid, TPerit, Tγ→δ) occur in order to describe the solidification sequence and to describe further processing parameters. By utilizing DTA/DSC measurements, our earlier experimental studies of selected commercial DP, TRIP and high-Mn TWIP steels have indicated that currently commercially available databases can often not be utilised to reliably describe the behaviour and microstructural development in such complex alloy systems. Because of these ostensible deficiencies, an experimental study was undertaken in an attempt to determine the pertaining thermodynamic data to analyse the behaviour of the important five-component Fe-C-Si-Mn-Al alloy system. High purity model alloys with systematic alloy variations were prepared and utilized in order to determine the influence of individual alloying elements in this complex, but industrially important alloy system. The present study provides new validated experimental thermodynamic data and analysis of the five-component Fe-C-Si-Mn-Al system, which will allow the construction of new phase diagrams, prediction of solidification sequences and the assessment of micro-segregation. (paper)

  19. Thermodynamic optimization of individual steel database by means of systematic DSC measurements according the CALPHAD approach

    Science.gov (United States)

    Presoly, P.; Six, J.; Bernhard, C.

    2016-03-01

    Reliable thermodynamic data are essential information required for the design of new steel types and are a prerequisite to effective process optimization and simulation. Moreover, it is important to know the exact temperatures at which the high-temperature phase transformations (TLiquid, TSolid, TPerit, Tγ→δ) occur in order to describe the solidification sequence and to describe further processing parameters. By utilizing DTA/DSC measurements, our earlier experimental studies of selected commercial DP, TRIP and high-Mn TWIP steels have indicated that currently commercially available databases can often not be utilised to reliably describe the behaviour and microstructural development in such complex alloy systems. Because of these ostensible deficiencies, an experimental study was undertaken in an attempt to determine the pertaining thermodynamic data to analyse the behaviour of the important five-component Fe-C-Si-Mn-Al alloy system. High purity model alloys with systematic alloy variations were prepared and utilized in order to determine the influence of individual alloying elements in this complex, but industrially important alloy system. The present study provides new validated experimental thermodynamic data and analysis of the five-component Fe-C-Si-Mn-Al system, which will allow the construction of new phase diagrams, prediction of solidification sequences and the assessment of micro-segregation.

  20. FTIR, XRD and DSC studies of nanochitosan, cellulose acetate and polyethylene glycol blend ultrafiltration membranes.

    Science.gov (United States)

    Vinodhini, P Angelin; K, Sangeetha; Thandapani, Gomathi; P N, Sudha; Jayachandran, Venkatesan; Sukumaran, Anil

    2017-11-01

    In the present work, a series of novel nanochitosan/cellulose acetate/polyethylene glycol (NCS/CA/PEG) blend flat-sheet membranes were fabricated in different ratios (1:1:1, 1:1:2, 2:1:1, 2:1:2, 1:2:1, 2:2:1) in a polar solvent, N,N'-dimethylformamide (DMF), using the most popular phase inversion method. Nanochitosan was prepared by the ionotropic gelation method and its average particle size was analyzed using the Dynamic Light Scattering (DLS) method. The effect of blending the three polymers was investigated using FTIR and XRD studies. The FTIR results confirmed the formation of well-blended membranes, and the XRD analysis revealed an enhanced amorphous nature for the membrane ratio 2:1:2. A DSC study was conducted to determine the thermal behavior of the blend membranes, and the results clearly indicated good thermal stability and a single glass transition temperature (Tg) for all the prepared membranes. Asymmetric nature and rough surface morphology were confirmed using SEM analysis. From the results it was evident that blending the polymers with a higher concentration of nanochitosan can alter the nature of the resulting membranes to a greater extent, and thus amorphous membranes were obtained with good miscibility and compatibility. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Melting and thermal history of poly(hydroxybutyrate-co-hydroxyvalerate) using step-scan DSC

    International Nuclear Information System (INIS)

    Gunaratne, L.M.W.K.; Shanks, R.A.

    2005-01-01

    Melting behaviour and crystal morphology of poly(3-hydroxybutyrate) (PHB) and its copolymers of poly(3-hydroxybutyrate-co-3-hydroxyvalerate) with various hydroxyvalerate (HV) contents [5 wt.% (PHB5HV), 8 wt.% (PHB8HV) and 12 wt.% (PHB12HV)] have been investigated by conventional DSC, step-scan differential scanning calorimetry (SDSC) and hot-stage polarised optical microscopy (HSPOM). The crystallisation behaviour of PHB and its copolymers was investigated by SDSC. Thermal properties were investigated after different crystallisation treatments: fast, medium and slow cooling. Multiple melting peak behaviour was observed for all polymers. SDSC data revealed that PHB and its copolymers undergo melting-recrystallisation-remelting during heating, as evidenced by exothermic peaks in the IsoK baseline (non-reversing signal). An increase in the degree of crystallinity due to significant melt-recrystallisation was observed for slow-cooled copolymers. PHB5HV showed different crystal morphologies for the various crystallisation conditions. SDSC proved a convenient and precise method for measurement of the apparent thermodynamic specific heat (reversing signal). HSPOM results showed that the crystallisation rates and sizes of spherulites were significantly reduced as the cooling rate increased.

  2. DSC and X-ray diffraction investigations of phase transitions in HxBABA and NBABA

    International Nuclear Information System (INIS)

    Usha Deniz, K.; Paranjpe, A.S.; Mirza, E.B.; Parvathanathan, P.S.; Patel, K.S.

    1979-01-01

    The phase transitions and the heats of transformation of the hexyl (HxBABA) and nonyl (NBABA) members of the series of compounds p-n-Alkoxybenzylidene-p-Aminobenzoic Acids have been studied by DSC in the temperature range -100 °C to 300 °C. A scheme of transitions has been proposed for each of the compounds. X-ray diffraction measurements have been made in the smectic C (SC) and nematic (N) phases of these materials. The results reveal that (1) the SC phase in both compounds is of the C1-type, (2) SC-type order is seen throughout the nematic phase in HxBABA, whereas in NBABA it is seen only in the neighbourhood of the SC-N transition, (3) the temperature dependence of the smectic layer thickness, d, and of the directly measured tilt angle, θ(t,d), faithfully reflects the strength of the first-order SC-N transition, and (4) there is a marked difference between the values and the temperature variations of θ(t,d) and θ(t,c) (the tilt angle calculated from d), which is not completely understood at present.

  3. Complex Heat Capacity of Lithium Borate Glasses Studied by Modulated DSC

    International Nuclear Information System (INIS)

    Matsuda, Yu; Ike, Yuji; Matsui, Chihiro; Kodama, Masao; Kojima, Seiji

    2006-01-01

    Complex heat capacity, Cp* = Cp' - iCp'', of lithium borate glasses xLi2O·(1-x)B2O3 (x = 0.00-0.33) has been investigated by modulated DSC (MDSC). We have successfully observed the frequency-dependent Cp* by MDSC in the frequency range 0.01 to 0.1 Hz, and the average relaxation time of the glass transition has been determined as a function of temperature. Moreover, the composition dependence of the thermal properties has been investigated. The calorimetric glass transition temperatures become higher with increasing Li2O concentration and show a broad maximum around x = 0.26-0.28. The width of the glass transition region becomes narrower as the Li2O content increases. These results relate to the change in the fragility of the system. It has been proven that complex heat capacity spectroscopy by MDSC is a powerful tool to investigate glass transition phenomena.
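
    A worked equation may make the frequency analysis concrete: the abstract states only that an average relaxation time was extracted from Cp*(ω), so the single-relaxation-time (Debye) decomposition below, written in LaTeX, is offered as an illustrative assumption rather than the authors' actual fitting model.

    % Illustrative Debye (single relaxation time) decomposition of the
    % complex heat capacity; the authors' fitting model may differ.
    C_p^*(\omega) = C_p'(\omega) - i\,C_p''(\omega), \qquad
    C_p'(\omega) = C_{p,\infty} + \frac{C_{p,0}-C_{p,\infty}}{1+\omega^{2}\tau^{2}}, \qquad
    C_p''(\omega) = \frac{(C_{p,0}-C_{p,\infty})\,\omega\tau}{1+\omega^{2}\tau^{2}}
    % Since C_p'' peaks at \omega\tau = 1, the temperature at which the loss
    % component is maximal at a fixed modulation frequency yields \tau(T).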

  4. Raman scattering and modulated-DSC experiments on Potassium Germanate glasses*

    Science.gov (United States)

    Wang, N.; Novita, D.; Boolchand, P.

    2006-03-01

    We have synthesized the titled glasses in the 0 < x < 0.15 range and examined them in Raman scattering and modulated-DSC (MDSC) experiments. Raman lineshapes observed in the present work are quite similar to those reported by Henderson and Wang^1. Preliminary MDSC experiments reveal glass transition temperatures, Tg(x), starting from a value of 570 °C at x = 0, decreasing to 508 °C near x = 0.06, and increasing thereafter almost linearly to 552 °C as x increases to 0.15. On the other hand, the non-reversing enthalpy associated with Tg provides evidence of a global minimum (reversibility window) in the 0.08 < x < 0.10 range. We regard glasses below the window as Floppy, while those in the reversibility window as representing the Intermediate Phase^2. The space filling nature of the Intermediate Phase is, independently, corroborated by trends in molar volumes, which show a broad global minimum in the 9-11% range. Identification of the three elastic phases provides a physical basis to understand the origin of the Germanate anomaly, and the electrical conductivity threshold when glasses become mechanically floppy. *Supported by NSF grant DMR 04-56472. ^1 G.S. Henderson and H.M. Wang, Eur. J. Mineral. 14, 733 (2002). ^2 P. Boolchand, G. Lucovsky, J.C. Phillips and M.F. Thorpe, Phil. Mag. 85, 3823 (2005).

  5. Two DSC Glass Transitions in Miscible Blends of Polyisoprene / Poly(4-tert-butyl styrene)

    Science.gov (United States)

    Zhao, Junshu; Sun, Ye; Yu, Lian; Ediger, Mark

    2009-03-01

    Conventional and temperature-modulated differential scanning calorimetry experiments have been carried out on miscible blends of polyisoprene (PI) and poly(4-tert-butyl styrene) (P4tBS) over a broad composition range. This system is characterized by an extraordinarily large component Tg difference (˜215 K) between the two homopolymers. Two distinct calorimetric Tgs were observed in blends within an intermediate composition range (25%˜50% PI) by both conventional and temperature-modulated DSC. Good agreement was found between the Tg values measured by the two methods. Fitting of the measured Tgs to the Lodge-McLeish model gives a φself of 0.62˜0.64 for PI in this blend and 0.02˜0.05 for P4tBS. The extracted φself for PI is comparable to reported values for PEO in blends with PMMA and is significantly larger than those reported for other PI blends with smaller component Tg differences. This observation suggests the presence of a confinement effect in PI/P4tBS blends, which results in enhanced fast-component dynamics below the effective Tg of the slow component.

  6. Application of Differential Scanning Calorimetry (DSC) in study of phase transformations in ductile iron

    Directory of Open Access Journals (Sweden)

    R. Przeliorz

    2010-04-01

    Full Text Available The effect of heating rate on phase transformations into the austenite range in ductile iron of the EN-GJS-450-10 grade was investigated. For the studies of phase transformations, the technique of differential scanning calorimetry (DSC) was used. The microstructure was examined by optical microscopy. The calorimetric examinations proved that on heating three transformations occur in this grade of ductile iron, viz. the magnetic transformation at the Curie temperature, the pearlite→austenite transformation and the ferrite→austenite transformation. An increase in the heating rate shifts the pearlite→austenite and ferrite→austenite transformations to a higher temperature range. At heating rates of 5 and 15°C/min, local extrema were observed: for the pearlite→austenite transformation at 784°C and 795°C, respectively, and for the ferrite→austenite transformation at 805°C and 821°C, respectively. The Curie temperature of the magnetic transformation was extrapolated to a value of 740°C. Each transformation is associated with a specific thermal effect. The highest enthalpy accompanies the ferrite→austenite transformation, the lowest the pearlite→austenite transformation.

  7. TG-FTIR, DSC and quantum chemical studies of the thermal decomposition of quaternary methylammonium halides

    International Nuclear Information System (INIS)

    Sawicka, Marlena; Storoniak, Piotr; Skurski, Piotr; Blazejowski, Jerzy; Rak, Janusz

    2006-01-01

    The thermal decomposition of quaternary methylammonium halides was studied using thermogravimetry coupled to FTIR (TG-FTIR) and differential scanning calorimetry (DSC), as well as the DFT, MP2 and G2 quantum chemical methods. There is almost perfect agreement between the experimental IR spectra and those predicted at the B3LYP/6-311G(d,p) level: this has demonstrated for the first time that an equimolar mixture of trimethylamine and a methyl halide is produced as a result of decomposition. The experimental enthalpies of dissociation are 153.4, 171.2, and 186.7 kJ/mol for the chloride, bromide and iodide, respectively, values that correlate well with enthalpies of dissociation calculated from crystal lattice energies and quantum chemical thermodynamic barriers. The experimental activation barriers estimated from the least-squares fit of the F1 kinetic model (first-order process) to the thermogravimetric traces - 283, 244 and 204 kJ/mol for the chloride, bromide and iodide, respectively - agree very well with the theoretically calculated values. The theoretical approach adopted in this work has been shown to be capable of predicting the relevant characteristics of the thermal decomposition of solids with experimental accuracy.

  8. A DSC study of deterioration caused by environmental chemical pollutants to parchment, a collagen-based material

    International Nuclear Information System (INIS)

    Budrugeac, Petru; Badea, Elena; Gatta, Giuseppe Della; Miu, Lucretia; Comanescu, Alina

    2010-01-01

    A DSC study of new parchments exposed at 25 °C for 1-16 weeks to controlled atmospheres containing 50 ppm of gaseous chemical pollutants (NO2, SO2, NO2 + SO2) and 50% relative humidity (RH) was performed. Samples were exposed to chemical pollutants alone, as well as after previous heating at 100 °C for 2-16 days and/or irradiation with visible light (1.7 × 10^5 lx) for 4-16 h. DSC measurements were performed both in sealed crucibles in a static air atmosphere at 25-200 °C and in open crucibles under gas flow (nitrogen, oxygen, synthetic air) at 25-280 °C. Analysis of the DSC curves provided the variation induced by ageing in the thermodynamic parameters associated with both parchment denaturation and softening of the collagen crystalline fraction. All the ageing procedures decreased both the temperature and the enthalpy of denaturation and increased the broadness of the DSC peak as a function of ageing time. The occurrence of thermal oxidation peaks and/or lower-temperature endothermic peaks was observed. The temperature of the first softening peak always showed a general tendency to decrease as a function of ageing time. The shrinkage temperature of collagen fibres measured by thermomicroscopy also decreased as a result of the accelerated ageing treatments.

  9. Cerebral perfusion alterations in epileptic patients during peri-ictal and post-ictal phase: PASL vs DSC-MRI.

    Science.gov (United States)

    Pizzini, Francesca B; Farace, Paolo; Manganotti, Paolo; Zoccatelli, Giada; Bongiovanni, Luigi G; Golay, Xavier; Beltramello, Alberto; Osculati, Antonio; Bertini, Giuseppe; Fabene, Paolo F

    2013-07-01

    Non-invasive pulsed arterial spin labeling (PASL) MRI is a method to study brain perfusion that does not require the administration of a contrast agent, which makes it a valuable diagnostic tool as it reduces cost and side effects. The purpose of the present study was to establish the viability of PASL as an alternative to dynamic susceptibility contrast (DSC-MRI) and other perfusion imaging methods in characterizing changes in perfusion patterns caused by seizures in epileptic patients. We evaluated 19 patients with PASL. Of these, the 9 affected by high-frequency seizures were observed during the peri-ictal period (within 5 hours of the last seizure), while the 10 patients affected by low-frequency seizures were observed in the post-ictal period. For comparison, 17/19 patients were also evaluated with DSC-MRI and CBF/CBV. PASL imaging showed focal vascular changes, which allowed the classification of patients into three categories: 8 patients characterized by increased perfusion, 4 patients with normal perfusion and 7 patients with decreased perfusion. PASL perfusion imaging findings were comparable to those obtained by DSC-MRI. Since PASL is (a) sensitive to vascular alterations induced by epileptic seizures, (b) comparable to DSC-MRI for detecting perfusion asymmetries, and (c) potentially capable of detecting time-related perfusion changes, it can be recommended for repeated evaluations, to identify the epileptic focus, and in follow-up and/or therapy-response assessment. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. A DSC study of deterioration caused by environmental chemical pollutants to parchment, a collagen-based material

    Energy Technology Data Exchange (ETDEWEB)

    Budrugeac, Petru [National R and D Institute for Electrical Engineering, INCDIE-ICPE-CA, Splaiul Unirii 313, 030138 Bucharest (Romania); Badea, Elena, E-mail: elena.badea@unito.it [Department of Chemistry IFM, University of Turin, via Pietro Giuria 9, 10125 Torino (Italy); Gatta, Giuseppe Della [Department of Chemistry IFM, University of Turin, via Pietro Giuria 9, 10125 Torino (Italy); Miu, Lucretia [National R and D Institute for Textile and Leather-Div. Leather and Footwear, INCDTP-ICPI, str Ion Minulescu 93, 031215 Bucharest (Romania); Comanescu, Alina [National R and D Institute for Electrical Engineering, INCDIE-ICPE-CA, Splaiul Unirii 313, 030138 Bucharest (Romania)

    2010-03-10

    A DSC study of new parchments exposed at 25 °C for 1-16 weeks to controlled atmospheres containing 50 ppm of gaseous chemical pollutants (NO2, SO2, NO2 + SO2) and 50% relative humidity (RH) was performed. Samples were exposed to chemical pollutants alone, as well as after previous heating at 100 °C for 2-16 days and/or irradiation with visible light (1.7 × 10^5 lx) for 4-16 h. DSC measurements were performed both in sealed crucibles in a static air atmosphere at 25-200 °C and in open crucibles under gas flow (nitrogen, oxygen, synthetic air) at 25-280 °C. Analysis of the DSC curves provided the variation induced by ageing in the thermodynamic parameters associated with both parchment denaturation and softening of the collagen crystalline fraction. All the ageing procedures decreased both the temperature and the enthalpy of denaturation and increased the broadness of the DSC peak as a function of ageing time. The occurrence of thermal oxidation peaks and/or lower-temperature endothermic peaks was observed. The temperature of the first softening peak always showed a general tendency to decrease as a function of ageing time. The shrinkage temperature of collagen fibres measured by thermomicroscopy also decreased as a result of the accelerated ageing treatments.

  11. Psychometric Evaluation of the Diabetes Symptom Checklist-Revised (DSC-R)-A Measure of Symptom Distress

    NARCIS (Netherlands)

    Arbuckle, R.A.; Humphrey, L.; Vardeva, K.; Arondekar, B.; Scott, J.A.; Snoek, F.J.

    2009-01-01

    Objective: To assess the psychometric validity, reliability, responsiveness, and minimal important differences of the Diabetes Symptoms Checklist-Revised (DSC-R), a widely used patient-reported outcome measure of diabetes symptom distress. Research Design and Methods: Psychometric validity of the

  12. Systematic investigation of lard polymorphism using combined DSC and time-resolved synchrotron X-ray diffraction

    NARCIS (Netherlands)

    Kalnin, D.J.E.; Lesieur, P.; Artzner, F.; Keller, G.; Ollivon, M.

    2005-01-01

    The polymorphic behavior of lard was systematically investigated by differential scanning calorimetry (DSC) while simultaneously monitoring the formation of the different crystal forms with X-ray diffraction (XRDT). To interpret the complex polymorphic evolution of the sample analyzed by regular

  13. Evaluation of the physical stability and local crystallization of amorphous terfenadine using XRD-DSC and micro-TA

    International Nuclear Information System (INIS)

    Yonemochi, Etsuo; Hoshino, Takafumi; Yoshihashi, Yasuo; Terada, Katsuhide

    2005-01-01

    It is very difficult to follow rapid changes in polymorphic transformation and crystallization and to identify the species recrystallized from the amorphous form. The aim of this study was to clarify the structural changes of amorphous terfenadine and to evaluate the polymorphs crystallized from amorphous samples using XRD-DSC and an atomic force microscope with a thermal probe (micro-TA). Amorphous samples were prepared by grinding or by rapid cooling of the melt. The rapid structural transitions of the samples were followed by the XRD-DSC system. In the DSC trace of the quenched terfenadine, two exotherms were observed, while only one exothermic peak was observed in the DSC scan of the ground sample. From the in situ data obtained by the XRD-DSC system, the stable form of terfenadine was recrystallized during heating of the ground amorphous sample, whereas the metastable form was recrystallized from the quenched amorphous sample and the crystallized polymorph then changed to the stable form. The data suggested that the recrystallized species could be related to the homogeneity of the samples. When the stored sample surface was scanned by atomic force microscopy (AFM), heterogeneous crystallization was observed. By using micro-TA, melting temperatures at various points were measured, and polymorph forms I and II were found to have crystallized in separate regions. The percentages of crystallized form I after storage at 120 and 135 °C were 47 and 79%, respectively. This result suggested that increasing the storage temperature increased the crystallization of form I, the stable form, confirming the temperature dependence of the crystallized form. The crystallization behavior of the amorphous drug was affected by the annealing temperature. Micro-TA would be useful for detecting inhomogeneities in polymorphs crystallized from amorphous drugs.

  14. Fulcrum Network Codes

    DEFF Research Database (Denmark)

    2015-01-01

    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can trade off computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof.
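
    To make objective (ii), low field size operations at intermediate nodes, concrete, the sketch below implements plain random linear network coding over GF(2), the binary field in which Fulcrum's in-network operations take place. It is a minimal illustration under that assumption, not the Fulcrum construction itself (the high-field expansion and its linear mapping are omitted), and all names and values are invented for the example.

    import random

    def gf2_encode(packets):
        """One coded packet: a random XOR combination of the source packets.
        All arithmetic is in GF(2), i.e. bitwise XOR -- the cheap in-network
        operation the abstract refers to."""
        k = len(packets)
        coeffs = [random.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            coeffs[random.randrange(k)] = 1      # skip the useless all-zero row
        coded = bytearray(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                coded = bytearray(a ^ b for a, b in zip(coded, p))
        return coeffs, bytes(coded)

    def gf2_decode(received, k):
        """Gaussian elimination over GF(2); assumes the received coded
        packets span all k dimensions (rank k)."""
        rows = [(list(c), bytearray(p)) for c, p in received]
        for col in range(k):
            piv = next(r for r in range(col, len(rows)) if rows[r][0][col])
            rows[col], rows[piv] = rows[piv], rows[col]
            for r in range(len(rows)):
                if r != col and rows[r][0][col]:
                    rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                               bytearray(x ^ y for x, y in zip(rows[r][1], rows[col][1])))
        return [bytes(rows[i][1]) for i in range(k)]

    source = [bytes([i] * 16) for i in range(4)]
    while True:
        received = [gf2_encode(source) for _ in range(6)]   # extra packets guard against rank loss
        try:
            decoded = gf2_decode(received, 4)
            break
        except StopIteration:                               # rare rank-deficient draw; resample
            pass
    assert decoded == source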

  15. Thermal analysis study of captopril coated tablets by thermogravimetry (TG) and differential scanning calorimetry (DSC)

    Directory of Open Access Journals (Sweden)

    Giovana Carolina Bazzo

    2005-09-01

    Full Text Available In the present study, captopril tablets coated with hydroxypropylmethylcellulose (HPMC, Opadry®), polyvinylpyrrolidone (PVP), Eudragit® E and shellac were produced. Differential scanning calorimetry (DSC) and thermogravimetry (TG) were used to evaluate the thermal properties of the drug and the formulations. On the basis of the DSC results, captopril was found to be compatible with lactose, microcrystalline cellulose, sodium croscarmellose, Aerosil® and talc, the excipients used in the tablet formulation. Some possibility of interaction between captopril and magnesium stearate was detected; however, additional techniques are needed to confirm this result. According to the DSC results, no changes in the crystallinity of the drug were observed as a consequence of the compression and coating processes. Thermogravimetry was used to study the degradation kinetics of captopril and of the tablets, and the kinetic parameters were determined by the Ozawa method. The results showed that the thermal stability of captopril was not altered in tablet form. The formulation coated with HPMC showed the highest thermal stability in comparison with the other coating formulations.

  16. Lattice Index Coding

    OpenAIRE

    Natarajan, Lakshmi; Hong, Yi; Viterbo, Emanuele

    2014-01-01

    The index coding problem involves a sender with K messages to be transmitted across a broadcast channel, and a set of receivers each of which demands a subset of the K messages while having prior knowledge of a different subset as side information. We consider the specific case of noisy index coding where the broadcast channel is Gaussian and every receiver demands all the messages from the source. Instances of this communication problem arise in wireless relay networks, sensor networks, and ...
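
    A toy, noiseless version of the same problem may help: with K = 3 binary messages and receivers that each already know K - 1 of them, a single XOR broadcast replaces three separate transmissions. The paper's setting (Gaussian broadcast channel, lattice codes) is far richer; the sketch below, with invented values, only shows how side information cuts the broadcast cost.

    # Toy index coding: every receiver wants all K = 3 messages and already
    # holds 2 of them as side information, so one XOR packet suffices.
    msgs = {1: 0b1011, 2: 0b0110, 3: 0b1100}   # three 4-bit messages (made up)
    broadcast = msgs[1] ^ msgs[2] ^ msgs[3]    # the single coded transmission

    def recover(side_info):
        """XOR the known messages out of the broadcast to get the missing one."""
        missing = ({1, 2, 3} - side_info.keys()).pop()
        value = broadcast
        for v in side_info.values():
            value ^= v
        return {**side_info, missing: value}

    assert recover({2: msgs[2], 3: msgs[3]}) == msgs   # receiver 1 recovers message 1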

  17. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  18. Gravity inversion code

    International Nuclear Information System (INIS)

    Burkhard, N.R.

    1979-01-01

    The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
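
    The "stabilized linear inverse theory" applied by INVERT is, in essence, regularized least squares. The NumPy sketch below shows the generic Tikhonov-stabilized solve such a step relies on; the matrix, data, and damping value are invented for illustration and are not taken from the actual SEARCH/TREND/INVERT/AVERAGE sources.

    import numpy as np

    def stabilized_solve(G, d, alpha=1e-2):
        """Tikhonov-regularized least squares:
        minimize ||G m - d||^2 + alpha * ||m||^2.
        The damping term 'stabilizes' the inversion against noise, which is
        the generic idea behind stabilized linear inverse theory."""
        n = G.shape[1]
        return np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ d)

    # Invented toy problem: recover a smooth subsurface-topography vector m
    # from noisy linear observations d = G m + noise.
    rng = np.random.default_rng(0)
    G = rng.normal(size=(40, 10))
    m_true = np.sin(np.linspace(0.0, np.pi, 10))
    d = G @ m_true + 0.05 * rng.normal(size=40)
    m_est = stabilized_solve(G, d)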

  19. Improving a variation of the DSC technique for measuring the boiling points of pure compounds at low pressures

    International Nuclear Information System (INIS)

    Troni, Kelly L.; Damaceno, Daniela S.; Ceriani, Roberta

    2016-01-01

    Highlights: • Improvement of a variation of the DSC technique for boiling points at low pressures. • Use of a ballpoint pen ball over the pinhole of the DSC crucible. • Effects of configuration variables of the DSC technique accounted for by factorial design. • An optimized region was obtained and tested for selected compounds. - Abstract: This study aims to improve a variation of the differential scanning calorimetry (DSC) technique for measuring boiling points of pure compounds at low pressures. Using a well-known n-paraffin (n-hexadecane), experimental boiling points at a pressure of 3.47 kPa with u(P) = 0.07 kPa were obtained by using a variation of the DSC technique, which consists of placing samples inside hermetically sealed aluminum crucibles, with a pinhole (diameter of 0.8 mm) made in the lid and a tungsten carbide ball with a diameter of 1.0 mm over it. Experiments were configured at nine different combinations of heating rates (K·min^-1) and sample sizes (mg) following a full factorial design (2^2 trials plus a star configuration and three central points). Individual and combined effects of these two independent variables on the difference between experimental and estimated boiling points (NIST Thermo Data Engine v. 5.0 - Aspen Plus v. 8.4) were investigated. The results obtained in this work reveal that although both factors individually affect the accuracy of this variation of the DSC technique, the effect of heating rate is the most important. An optimized region of combinations of heating rate and sample size for determining boiling points of pure compounds at low pressures was obtained using the response-surface methodology (RSM). Within this optimized region, a selected condition, combining a heating rate of 24.52 K·min^-1 and a sample size of (4.6 ± 0.5) mg, was tested for six different compounds (92.094-302.37 g·mol^-1) comprising four fatty compounds (tributyrin, monocaprylin, octanoic acid and 1-octadecanol), glycerol and n
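
    The experimental design described above, a 2^2 factorial plus star and center points, is a central composite design, and the response-surface step amounts to fitting a second-order polynomial by least squares. The sketch below shows that fit; the coded design levels follow the standard CCD layout, while the response values are placeholders, not the paper's data.

    import numpy as np

    # Central composite design in coded units: 2^2 factorial + star + center.
    design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],               # factorial
                       [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414], # star points
                       [0, 0], [0, 0], [0, 0]])                          # center replicates
    y = np.random.default_rng(1).normal(0.5, 0.1, len(design))           # placeholder responses

    h, s = design[:, 0], design[:, 1]          # coded heating rate and sample size
    X = np.column_stack([np.ones_like(h), h, s, h * s, h**2, s**2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Fitted surface: y ~ b0 + b1*h + b2*s + b12*h*s + b11*h^2 + b22*s^2,
    # whose optimal region is what the RSM analysis reports.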

  20. Software testing and source code for the calculation of clearance values. Final report; Erprobung von Software und Quellcode zur Berechnung von Freigabewerten. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Artmann, Andreas; Meyering, Henrich

    2016-11-15

    The GRS research project was aimed at testing the appropriateness of the software package ''residual radioactivity'' (RESRAD) for the calculation of clearance values according to German and European regulations. Comparative evaluations were performed with RESRAD-OFFSITE, the code SiWa-PRO DSS used by GRS, and the GRS program code ARTM. It is recommended to use RESRAD-OFFSITE for comparative calculations. The dose-relevant air-path dispersion of radionuclides should not be modeled using RESRAD-OFFSITE; the use of ARTM is recommended instead. The sensitivity analysis integrated into RESRAD-OFFSITE allows fast identification of crucial parameters.

  1. GAMMA-CLOUD: a computer code for calculating gamma-exposure due to a radioactive cloud released from a point source

    Energy Technology Data Exchange (ETDEWEB)

    Sugimoto, O [Chugoku Electric Power Co. Inc., Hiroshima (Japan); Sawaguchi, Y; Kaneko, M

    1979-03-01

    A computer code, designated GAMMA-CLOUD, has been developed by specialists of electric power companies to meet requests from the companies to have a unified means of calculating annual external doses from routine releases of radioactive gaseous effluents from nuclear power plants, based on the Japan Atomic Energy Commission's guides for environmental dose evaluation. GAMMA-CLOUD is written in the FORTRAN language and requires less than 100 kilobytes of memory. The average gamma-exposure at an observation point can be calculated within a few minutes with precision comparable to that of other existing codes.
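
    GAMMA-CLOUD's own dispersion and dose-integration model is not given in the abstract; as background, the sketch below implements the textbook Gaussian plume concentration for a continuous point release, the standard starting point for environmental dose codes of this kind. Every symbol and value here is a generic assumption, not GAMMA-CLOUD's implementation.

    import math

    def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
        """Textbook Gaussian plume concentration (e.g. Bq/m^3) at crosswind
        offset y and height z, for a point source of strength q (Bq/s),
        wind speed u (m/s) and effective release height h (m); sigma_y and
        sigma_z are the dispersion parameters at the downwind distance of
        interest. Includes the usual ground-reflection image term."""
        lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
        vertical = (math.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                    + math.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
        return q * lateral * vertical / (2.0 * math.pi * u * sigma_y * sigma_z)

    # Example with invented numbers: ground-level concentration on the plume axis.
    chi = plume_concentration(q=1.0e6, u=3.0, y=0.0, z=0.0, h=50.0,
                              sigma_y=80.0, sigma_z=40.0)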

  2. Introduction to coding and information theory

    CERN Document Server

    Roman, Steven

    1997-01-01

    This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.
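
    The entropy and Noiseless Coding Theorem material lends itself to a small worked example. The sketch below computes the entropy of a finite source and the average length of a binary Huffman code; for the dyadic distribution chosen here the two coincide, which is exactly the boundary case of Shannon's theorem. This is an illustration of the book's topics, not material taken from the book.

    import heapq
    import math

    def entropy(p):
        """Shannon entropy in bits: the noiseless-coding lower bound on the
        average codeword length of a uniquely decodable code."""
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)

    def huffman_lengths(p):
        """Codeword lengths of an optimal binary (Huffman) code for p."""
        heap = [(pi, [i]) for i, pi in enumerate(p)]
        heapq.heapify(heap)
        lengths = [0] * len(p)
        while len(heap) > 1:
            p1, s1 = heapq.heappop(heap)
            p2, s2 = heapq.heappop(heap)
            for i in s1 + s2:
                lengths[i] += 1            # every merge adds one bit to members
            heapq.heappush(heap, (p1 + p2, s1 + s2))
        return lengths

    p = [0.5, 0.25, 0.125, 0.125]          # dyadic source probabilities
    avg_len = sum(pi * li for pi, li in zip(p, huffman_lengths(p)))
    assert abs(avg_len - entropy(p)) < 1e-12   # 1.75 bits: Huffman meets entropy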

  3. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  4. Comparing the co-evolution of production and test code in open source and industrial developer test processes through repository mining

    NARCIS (Netherlands)

    Van Rompaey, B.; Zaidman, A.E.; Van Deursen, A.; Demeyer, S.

    2008-01-01

    This paper represents an extension to our previous work: Mining software repositories to study coevolution of production & test code. Proceedings of the International Conference on Software Testing, Verification, and Validation (ICST), IEEE Computer Society, 2008; doi:10.1109/ICST.2008.47

  5. AC susceptibility, XRD and DSC studies of Sm1-xGdxMn2Si2 silicides

    International Nuclear Information System (INIS)

    Kervan, S.; Kilic, A.; Gencer, A.

    2004-01-01

    X-ray powder diffraction, AC susceptibility and differential scanning calorimetry (DSC) studies were performed on the polycrystalline Sm1-xGdxMn2Si2 (0 ≤ x ≤ 1) compounds. All the compounds investigated crystallize in the body-centered tetragonal ThCr2Si2-type structure with the space group I4/mmm. Substitution of Gd for Sm leads to a linear decrease of the lattice constants and the unit cell volume, which obey Vegard's law. At low temperatures, the rare earth sublattice orders and reconfigures the ordering in the Mn sublattice. The samples with x = 0.6 and 0.8 exhibit a spin reorientation phenomenon. The Neel temperature TN(Mn) determined by the DSC technique and the Curie temperature TC(RE) increase linearly with increasing Gd content x. The results are summarized in the x-T magnetic phase diagram.

  6. Rheological and DSC study of sol-gel transition in aqueous dispersions of industrially important polymers and colloids

    Energy Technology Data Exchange (ETDEWEB)

    Nishinari, K. [Osaka City Univ. (Japan). Dept. of Food and Nutrition]

    1997-12-01

    Gelation kinetics, mechanical spectra, thermal scanning rheology (TSR), and differential scanning calorimetry (DSC) in aqueous solutions of gelling polymers and colloids such as seaweed polysaccharides (agarose, carrageenans), microbial polysaccharides (gellan, curdlan), plant polysaccharides (methylcellulose), globular proteins (casein, glycinin, β-conglycinin), fibrous proteins (gelatin, fibrin), and polyvinyl alcohol, which are related to foods, cosmetics, and biomedical and pharmaceutical applications, are described. Some gelation processes at a constant temperature have been treated successfully by an equation of first-order kinetics or by other modified equations, and the molecular mechanism of gel formation is discussed briefly. For water-soluble polymers, the criterion of gel or sol based on the frequency dependence of the storage and loss moduli gives valuable information. TSR and DSC are complementary, and the combination of these methods has proved to be useful. (orig.) 81 refs.

  7. The effect of the type of preservation of pufferfish (Arothon reticularis) skin on shrinkage temperature studied by differential scanning calorimeter (DSC) analysis

    Directory of Open Access Journals (Sweden)

    RLM. Satrio Ari Wibowo

    2015-12-01

    Full Text Available The aim of this study was to determine the effect of the type of skin preservation on the shrinkage temperature of leather. The material used in this study was the skin of pufferfish (Arothon reticularis) preserved by salting, formaldehyde and pickling, with raw skin as a reference. The shrinkage temperature was measured by thermal analysis using a Differential Scanning Calorimeter (DSC) operated from 4°C up to 440°C under a nitrogen stream. The DSC measurements showed that the shrinkage temperature of pufferfish skin preserved with formaldehyde was higher than that of the salted and pickled skins: 63.64°C, 47.95°C and 57.37°C, respectively. The advantage of using formaldehyde over the other preservation techniques is that it not only protects the skin from damage by microorganisms, but also creates a bond with the collagen.

  8. DSC studies of retrogradation and amylose-lipid transition taking place in gamma-irradiated wheat starch

    International Nuclear Information System (INIS)

    Ciesla, K.; Gluszewski, W.; Eliasson, A.C.

    2006-01-01

    It has already been shown that degradation resulting from gamma irradiation decreases the order of starch granules and influences the gelatinisation taking place during heating of starch and flour suspensions. In the present paper, DSC (differential scanning calorimetry) studies were carried out on wheat starch, non-irradiated and irradiated with doses in the range from 5 to 30 kGy. The influence of the conditions applied during the DSC measurements on the possibility of observing differences in the amylose-lipid complex transition and retrogradation between the non-irradiated and, particularly, the irradiated starch samples was checked. Better differentiation between the amylose-lipid complex transitions of the particular samples, accompanied by better reproducibility, was obtained for dense suspensions compared to watery suspensions, as well as during the first analysis performed on the recrystallised gels.

  9. Longitudinal DSC-MRI for Distinguishing Tumor Recurrence From Pseudoprogression in Patients With a High-grade Glioma.

    Science.gov (United States)

    Boxerman, Jerrold L; Ellingson, Benjamin M; Jeyapalan, Suriya; Elinzano, Heinrich; Harris, Robert J; Rogg, Jeffrey M; Pope, Whitney B; Safran, Howard

    2017-06-01

    For patients with high-grade glioma on clinical trials it is important to accurately assess time of disease progression. However, differentiation between pseudoprogression (PsP) and progressive disease (PD) is unreliable with standard magnetic resonance imaging (MRI) techniques. Dynamic susceptibility contrast perfusion MRI (DSC-MRI) can measure relative cerebral blood volume (rCBV) and may help distinguish PsP from PD. A subset of patients with high-grade glioma on a phase II clinical trial with temozolomide, paclitaxel poliglumex, and concurrent radiation were assessed. Nine patients (3 grade III, 6 grade IV), with a total of 19 enhancing lesions demonstrating progressive enhancement (≥25% increase from nadir) on postchemoradiation conventional contrast-enhanced MRI, had serial DSC-MRI. Mean leakage-corrected rCBV within enhancing lesions was computed for all postchemoradiation time points. Of the 19 progressively enhancing lesions, 10 were classified as PsP and 9 as PD by biopsy/surgery or serial enhancement patterns during interval follow-up MRI. Mean rCBV at initial progressive enhancement did not differ significantly between PsP and PD (2.35 vs. 2.17; P=0.67). However, change in rCBV at first subsequent follow-up (-0.84 vs. 0.84; P=0.001) and the overall linear trend in rCBV after initial progressive enhancement (negative vs. positive slope; P=0.04) differed significantly between PsP and PD. Longitudinal trends in rCBV may be more useful than absolute rCBV in distinguishing PsP from PD in chemoradiation-treated high-grade gliomas with DSC-MRI. Further studies of DSC-MRI in high-grade glioma as a potential technique for distinguishing PsP from PD are indicated.
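
    The study's discriminating feature, the sign of the longitudinal rCBV trend, reduces to a simple slope fit. The sketch below states that rule in code; the threshold at zero slope mirrors the reported negative-versus-positive contrast, but it is an illustrative reading of the abstract, not a validated classifier from the paper.

    import numpy as np

    def rcbv_trend_call(times, rcbv):
        """Fit a line to serial mean rCBV after initial progressive enhancement;
        per the abstract's finding, a negative slope points to pseudoprogression
        (PsP) and a positive slope to progressive disease (PD)."""
        slope = np.polyfit(np.asarray(times, float), np.asarray(rcbv, float), 1)[0]
        return "PsP" if slope < 0 else "PD"

    # Invented example values, not patient data:
    print(rcbv_trend_call([0, 6, 12], [2.3, 1.6, 1.1]))   # -> "PsP"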

  10. Diclofenac Salts. V. Examples of Polymorphism among Diclofenac Salts with Alkyl-hydroxy Amines Studied by DSC and HSM

    OpenAIRE

    Fini, Adamo; Cavallari, Cristina; Ospitali, Francesca

    2010-01-01

    Nine diclofenac salts prepared with alkyl-hydroxy amines were analyzed for their properties to form polymorphs by DSC and HSM techniques. Thermograms of the forms prepared from water or acetone are different in most cases, suggesting frequent examples of polymorphism among these salts. Polymorph transition can be better highlighted when analysis is carried out by thermo-microscopy, which in most cases made it possible to observe the processes of melting of the metastable form and re-crystallization of the stable one.

  11. The Application of ATD and DSC Methods to the Study of the EN AC-48000 Alloy Phase Transformations

    Directory of Open Access Journals (Sweden)

    Piątkowski J.

    2017-06-01

    Full Text Available Tests concerning the EN AC-48000 (AlSi12CuNiMg) alloy phase transitions covered thermal analysis (ATD) and differential scanning calorimetry (DSC), specifying the characteristic temperatures and enthalpies of the transformations. The ATD thermal analysis shows that during cooling there occur: a pre-eutectic crystallization effect of the Al9Fe2Si phase, crystallization of the double eutectic α(Al)+β(Si), and multi-component eutectic crystallization. During heating, the DSC curve showed an endothermic effect connected with melting of the eutectic α(Al)+β(Si) and of the phases Al2Cu, Al3Ni, Mg2Si and Al9Fe2Si that are its components. The enthalpy of this transformation is approx. +392 J·g-1. During freezing of the alloy, the DSC curve showed two exothermal reactions. One is most likely connected with crystallization of the Al9Fe2Si phase and the second comes from freezing of the eutectic α(Al)+β(Si). The enthalpy of this transformation is approx. -340 J·g-1. The calorimetric tests were accompanied by structural examination (SEM) conducted with the use of a Reichert optical microscope and a Hitachi S-4200 scanning microscope. The microstructure contained dendrites of the α(Al) solution, eutectic silicon crystals (β) and two types of eutectic: the double eutectic α(Al)+β(Si) and the multi-component eutectic α+AlSiCuNiMg+β.

  12. Advances in simultaneous DSC-FTIR microspectroscopy for rapid solid-state chemical stability studies: some dipeptide drugs as examples.

    Science.gov (United States)

    Lin, Shan-Yang; Wang, Shun-Li

    2012-04-01

    The solid-state chemistry of drugs has seen growing importance in the pharmaceutical industry for the development of useful APIs (active pharmaceutical ingredients) and stable dosage forms. The stability of drugs in various solid dosage forms is an important issue because solid dosage forms are the most common pharmaceutical formulation in clinical use. In solid-state stability studies of drugs, an ideal accelerated method must not only be selected from among various complicated methods, but must also detect the formation of degradation products. In this review article, an analytical technique combining differential scanning calorimetry and Fourier-transform infrared (DSC-FTIR) microspectroscopy simulates the accelerated stability test and simultaneously detects the decomposition products in real time. The pharmaceutical dipeptides aspartame hemihydrate, lisinopril dihydrate, and enalapril maleate, either with or without Eudragit E, were used as test examples. This one-step simultaneous DSC-FTIR technique for real-time detection of diketopiperazine (DKP) directly evidenced the dehydration process and DKP formation, an impurity common in pharmaceutical dipeptides. Reports of DKP formation in various dipeptides determined by different analytical methods were collected and compiled. Although many analytical methods have been applied, the combined DSC-FTIR technique is an easy and fast analytical method which can not only simulate accelerated drug stability testing but also simultaneously explore phase transformations and degradation due to thermally related reactions. This technique offers quick and proper interpretations. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.
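
    The sum-rate saving that motivates both schemes rests on the classical Slepian-Wolf result for lossless distributed coding of two correlated sources, stated below in LaTeX; the paper works with its lossy quadratic Gaussian counterpart, so this region is background rather than the paper's exact rate region.

    % Slepian-Wolf achievable rate region for separate encoding and joint
    % decoding of correlated sources X and Y:
    R_X \ge H(X \mid Y), \qquad R_Y \ge H(Y \mid X), \qquad R_X + R_Y \ge H(X, Y)
    % versus R_X + R_Y \ge H(X) + H(Y) when the two sequences are coded
    % independently; the gap H(X) - H(X \mid Y) = I(X;Y) is the potential saving.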

  14. The Art of Readable Code

    CERN Document Server

    Boswell, Dustin

    2011-01-01

    As programmers, we've all seen source code that's so ugly and buggy it makes our brain ache. Over the past five years, authors Dustin Boswell and Trevor Foucher have analyzed hundreds of examples of "bad code" (much of it their own) to determine why they're bad and how they could be improved. Their conclusion? You need to write code that minimizes the time it would take someone else to understand it-even if that someone else is you. This book focuses on basic principles and practical techniques you can apply every time you write code. Using easy-to-digest code examples from different languag

  15. Responsibilities of the exporting state derived from the application of the code of conduct on the safety and security of radioactive sources and the guidance on the import and export

    International Nuclear Information System (INIS)

    Vidal, Dora

    2008-01-01

    Full text: 'The exporting state in deciding whether to authorize an export of radioactive sources should satisfy itself, insofar as practicable: 1) That the recipient is authorized by the importing state to receive and possess the source in accordance with its laws and regulations; 2) That the importing state has the appropriate technical and administrative capability, resources and regulatory structure needed for the management of the source(s) in a manner consistent with the guidance in the code, and consider, based upon available information: i) Whether the recipient has been engaged in clandestine or illegal procurement of radioactive sources; ii) Whether an import or export authorization for radioactive sources has been denied to the recipient or importing state, or whether the recipient or importing state has diverted for purposes inconsistent with the code any import or export of radioactive sources previously authorized; and iii) The risk of diversion or malicious acts involving radioactive sources'. Once it has decided to authorize the export, it should also take 'appropriate steps to ensure that the export is conducted in a manner consistent with existing relevant international standards relating to the transport of radioactive materials and that the importing State is notified in advance of each shipment'. The Guidance has made a great effort in setting out the requirements that the importing state has to fulfil, and it is the exporting state that has to verify, satisfy itself, and consider whether these requirements are in place. The responsibility of the exporting state is remarkable: it must analyse the export from the point of view of the capabilities of the importing state to manage the sources, given their intended use and even once they are no longer useful. This paper intends to prompt reflection on the responsibility of the exporting state in relation to those radioactive sources that are exported or are to be exported and have to fulfill with the

  16. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    Full Text Available As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  17. Reliability and code level

    NARCIS (Netherlands)

    Kasperski, M.; Geurts, C.P.W.

    2005-01-01

    The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the

  18. Investigation of the Effects of Tissue Inhomogeneities on the Dosimetric Parameters of a Cs-137 Brachytherapy Source using the MCNP4C Code

    Directory of Open Access Journals (Sweden)

    Mehdi Zehtabian

    2010-09-01

    Full Text Available Introduction: Brachytherapy is the use of small encapsulated radioactive sources in the close vicinity of tumors. Various methods are used to obtain the dose distribution around brachytherapy sources. TG-43 is a dosimetry protocol proposed by the AAPM for determining dose distributions around brachytherapy sources. The goal of this study is to update this protocol for the presence of bone and air inhomogeneities. Material and Methods: To update the dose rate constant parameter of the TG-43 formalism, MCNP4C simulations were performed in phantoms composed of water-bone and water-air combinations. The values of dose at different distances from the source in both homogeneous and inhomogeneous phantoms were estimated in spherical tally cells of 0.5 mm radius using the F6 tally. Results: The percentages of dose reduction in the presence of air and bone inhomogeneities for the Cs-137 source were found to be 4% and 10%, respectively. Therefore, the updated dose rate constant (Λ) will also decrease by the same percentages. Discussion and Conclusion: It can easily be concluded that such dose variations are more noticeable when using lower-energy sources such as Pd-103 or I-125.
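
    For reference, the dose rate constant Λ updated above enters the general two-dimensional dose-rate equation of the AAPM TG-43 formalism, reproduced below in LaTeX from the standard protocol; the inhomogeneity correction itself is the paper's contribution and is not shown here.

    % AAPM TG-43 two-dimensional dose-rate equation around a sealed source:
    \dot{D}(r,\theta) = S_K \, \Lambda \,
      \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)} \, g_L(r) \, F(r,\theta),
    \qquad r_0 = 1\ \mathrm{cm}, \quad \theta_0 = 90^{\circ}
    % S_K: air-kerma strength; \Lambda: dose rate constant; G_L: line-source
    % geometry function; g_L: radial dose function; F: anisotropy function.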

  19. Electronic Contracts and the Personal data Protection of the Consumer: Sources Dialogue Between the Consumer Protection Code and the Internet Civil Mark.

    Directory of Open Access Journals (Sweden)

    Rosane Leal Da Silva

    2016-10-01

    Full Text Available This paper analyzes the protection of consumers' personal data and their vulnerability in interactive electronic contracts, aiming to point out means of defense. To this end, it uses the deductive approach, starting from electronic contracting to discuss the legal protection of the consumer in light of the capture and processing of personal data by the supplier. Considering the absence of a specific law on personal data, it concludes that electronic contracting increases consumer vulnerability, which requires the application of the principles of the Consumer Protection Code, supplemented by the Internet Civil Mark with regard to privacy protection.

  20. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely on the basis of a binary decision. Hence the end-to-end performance of the digital link becomes essentially independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
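
    A minimal waveform-coding example may anchor the definition just given. The sketch below implements continuous mu-law companding, the compression law behind ITU-T G.711 telephony (G.711 additionally quantizes the companded value to 8 bits, which is omitted here); it represents speech "directly as a waveform" in the passage's terminology and is an illustration, not an excerpt from any standard implementation.

    import math

    def mu_law_compress(x, mu=255.0):
        """Map a sample x in [-1, 1] through the mu-law companding curve,
        which spends more resolution on the quiet samples that dominate speech."""
        return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

    def mu_law_expand(y, mu=255.0):
        """Inverse companding."""
        return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

    x = 0.3
    assert abs(mu_law_expand(mu_law_compress(x)) - x) < 1e-12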

  1. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  2. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which comprises mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models and the control system. The Aztheca code is validated against plant data, as well as against predictions from the manufacturer, when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and with the Aztheca model. The results show that both RELAP-5 and the Aztheca code are able to adequately predict the behavior of the reactor. (Author)

  3. Ready, steady… Code!

    CERN Multimedia

    Anaïs Schaeffer

    2013-01-01

    This summer, CERN took part in the Google Summer of Code programme for the third year in succession. Open to students from all over the world, this programme leads to very successful collaborations on open source software projects.   Image: GSoC 2013. Google Summer of Code (GSoC) is a global programme that offers student developers grants to write code for open-source software projects. Since its creation in 2005, the programme has brought together some 6,000 students from over 100 countries worldwide. The students selected by Google are paired with a mentor from one of the participating projects, which can be led by institutes, organisations, companies, etc. This year, CERN PH Department’s SFT (Software Development for Experiments) Group took part in the GSoC programme for the third time, submitting 15 open-source projects. “Once published on the Google Summer of Code website (in April), the projects are open to applications,” says Jakob Blomer, one of the o...

  4. PEAR code review

    International Nuclear Information System (INIS)

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  5. Vocable Code

    DEFF Research Database (Denmark)

    Soon, Winnie; Cox, Geoff

    2018-01-01

    A computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other is a mix of computer programming syntax and human language. In this sense queer code can be understood as both an object and subject of study that intervenes in the world's 'becoming' and in how material bodies are produced via human and nonhuman practices. Through mixing the natural and computer language, this article presents a script in six parts from a performative lecture for two persons...

  6. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault and their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's manual, describing simulation procedures, input data preparation, output and example test cases.

  7. tomo3d: a new 3-D joint refraction and reflection travel-time tomography code for active-source seismic data

    Science.gov (United States)

    Meléndez, A.; Korenaga, J.; Sallares, V.; Ranero, C. R.

    2012-12-01

    We present the development state of tomo3d, a code for three-dimensional refraction and reflection travel-time tomography of wide-angle seismic data based on the previous two-dimensional version of the code, tomo2d. The core of both forward and inverse problems is inherited from the 2-D version. The ray tracing is performed by a hybrid method combining the graph and bending methods. The graph method finds an ordered array of discrete model nodes, which satisfies Fermat's principle, that is, whose corresponding travel time is a global minimum within the space of discrete nodal connections. The bending method is then applied to produce a more accurate ray path by using the nodes as support points for an interpolation with beta-splines. Travel time tomography is formulated as an iterative linearized inversion, and each step is solved using an LSQR algorithm. In order to avoid the singularity of the sensitivity kernel and to reduce the instability of inversion, regularization parameters are introduced in the inversion in the form of smoothing and damping constraints. Velocity models are built as 3-D meshes, and velocity values at intermediate locations are obtained by trilinear interpolation within the corresponding pseudo-cubic cell. Meshes are sheared to account for topographic relief. A floating reflector is represented by a 2-D grid, and depths at intermediate locations are calculated by bilinear interpolation within the corresponding square cell. The trade-off between the resolution of the final model and the associated computational cost is controlled by the relation between the selected forward star for the graph method (i.e. the number of nodes that each node considers as its neighbors) and the refinement of the velocity mesh. Including reflected phases is advantageous because it provides a better coverage and allows us to define the geometry of those geological interfaces with velocity contrasts sharp enough to be observed on record sections. The code also
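
    Of the components described, the velocity lookup is the easiest to show compactly. The sketch below performs the trilinear interpolation within a pseudo-cubic cell that the abstract mentions, on an unsheared regular mesh for simplicity; the shearing for topographic relief and all other machinery of tomo3d are omitted, and the code is an illustration rather than an excerpt.

    import numpy as np

    def trilinear_velocity(v, x, y, z):
        """Velocity at fractional grid coordinates (x, y, z), interpolated
        trilinearly from the 8 corners of the enclosing cell of mesh v[i, j, k].
        Assumes 0 <= x < v.shape[0] - 1, and likewise for y and z."""
        i, j, k = int(x), int(y), int(z)
        fx, fy, fz = x - i, y - j, z - k
        cell = v[i:i + 2, j:j + 2, k:k + 2]          # 2x2x2 corner values
        cx = cell[0] * (1 - fx) + cell[1] * fx       # collapse the x axis
        cy = cx[0] * (1 - fy) + cx[1] * fy           # collapse the y axis
        return cy[0] * (1 - fz) + cy[1] * fz         # collapse the z axis

    # Example on an invented 4x4x4 mesh (velocity linear in i + j + k,
    # so the interpolation is exact here):
    mesh = np.fromfunction(lambda i, j, k: 3.0 + 0.1 * (i + j + k), (4, 4, 4))
    v_at = trilinear_velocity(mesh, 1.5, 2.25, 0.75)   # -> 3.45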

  8. Diclofenac Salts. V. Examples of Polymorphism among Diclofenac Salts with Alkyl-hydroxy Amines Studied by DSC and HSM

    Directory of Open Access Journals (Sweden)

    Adamo Fini

    2010-04-01

    Full Text Available Nine diclofenac salts prepared with alkyl-hydroxy amines were analyzed by DSC and HSM techniques for their tendency to form polymorphs. Thermograms of the forms prepared from water or acetone differ in most cases, suggesting frequent examples of polymorphism among these salts. Polymorph transitions are better highlighted when the analysis is carried out by thermo-microscopy, which in most cases made it possible to observe the melting of the metastable form and the re-crystallization of the stable one. Solubility values were qualitatively related to the crystal structure of the salts and the molecular structure of the cation.

  9. DSC Studies of Retrogradation and Amylose-Lipid Complex Transition Taking Place in Gamma Irradiated Wheat Starch

    International Nuclear Information System (INIS)

    Ciesla, K.

    2006-01-01

    Degradation resulting from gamma irradiation decreases the order of starch granules and influences the processes occurring in the starch-water system. Differential scanning calorimetry (DSC) was applied here to study the effect of irradiation with doses of 5 - 30 kGy on the amylose-lipid complex transition and the retrogradation occurring in wheat starch gels. The influence of the conditions applied during the DSC measurements and intermediate storage on the possibility of observing the radiation effect was tested. Wheat starch was irradiated with 60Co gamma rays in a gamma cell Issledovatiel placed in the Department of Radiation Chemistry, INCT. DSC measurements were performed for ca. 50% and ca. 20% gels during heating - cooling - heating cycles (up to 3 cycles) in the temperature range 10 - 150 degree C at heating and cooling rates of 10, 5 and 2.5 degree min-1. The Seiko DSC 6200 calorimeter was used. A decrease in the amylose-lipid complex transition temperature was found already after irradiation of wheat starch with a dose of 5 kGy, showing modification of the complex structure. The differences between the irradiated and the non-irradiated samples became easier to see in each successive heating or cooling cycle as compared to the preceding one. This is because thermal treatment causes a decrease of the transition temperature in all the irradiated samples, with no effect or an increase of that temperature observed in the non-irradiated ones. Irradiation hinders the retrogradation taking place in ca. 50% gels but facilitates the retrogradation occurring in ca. 20% gels. Moreover, the expanded differences between the amylose-lipid complexes formed in the irradiated and non-irradiated gels result from their recrystallisation. Storage of the gels induces a decrease in the temperature of the complex transition as compared to the last cycle of the first analysis. That decrease was, however, more significant in the case of all the irradiated samples than in the case of the initial sample. In

  10. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  11. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project1. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association...... Coding Pirates2. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator of the research and development environment Digitalisering i Skolen (DiS), Institut for Skole og Læring, Professionshøjskolen Metropol; and Stine Ejsing-Duun, Associate Professor in learning technology, interaction design......, design thinking and design pedagogy, Forskningslab: It og Læringsdesign (ILD-LAB), Institut for kommunikation og psykologi, Aalborg Universitet, Copenhagen. We followed and carried out the evaluation and documentation of the Coding Class project from November 2016 to May 2017...

  12. Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  13. ANIMAL code

    International Nuclear Information System (INIS)

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and presents its physical model. Temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm. The functions of the algorithm's FORTRAN subroutines and variables are outlined.

  14. Network Coding

    Indian Academy of Sciences (India)

    Network Coding. K V Rashmi, Nihar B Shah and P Vijay Kumar. General Article, Resonance – Journal of Science Education, Volume 15, Issue 7, July 2010, pp. 604-621. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621

  15. MCNP code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States available in the public domain. The present code is the direct descendant of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids.

  16. Expander Codes

    Indian Academy of Sciences (India)

    Expander Codes - The Sipser–Spielman Construction. Priti Shankar. General Article, Resonance – Journal of Science Education, Volume 10, Issue 1. Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.

  17. Combining Diffusion Tensor Metrics and DSC Perfusion Imaging: Can It Improve the Diagnostic Accuracy in Differentiating Tumefactive Demyelination from High-Grade Glioma?

    Science.gov (United States)

    Hiremath, S B; Muraleedharan, A; Kumar, S; Nagesh, C; Kesavadas, C; Abraham, M; Kapilamoorthy, T R; Thomas, B

    2017-04-01

    Tumefactive demyelinating lesions with atypical features can mimic high-grade gliomas on conventional imaging sequences. The aim of this study was to assess the role of conventional imaging, DTI metrics (p:q tensor decomposition), and DSC perfusion in differentiating tumefactive demyelinating lesions and high-grade gliomas. Fourteen patients with tumefactive demyelinating lesions and 21 patients with high-grade gliomas underwent brain MR imaging with conventional, DTI, and DSC perfusion imaging. Imaging sequences were assessed for differentiation of the lesions. DTI metrics in the enhancing areas and perilesional hyperintensity were obtained by ROI analysis, and the relative CBV values in enhancing areas were calculated on DSC perfusion imaging. Conventional imaging sequences had a sensitivity of 80.9% and specificity of 57.1% in differentiating high-grade gliomas (P = .049) from tumefactive demyelinating lesions. DTI metrics (p:q tensor decomposition) and DSC perfusion demonstrated a statistically significant difference in the mean values of ADC, the isotropic component of the diffusion tensor, the anisotropic component of the diffusion tensor, the total magnitude of the diffusion tensor, and rCBV among enhancing portions in tumefactive demyelinating lesions and high-grade gliomas (P ≤ .02), with the highest specificity for ADC, the anisotropic component of the diffusion tensor, and relative CBV (92.9%). Mean fractional anisotropy values showed no statistically significant difference between tumefactive demyelinating lesions and high-grade gliomas. The combination of DTI and DSC parameters improved the diagnostic accuracy (area under the curve = 0.901). Addition of a heterogeneous enhancement pattern to DTI and DSC parameters improved it further (area under the curve = 0.966). The sensitivity increased from 71.4% to 85.7% after the addition of the enhancement pattern. DTI and DSC perfusion add profoundly to conventional imaging in differentiating tumefactive
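
    The p:q tensor decomposition named above reduces, for a diffusion tensor with eigenvalues λ1, λ2, λ3, to an isotropic component p, an anisotropic component q and a total magnitude L. A minimal sketch, assuming the standard definitions from the p:q decomposition literature rather than the authors' own implementation:

    import numpy as np

    def pq_decomposition(eigenvalues):
        # p: isotropic component, q: anisotropic component, l: total
        # magnitude of the diffusion tensor; fa is the usual fractional
        # anisotropy, included for comparison.
        lam = np.asarray(eigenvalues, dtype=float)
        md = lam.mean()                        # mean diffusivity (the ADC)
        p = np.sqrt(3.0) * md                  # isotropic component
        q = np.sqrt(((lam - md) ** 2).sum())   # anisotropic component
        l = np.hypot(p, q)                     # total magnitude, sqrt(p^2+q^2)
        fa = np.sqrt(1.5) * q / l              # fractional anisotropy
        return p, q, l, fa

    # e.g. a voxel with eigenvalues in units of 10^-3 mm^2/s (illustrative):
    print(pq_decomposition([1.9, 1.1, 0.8]))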

  18. Revised SRAC code system

    International Nuclear Information System (INIS)

    Tsuchihashi, Keichiro; Ishiguro, Yukio; Kaneko, Kunio; Ido, Masaru.

    1986-09-01

    Since the publication of JAERI-1285 in 1983 for the preliminary version of the SRAC code system, a number of additions and modifications to the functions have been made to establish an overall neutronics code system. Major points are (1) addition of JENDL-2 version of data library, (2) a direct treatment of doubly heterogeneous effect on resonance absorption, (3) a generalized Dancoff factor, (4) a cell calculation based on the fixed boundary source problem, (5) the corresponding edit required for experimental analysis and reactor design, (6) a perturbation theory calculation for reactivity change, (7) an auxiliary code for core burnup and fuel management, etc. This report is a revision of the users manual which consists of the general description, input data requirements and their explanation, detailed information on usage, mathematics, contents of libraries and sample I/O. (author)

  19. ATD and DSC Analysis of IN-713C and ZhS6U-VI Superalloys

    Directory of Open Access Journals (Sweden)

    Binczyk F.

    2017-03-01

    Full Text Available This paper presents the results of ATD and DSC analysis of two superalloys used in the casting of aircraft engine parts. The main aim of the research was to obtain the solidification parameters, especially Tsol and Tliq, knowledge of which is important for proper selection of casting and heat treatment parameters. Assessment of the metallurgical quality (presence of impurities) of the feed ingots is also a very important step in the production of castings. It was found that some of the feed ingots delivered by the superalloy producers are contaminated by oxides located in shrinkage defects. The ATD analysis allows quite precise interpretation of the first stages of solidification, at which solid phases with low values of latent heat of solidification are formed from the liquid. Using DSC analysis it is possible to measure precisely the heat values accompanying the phase changes during cooling and heating, which, with knowledge of the phase composition, permits calculation of the enthalpy of formation of specific phases like γ or γ′.

  20. Combination of (M)DSC and surface analysis to study the phase behaviour and drug distribution of ternary solid dispersions.

    Science.gov (United States)

    Meeus, Joke; Scurr, David J; Chen, Xinyong; Amssoms, Katie; Davies, Martyn C; Roberts, Clive J; Van den Mooter, Guy

    2015-04-01

    Miscibility of the different compounds that make up a solid dispersion based formulation plays a crucial role in the drug release profile and physical stability of the solid dispersion, as it defines the phase behaviour of the dispersion. The standard technique for obtaining information on the phase behaviour of a sample is (modulated) differential scanning calorimetry ((M)DSC). However, for ternary mixtures (M)DSC alone is not sufficient to characterize the phase behaviour and to gain insight into the distribution of the active pharmaceutical ingredient (API) in a two-phased polymeric matrix. MDSC was therefore combined with complementary surface analysis techniques, specifically time-of-flight secondary ion mass spectrometry (ToF-SIMS) and atomic force microscopy (AFM). Three spray-dried model formulations with varying API/PLGA/PVP ratios were analyzed. MDSC, ToF-SIMS and AFM provided insights into differences in drug distribution via the observed surface coverage for the three differently composed ternary solid dispersions. Combining MDSC and surface analysis rendered additional insights into the composition of mixed phases in complex systems like ternary solid dispersions.

  1. DSC and EPR investigations on effects of cholesterol component on molecular interactions between paclitaxel and phospholipid within lipid bilayer membrane.

    Science.gov (United States)

    Zhao, Lingyun; Feng, Si-Shen; Kocherginsky, Nikolai; Kostetski, Iouri

    2007-06-29

    Differential scanning calorimetry (DSC) and electron paramagnetic resonance spectroscopy (EPR) were applied to investigate the effects of the cholesterol component on molecular interactions between paclitaxel, one of the best antineoplastic agents found in nature, and dipalmitoylphosphatidylcholine (DPPC) within lipid bilayer vesicles (liposomes), which can also be used as a model cell membrane. DSC analysis showed that incorporation of paclitaxel into the DPPC bilayer causes a reduction in the cooperativity of the bilayer phase transition, leading to a looser and more flexible bilayer structure. Including a cholesterol component in the DPPC/paclitaxel mixed bilayer can facilitate the molecular interaction between paclitaxel and lipid and make the ternary system more stable. EPR analysis demonstrated that both paclitaxel and cholesterol have a fluidizing effect on DPPC bilayer membranes, although cholesterol has a more significant effect than paclitaxel. The reduction kinetics of nitroxides by ascorbic acid, which was found to be a first-order reaction, showed that paclitaxel can inhibit the reaction by blocking the diffusion of either the ascorbic acid or the nitroxide molecules. Cholesterol can remarkably increase the reduction reaction speed. This research may provide useful information for optimizing liposomal formulations of the drug as well as for understanding the pharmacology of paclitaxel.

  2. Fundamentals of information theory and coding design

    CERN Document Server

    Togneri, Roberto

    2003-01-01

    In a clear, concise, and modular format, this book introduces the fundamental concepts and mathematics of information and coding theory. The authors emphasize how a code is designed and discuss the main properties and characteristics of different coding algorithms along with strategies for selecting the appropriate codes to meet specific requirements. They provide comprehensive coverage of source and channel coding, address arithmetic, BCH, and Reed-Solomon codes and explore some more advanced topics such as PPM compression and turbo codes. Worked examples and sets of basic and advanced exercises in each chapter reinforce the text's clear explanations of all concepts and methodologies.

  3. Monte Carlo calculation for the development of a BNCT neutron source (1 eV-10 keV) using the MCNP code.

    Science.gov (United States)

    El Moussaoui, F; El Bardouni, T; Azahra, M; Kamili, A; Boukhal, H

    2008-09-01

    Different materials have been studied in order to produce an epithermal neutron beam between 1 eV and 10 keV, which is extensively used to irradiate patients with brain tumors such as GBM. For this purpose, we have studied three different neutron moderators (H(2)O, D(2)O and BeO) and their combinations, four reflectors (Al(2)O(3), C, Bi, and Pb) and two filters (Cd and Bi). The calculations showed that the best assembly configuration corresponds to the combination of the three moderators H(2)O, BeO and D(2)O together with an Al(2)O(3) reflector and the two filters Cd+Bi; it optimizes the epithermal neutron content of the spectrum at 72% and minimizes the thermal neutron content to 4%, and thus it can be used to treat deep brain tumors. The calculations have been performed by means of the Monte Carlo N-Particle code (MCNP 5C). Our results strongly encourage further study of irradiation of the head with epithermal neutron fields.
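
    The 72% epithermal and 4% thermal figures are band fractions of the tallied neutron spectrum. A minimal sketch of that bookkeeping on a binned spectrum (the bin-edge layout is an assumption; parsing actual MCNP tally output is omitted):

    import numpy as np

    def band_fractions(e_bounds, flux):
        # e_bounds: energy bin edges in eV, length n+1; flux: per-bin
        # fluxes, length n. Bins are assumed to fall entirely inside one
        # band; refine the edges at 1 eV and 10 keV otherwise.
        e_mid = 0.5 * (np.asarray(e_bounds[:-1]) + np.asarray(e_bounds[1:]))
        flux = np.asarray(flux, dtype=float)
        total = flux.sum()
        thermal = flux[e_mid < 1.0].sum() / total
        epithermal = flux[(e_mid >= 1.0) & (e_mid < 1.0e4)].sum() / total
        return thermal, epithermal

    # Illustrative 4-bin spectrum (edges in eV, fluxes in arbitrary units):
    print(band_fractions([1e-3, 1.0, 1e2, 1e4, 1e7], [4.0, 40.0, 32.0, 24.0]))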

  4. Panda code

    International Nuclear Information System (INIS)

    Altomare, S.; Minton, G.

    1975-02-01

    PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)

  5. SKEMA - A computer code to estimate atmospheric dispersion

    International Nuclear Information System (INIS)

    Sacramento, A.M. do.

    1985-01-01

    This computer code is a modified version of the DWNWND code, developed at Oak Ridge National Laboratory. The SKEMA code estimates the concentration in air of a material released into the atmosphere by a point source. (C.M.) [pt

  6. CANAL code

    International Nuclear Information System (INIS)

    Gara, P.; Martin, E.

    1983-01-01

    The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils [fr

  7. Determination of the multiplication factor and its bias by the 252Cf-source technique: A method for code benchmarking with subcritical configurations

    International Nuclear Information System (INIS)

    Perez, R.B.; Valentine, T.E.; Mihalczo, J.T.; Mattingly, J.K.

    1997-01-01

    A brief discussion of the Cf-252 source-driven method for subcritical measurements serves as an introduction to the concept and use of the spectral ratio, Γ. It has also been shown that the Monte Carlo calculations of spectral densities and effective multiplication factors have the transport propagator as a common denominator. This commonality follows from the fact that the Neumann series expansion of the propagator lends itself to the Monte Carlo method. On this basis, a linear relationship between the spectral ratio and the effective multiplication factor has been shown. This relationship demonstrates the ability of subcritical measurements of the ratio of spectral densities to validate transport theory methods and cross sections
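
    The linear relationship mentioned above suggests a simple calibrate-then-infer workflow for benchmarking. The sketch below is illustrative only: the numbers are invented, and a least-squares fit stands in for the calculated relationship between Γ and k_eff.

    import numpy as np

    # Hypothetical calibration data: spectral ratios for subcritical
    # configurations of known effective multiplication factor.
    gamma = np.array([0.12, 0.21, 0.33, 0.46])   # spectral ratio (invented)
    keff  = np.array([0.80, 0.85, 0.90, 0.95])   # corresponding k_eff (invented)

    # Least-squares fit of the linear relation k_eff = a * Gamma + b.
    a, b = np.polyfit(gamma, keff, 1)

    def keff_from_gamma(g):
        # Infer k_eff of a measured configuration from its spectral ratio.
        return a * g + b

    print(keff_from_gamma(0.27))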

  8. Code of Practice

    International Nuclear Information System (INIS)

    Doyle, Colin; Hone, Christopher; Nowlan, N.V.

    1984-05-01

    This Code of Practice introduces accepted safety procedures associated with the use of alpha, beta, gamma and X-radiation in secondary schools (pupils aged 12 to 18) in Ireland, and summarises good practice and procedures as they apply to radiation protection. Typical dose rates at various distances from sealed sources are quoted, and simplified equations are used to demonstrate dose and shielding calculations. The regulatory aspects of radiation protection are outlined, and references to statutory documents are given

  9. Evaluation of the modified nanoclay effect on the vulcanization of SBR through rheometric curve and DSC

    Energy Technology Data Exchange (ETDEWEB)

    Forte, Maria Madalena C.; Brito, Karin J.S., E-mail: mmcforte@ufrgs.b [Universidade Federal do Rio Grande do Sul (PPGEM/UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Gheller Junior, Jordao [SENAI, Sao Leopoldo, RS (Brazil). Centro Tecnologico de Polimeros

    2009-07-01

    Rubber nanocomposites with nanoclays organically modified by quaternary ammonium salts may have significantly modified curing features, since the salts may act on the rubber cure system. The aim of this work is to evaluate the influence of an organically modified montmorillonite (OMMT) on the curing reaction of an SBR (styrene butadiene rubber) with sulfur. The SBR/OMMT nanocomposites were prepared by co-coagulating SBR latex and a Cloisite® 20A aqueous suspension at different nanoclay concentrations. The OMMT effect on the sulfur curing reaction was evaluated through the rheometric curve, using an RPA-type rheometer (Rubber Process Analyzer), and through the heat of vulcanization (ΔHv), using differential scanning calorimetry (DSC). The dispersion of the clay nanolayers in the SBR matrix was evaluated by X-ray diffraction (XRD) analysis. (author)

  10. Dynamics of Polymorphic Transformations in Palm Oil, Palm Stearin and Palm Kernel Oil Characterized by Coupled Powder XRD-DSC.

    Science.gov (United States)

    Zaliha, Omar; Elina, Hishamuddin; Sivaruby, Kanagaratnam; Norizzah, Abd Rashid; Marangoni, Alejandro G

    2018-06-01

    The in situ polymorphic forms and thermal transitions of refined, bleached and deodorized palm oil (RBDPO), palm stearin (RBDPS) and palm kernel oil (RBDPKO) were investigated using coupled X-ray diffraction (XRD) and differential scanning calorimetry (DSC). Results indicated that the DSC onset crystallisation temperature of RBDPO was 22.6°C, with a single reflection at 4.2 Å starting to appear from 23.4 to 17.1°C, followed by two prominent exothermic peaks at 20.1°C and 8.5°C respectively. Further cooling to -40°C led to the further formation of a β' polymorph. Upon heating, a β'→β transformation was observed between 32.1 and 40.8°C, before the sample was completely melted at 43.0°C. The crystallization onset temperature of RBDPS was 44.1°C, with the appearance of the α polymorph at the same temperature as the first sharp DSC exothermic peak. This quickly changed from α→β' in the range 25 to 21.7°C, along with the formation of a small β peak at -40°C. Upon heating, a small XRD peak for the β polymorph was observed between 32.2 and 36.0°C, becoming a mixture of (β' + β) between 44.0 and 52.5°C. Only the β polymorph survived further heating to 59.8°C. For RBDPKO, the crystallization onset temperature was 11.6°C, with the formation of a single sharp exothermic peak at 6.5°C corresponding to the β' polymorphic form until the temperature reached -40°C. No transformation of the polymorphic form was observed during the melting of RBDPKO, which melted completely at 33.2°C. This work has demonstrated the detailed dynamics of the polymorphic transformations of PKO and PS, two commercially important hardstocks used widely by industry, and will contribute to a greater understanding of their crystallization and melting dynamics.

  11. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these, the necessary level of agreement depending upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes, it is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks is included in the source code, and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients, large integral experiments, and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer

  12. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  13. Azide derivatized anticancer agents of Vitamin K3: X-ray structural, DSC, resonance spectral and API studies

    Science.gov (United States)

    Badave, Kirti; Patil, Yogesh; Gonnade, Rajesh; Srinivas, Darbha; Dasgupta, Rajan; Khan, Ayesha; Rane, Sandhya

    2011-12-01

    Compound 1 [1-imino (acetyl hydrazino)-Vitamin K3] displays valence-tautomerically related electronic isomers, Form I and Form II. Form I exhibits a 2D packing fragment with 1D ribbon chains of N-H⋯O hydrogen bonds and is EPR silent, while Form II is EPR active and exhibits biradical character with double quantum transitions at g = 2.0040. 1H NMR shows that compound 2, [1-imino (hydrazino carboxylate)-Vitamin K3], and Form II exhibit π delocalization via a resonance-assisted H-bonding [RAHB] effect compared to Form I. Molecular interactions in Forms I and II are visualized by DSC. The electronic structures of compounds 1 and 2 have been correlated to their API values by measuring anticancer activities, mitochondrial potentials and DNA shearing patterns. Form II and compound 2 induce mitochondria-mediated apoptosis (˜75% cell death) while Form I causes 35% cell death.

  14. The effects of urea and n-propanol on collagen denaturation: using DSC, circular dichroism and viscosity

    International Nuclear Information System (INIS)

    Usha, R.; Ramasami, T.

    2004-01-01

    The effects of urea and n-propanol on the circular dichroism (CD) and viscosity of purified type I collagen solution at various temperatures, and on the differential scanning calorimetry (DSC) of rat-tail tendon (RTT) collagen fibre, have been studied. CD reveals a spectrum with a positive peak at around 220 nm and a negative peak at 200 nm, characteristic of the collagen triple helix. The molar ellipticity decreases as the concentration of urea increases up to a particular concentration (collagen solution treated with 265 μM of urea) and after that it increases (collagen solution treated with 500 μM of urea). There is a linear decrease in molar ellipticity as the concentration of n-propanol increases. The denaturation temperature of purified collagen solution treated with urea and n-propanol has been studied using the viscosity method. Additives such as urea and n-propanol decrease the thermal stability of the collagen triple helix in solution and in RTT collagen fibre. The thermal helix-to-coil transition of urea- and n-propanol-treated collagen depends on the degree of hydration and the concentration of these additives. Thermodynamic parameters such as the peak temperature, enthalpy of activation, and energy of activation for the collagen-gelatin transition of native, urea- and n-propanol-treated RTT collagen fibre have been calculated using DSC. Changes in the thermodynamic parameters were observed for native, urea- and n-propanol-treated RTT collagen fibres. The experimental results show that the change in water structure, dehydration and desolvation induced by different additives such as urea and n-propanol on RTT may vary with the type of denaturation

  15. Characterizing crystal disorder of trospium chloride: a comprehensive (13)C CP/MAS NMR, DSC, FTIR, and XRPD study.

    Science.gov (United States)

    Urbanova, Martina; Sturcova, Adriana; Brus, Jiri; Benes, Hynek; Skorepova, Eliska; Kratochvil, Bohumil; Cejka, Jan; Sedenkova, Ivana; Kobera, Libor; Policianova, Olivia; Sturc, Antonin

    2013-04-01

    Analysis of (13)C cross-polarization magic angle spinning (CP/MAS) nuclear magnetic resonance (NMR), differential scanning calorimetry (DSC), Fourier transform infrared (FTIR), and X-ray powder diffraction data of trospium chloride (TCl) products crystallized from different mixtures of water-ethanol [φ(EtOH) = 0.5-1.0] at various temperatures (0°C, 20°C) and initial concentrations (saturated solution, 30%-50% excess of solvent) revealed extensive structural variability of TCl. Although (13)C CP/MAS NMR spectra indicated a broad variety of structural phases arising from molecular disorder, temperature-modulated DSC identified the presence of two distinct components in the products. FTIR spectra revealed alterations in the hydrogen bonding network (ionic hydrogen bond formation), whereas the X-ray diffraction reflected unchanged unit cell parameters. These results were explained by a two-component character of the TCl products, in which a dominant polymorphic form is accompanied by partly separated nanocrystalline domains of a secondary phase that does not provide clear Bragg reflections. These phases differ slightly in the degree of molecular disorder and in the quality of the crystal lattice and hydrogen bonding network. It is also demonstrated that, for the quality control of such complex products, (13)C CP/MAS NMR spectroscopy combined with factor analysis (FA) can satisfactorily be used for categorizing individual samples: FA of (13)C CP/MAS NMR spectra found clear relationships between the extent of molecular disorder and the crystallization conditions. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association. J Pharm Sci 102:1235-1248, 2013.

  16. Dosimetry of boron neutron capture therapy in liver cancer (hepatocellular carcinoma) by means of the MCNP code with a neutron source from a thermal column

    International Nuclear Information System (INIS)

    Irhas; Andang Widi Harto; Yohannes Sardjono

    2014-01-01

    Boron Neutron Capture Therapy (BNCT) uses the physical principle that B-10 (boron-10), when irradiated by low-energy (thermal) neutrons, reacts to produce B-11m (boron-11m) (t 1/2 = 10-2 s). B-11m decays, emitting an alpha particle, a Li-7 (lithium-7) particle and gamma rays. The irradiation time must be long enough to ensure a sufficient dose to the cancer. Liver cancer (hepatocellular carcinoma) is a primary malignancy located in the liver, as distinct from metastases in the liver from breast, colon and other cancers (metastatic liver cancer). The Monte Carlo method was used via the Monte Carlo N-Particle (MCNP) software: a probabilistic approach in which the probabilities of interactions are sampled and records are kept according to the characteristics of particle and material. In this case, thermal neutrons were produced by a model of the collimated thermal column of the Kartini Research Nuclear Reactor, Yogyakarta. The modelled geometry comprised a liver organ containing cancer tissue and the research reactor. The boron concentration was varied over 20, 25, 30, 35, 40, 45, and 47 µg/g cancer. Outputs of the MCNP calculation were the neutron scattering dose, the gamma-ray dose and the neutron flux from the reactor. The neutron flux was used to calculate the alpha, proton and gamma-ray doses from the interaction of tissue material with thermal neutrons. The dose rates for the boron concentration variations were 0.059, 0.072, 0.084, 0.098, 0.108, 0.12 and 0.125 Gy/sec, respectively. The irradiation times needed for each concentration were 841.5 sec (14 min 1 sec); 696.07 sec (11 min 36 sec); 593.11 sec (9 min 53 sec); 461.35 sec (8 min 30 sec); 461.238 sec (7 min 41 sec); 414.23 sec (6 min 54 sec); 398.38 sec (6 min 38 sec). The irradiation time becomes shorter as the boron concentration increases. (author)
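
    The time-dose arithmetic behind the list above is simply irradiation time = prescribed dose / dose rate; the quoted pairs imply a prescribed tumour dose of roughly 50 Gy (e.g. 0.059 Gy/sec × 841.5 sec ≈ 49.6 Gy), although the abstract does not state it explicitly. A minimal sketch, with that ~50 Gy target as an assumption:

    # Dose rates below are the abstract's; the ~50 Gy target is inferred
    # from its time-dose pairs, not stated in the source.
    PRESCRIBED_DOSE_GY = 50.0

    dose_rates_gy_per_s = {20: 0.059, 25: 0.072, 30: 0.084, 35: 0.098,
                           40: 0.108, 45: 0.12, 47: 0.125}  # ug 10B per g cancer

    for conc, rate in dose_rates_gy_per_s.items():
        t = PRESCRIBED_DOSE_GY / rate
        print(f"{conc} ug/g: {t:7.1f} sec ({int(t // 60)} min {t % 60:.0f} sec)")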

  17. New code of conduct

    CERN Multimedia

    Laëtitia Pedroso

    2010-01-01

    During his talk to the staff at the beginning of the year, the Director-General mentioned that a new code of conduct was being drawn up. What exactly is it and what is its purpose? Anne-Sylvie Catherin, Head of the Human Resources (HR) Department, talked to us about the whys and wherefores of the project.   Drawing by Georges Boixader from the cartoon strip “The World of Particles” by Brian Southworth. A code of conduct is a general framework laying down the behaviour expected of all members of an organisation's personnel. “CERN is one of the very few international organisations that don’t yet have one", explains Anne-Sylvie Catherin. “We have been thinking about introducing a code of conduct for a long time but lacked the necessary resources until now”. The call for a code of conduct has come from different sources within the Laboratory. “The Equal Opportunities Advisory Panel (read also the "Equal opportuni...

  18. MELCOR computer code manuals

    Energy Technology Data Exchange (ETDEWEB)

    Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L. [Sandia National Labs., Albuquerque, NM (United States); Hodge, S.A.; Hyman, C.R.; Sanders, R.L. [Oak Ridge National Lab., TN (United States)

    1995-03-01

    MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August, 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.

  20. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in temporal discrimination tasks, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, multiple-coding and single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.
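
    The two hypotheses amount to two different response rules, which can be written out directly. A minimal sketch, assuming one plausible reading of each rule (the log-scale nearest-neighbour generalization is an illustrative choice, not the authors' model):

    import math

    TRAINED = {2.0: "red", 6.0: "green", 18.0: "green"}

    def multiple_coding(sample_s):
        # One response rule per trained sample; a novel duration is referred
        # to the nearest trained duration on a log scale. At 3.5 s, the
        # geometric mean of 2 and 6, the 2-s and 6-s rules tie exactly.
        nearest = min(TRAINED, key=lambda d: abs(math.log(sample_s / d)))
        return TRAINED[nearest]

    def single_code_default(perceived_as_2s):
        # Only the 2-s sample is explicitly coded; any other event -- a long
        # sample, a novel duration, or no sample at all -- gets the default.
        return "red" if perceived_as_2s else "green"

    print(multiple_coding(3.0))        # closer to 2 s -> 'red'
    print(single_code_default(False))  # retention-interval trials -> 'green'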

  1. DSC and HRTEM investigation of the precipitates in Al-1.0%Mg2Si-0.5%Ag alloy

    Energy Technology Data Exchange (ETDEWEB)

    Gaber, A.; Ali, A.M.; Zou, Y. [Toyama University (Japan). Venture Business Laboratory; Matsuda, K.; Ikeno, S. [Toyama University (Japan). Faculty of Engineering

    2004-12-15

    The understanding and control of nanoscale precipitation in an Al-1.0 wt.% Mg2Si-0.5 wt.% Ag alloy during artificial aging is critical for achieving optimum mechanical properties. To achieve this objective, both differential scanning calorimetry (DSC) and high resolution transmission electron microscopy (HRTEM) have been utilised. The non-isothermal DSC thermograms exhibited eight reaction peaks: six exothermic (precipitation) and two endothermic (dissolution) reactions. The activation energies associated with the individual precipitates were determined. With the aid of HRTEM, the evolved precipitates have been characterised. (author)

  2. Bronsted acid site number evaluation using isopropylamine decomposition on Y-zeolite contaminated with vanadium in a simultaneous DSC-TGA analyzer

    International Nuclear Information System (INIS)

    Osorio Perez, Yonnathan; Forero, Liliam Alexandra Palomeque; Torres, Diana Vanessa Cristiano; Trujillo, Carlos Alexander

    2008-01-01

    Acid-site-catalyzed decomposition of isopropylamine was followed in a simultaneous DSC-TGA analyzer. USY zeolite samples with and without vanadium were studied. Results show that the number of acid sites decreases linearly with the vanadium concentration in the zeolite, indicating that vanadium neutralizes acid sites on the catalyst and that the metal is able to move on the surface of the solid. The neutralizing species probably contain only one vanadium atom. The reaction enthalpy plus the desorption heat of the products shows that vanadium preferentially neutralizes the strongest acid sites on the zeolite. The application of the simultaneous DSC-TGA technique to quantify Bronsted acid sites on solids by this reaction is novel
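
    The site count itself is one line of stoichiometry: in the standard amine-decomposition assay, chemisorbed isopropylamine decomposes on Brønsted sites at nominally one molecule per site, so the TGA mass loss in the decomposition step counts sites. A minimal sketch, assuming the mass-loss step is read as desorbing amine (reading it as propene plus ammonia would change only the molar mass):

    def bronsted_sites_per_gram(mass_loss_mg, sample_mass_mg, M_amine=59.11):
        # mass_loss_mg: TGA mass loss in the amine decomposition window;
        # sample_mass_mg: dry catalyst mass; M_amine: isopropylamine, g/mol.
        AVOGADRO = 6.022e23
        mol = (mass_loss_mg / 1000.0) / M_amine      # moles of amine = sites
        return mol * AVOGADRO / (sample_mass_mg / 1000.0)  # sites per gram

    # Illustrative numbers (not the paper's): 0.30 mg lost from 10 mg sample.
    print(bronsted_sites_per_gram(mass_loss_mg=0.30, sample_mass_mg=10.0))
    # ~3e20 sites/g, a typical order of magnitude for an acidic zeolite.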

  3. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the Foxbase language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself among the upper and lower level codes of the selected one, which were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into another data processing program was possible. This program has the merits of simple operation, accurate and detailed coding, and easy adjustment for another program. Therefore, this program can be used for automation of routine work in the department of radiology

  4. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  5. Dynamic Shannon Coding

    OpenAIRE

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
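
    For reference, the static core of Shannon coding assigns each symbol of probability p a codeword of length ⌈log2(1/p)⌉; the dynamic variant in the paper recomputes the code as observed symbol frequencies evolve. A minimal sketch of the static lengths only:

    import math

    def shannon_code_lengths(probs):
        # Codeword length for each symbol: ceil(log2(1/p)). The Kraft sum
        # of these lengths is <= 1, so a prefix-free code with them exists.
        return {s: math.ceil(-math.log2(p)) for s, p in probs.items()}

    print(shannon_code_lengths({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
    # {'a': 1, 'b': 2, 'c': 3, 'd': 3}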

  6. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field. It includes two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding; Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes; distance properties of convolutional codes; and a downloadable solutions manual.

  7. Codes Over Hyperfields

    Directory of Open Access Journals (Sweden)

    Atamewoue Surdive

    2017-12-01

    Full Text Available In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for coding theory than codes over classical finite fields.

  8. LFSC - Linac Feedback Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Valentin; /Fermilab

    2008-05-01

    The computer program LFSC is a numerical tool for simulating beam-based feedback in high performance linacs. The code LFSC is based on an earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulations of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on a timescale corresponding to 5-100 Hz, and slower feedbacks operating in the 0.1-1 Hz range, in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. The set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data etc. The Matlab environment provides a flexible system for graphical output.

  9. Dublin South Central (DSC)

    LENUS (Irish Health Repository)

    O'Gorman, Clodagh S M

    2010-12-01

    Children who appear healthy, even if they have one or more recognized cardiovascular risk factors, do not generally have outcomes of cardiovascular or other vascular disease during childhood. Historically, pediatric medicine has not aggressively screened for or treated cardiovascular risk factors in otherwise healthy children. However, studies such as the P-Day Study (Pathobiological Determinants of Atherosclerosis in Youth), and the Bogalusa Heart Study, indicate that healthy children at remarkably young ages can have evidence of significant atherosclerosis. With the increasing prevalence of pediatric obesity, can we expect more health problems related to the consequences of pediatric dyslipidemia, hypertriglyceridemia, and atherosclerosis in the future? For many years, medications have been available and used in adult populations to treat dyslipidemia. In recent years, reports of short-term safety of some of these medications in children have been published. However, none of these studies have detailed long-term follow-up, and therefore none have described potential late side-effects of early cholesterol-lowering therapy, or potential benefits in terms of reduction of or delay in cardiovascular or other vascular end-points. In 2007, the American Heart Association published a scientific statement on the use of cholesterol-lowering therapy in pediatric patients. In this review paper, we discuss some of the current literature on cholesterol-lowering therapy in children, including the statins that are currently available for use in children, and some of the cautions with using these and other cholesterol-lowering medications. A central tenet of this review is that medications are not a substitute for dietary and lifestyle interventions, and that even in children on cholesterol-lowering medications, physicians should take every opportunity to encourage children and their parents to make healthy diet and lifestyle choices.

  10. Study on the characterization and thermal decomposition of uranium compounds by thermogravimetry (TG) and differential scanning calorimetry (DSC)

    International Nuclear Information System (INIS)

    Dantas, J.M.; Abrao, A.

    1981-04-01

    A contribution to the characterization of several uranium compounds obtained at the IPEN Uranium Pilot Plant is given. In particular, samples of ammonium diuranate (ADU) and uranium oxides were studied. The main objective was to know the stoichiometry of the ADU and of the oxides resulting from its thermal transformation. ADU samples were prepared by batchwise precipitation, stationary dewatering in a stove and batchwise thermal decomposition, or, alternatively, by continuous precipitation, continuous filtration, continuous drying and continuous thermal decomposition inside a temperature-gradient electrical furnace. All ADU samples were precipitated using NH3 gas from uranyl sulfate or uranyl nitrate solutions. The thermal decomposition of ADU and uranium oxides was studied in an air atmosphere by thermogravimetry (TG) and differential scanning calorimetry (DSC). Correlations between the parameters of precipitation, drying and calcination, the history of the preparation of the several uranium compounds, and their initial and final compositions were sought. The heating program was established so as to give the U3O8 oxide as the final product. Intermediary phases were tentatively identified. The temperatures at which the elimination of absorbed water, the elimination of crystallization water, the evolution or oxidation of NH3, the decomposition of the NO3- ion and the evolution of oxygen occurred, and the exo- and endothermic processes for each sample, were identified. (Author) [pt

  11. Evaluation of crystallization kinetics of poly(ether-ketone-ketone) and poly(ether-ether-ketone) by DSC

    Directory of Open Access Journals (Sweden)

    Gibran da Cunha Vasconcelos

    2010-08-01

    Full Text Available The poly(aryl ether ketone)s are used as matrices in advanced composites with high performance due to their high thermal stability, excellent environmental performance and superior mechanical properties. Most of the physical, mechanical and thermodynamic properties of semi-crystalline polymers depend on the degree of crystallinity and the morphology of the crystalline regions. Thus, a study of the crystallization process gives a good prediction of how the manufacturing parameters affect the developed structure and the properties of the final product. The objective of this work was to evaluate the thermoplastic polymers PEKK and PEEK by DSC, aiming to obtain the relationship between the kinetics, content, nucleation and geometry of the crystalline phases, according to the parameters of the Avrami and Kissinger models. The analysis of the Avrami exponents obtained for the studied polymers indicates that both showed the formation of crystalline phases with heterogeneous nucleation and growth geometry of the rod or disc type, depending on the cooling conditions. It was also found that PEEK has a higher crystallinity than PEKK.
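
    The Avrami exponents discussed above come from fitting X(t) = 1 - exp(-k t^n) to relative-crystallinity data, usually via the double-log linearization ln(-ln(1 - X)) = ln k + n ln t. A minimal sketch with invented data (not the paper's measurements):

    import numpy as np

    def avrami_fit(t, x):
        # Linearized least-squares fit of the Avrami equation; returns
        # (n, k). Exponents near 1-2 are read as rod- or disc-like growth
        # with heterogeneous nucleation, as in the abstract.
        t, x = np.asarray(t, float), np.asarray(x, float)
        y = np.log(-np.log(1.0 - x))
        n, lnk = np.polyfit(np.log(t), y, 1)
        return n, np.exp(lnk)

    # Illustrative relative-crystallinity data generated with n=2, k=1e-3:
    t = np.array([10.0, 20.0, 40.0, 80.0])
    x = 1.0 - np.exp(-1e-3 * t**2)
    print(avrami_fit(t, x))   # ~ (2.0, 1e-3)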

  12. Comparison of first pass bolus AIFs extracted from sequential 18F-FDG PET and DSC-MRI of mice

    International Nuclear Information System (INIS)

    Evans, Eleanor; Sawiak, Stephen J.; Ward, Alexander O.; Buonincontri, Guido; Hawkes, Robert C.; Adrian Carpenter, T.

    2014-01-01

    Accurate kinetic modelling of in vivo physiological function using positron emission tomography (PET) requires determination of the tracer time–activity curve in plasma, known as the arterial input function (AIF). The AIF is usually determined by invasive blood sampling methods, which are prohibitive in murine studies due to low total blood volumes. Extracting AIFs from PET images is also challenging due to large partial volume effects (PVE). We hypothesise that in combined PET with magnetic resonance imaging (PET/MR), a co-injected bolus of MR contrast agent and PET ligand can be tracked using fast MR acquisitions. This protocol would allow extraction of an MR AIF from MR contrast agent concentration–time curves, at higher spatial and temporal resolution than an image-derived PET AIF. A conversion factor could then be applied to the MR AIF for use in PET kinetic analysis. This work has compared AIFs obtained from sequential DSC-MRI and PET, with separate injections of gadolinium contrast agent and 18F-FDG respectively, to ascertain the technique's validity. An automated voxel selection algorithm was employed to improve MR AIF reproducibility. We found that MR and PET AIFs displayed similar character in the first pass, confirmed by gamma variate fits (p<0.02). MR AIFs displayed reduced PVE compared to PET AIFs, indicating their potential use in PET/MR studies
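
    The gamma variate fits used to compare the two AIFs follow the standard first-pass bolus model C(t) = A (t - t0)^alpha exp(-(t - t0)/beta). A minimal sketch with synthetic data; the parameter values are illustrative, not the study's:

    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, A, t0, alpha, beta):
        # First-pass bolus model; zero before the arrival time t0.
        dt = np.clip(t - t0, 0.0, None)
        return A * dt**alpha * np.exp(-dt / beta)

    # Synthetic concentration-time samples (invented, with a little noise):
    np.random.seed(0)
    t = np.linspace(0.0, 30.0, 61)
    c = gamma_variate(t, 5.0, 3.0, 2.0, 1.5) + 0.05 * np.random.randn(t.size)

    popt, _ = curve_fit(gamma_variate, t, c, p0=(1.0, 2.0, 1.0, 1.0))
    print(popt)   # recovered (A, t0, alpha, beta)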

  13. An evaluation of the use of modulated temperature DSC as a means of assessing the relaxation behaviour of amorphous lactose.

    Science.gov (United States)

    Craig, D Q; Barsnes, M; Royall, P G; Kett, V L

    2000-06-01

    To evaluate the use of Modulated Temperature DSC (MTDSC) as a means of assessing the relaxation behaviour of amorphous lactose via measurement of the heat capacity, glass transition (Tg) and relaxation endotherm. Samples of amorphous lactose were prepared by freeze drying. MTDSC was conducted using a TA Instruments 2920 MDSC using a heating rate of 2 degrees C/minute, a modulation amplitude of +/-0.3 degrees C and a period of 60 seconds. Samples were cycled by heating to 140 degrees C and cooling to a range of annealing temperatures between 80 degrees C and 100 degrees C, followed by reheating through the Tg region. Systems were then recooled to allow for correction of the Tg shift effect. MTDSC enabled separation of the glass transition from the relaxation endotherm, thereby facilitating calculation of the relaxation time as a function of temperature. The relative merits of using MTDSC for the assessment of relaxation processes are discussed. In addition, the use of the fictive temperature rather than the experimentally derived Tg is outlined. MTDSC allows assessment of the glass transition temperature, the magnitude of the relaxation endotherm and the value of the heat capacity, thus facilitating calculation of relaxation times. Limitations identified with the approach include the slow scanning speed, the need for careful choice of experimental parameters and the Tg shift effect.
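
    One common route from these MTDSC quantities to a relaxation time, given here as an assumption rather than the paper's exact equations, normalizes the measured relaxation endotherm by the maximum recoverable enthalpy ΔH∞ = ΔCp·(Tg - Ta) and then inverts a stretched-exponential (KWW) decay:

    import math

    def relaxation_extent(dH_relax, dCp, Tg, Ta):
        # Fraction of enthalpy still unrelaxed after annealing at Ta.
        # dH_relax: relaxation endotherm (J/g); dCp: heat-capacity step at
        # Tg (J/g/K); Tg and Ta in consistent units.
        dH_inf = dCp * (Tg - Ta)
        return 1.0 - dH_relax / dH_inf

    def kww_tau(phi, t_anneal, beta=1.0):
        # Solve phi = exp(-(t/tau)^beta) for the relaxation time tau.
        return t_anneal / (-math.log(phi)) ** (1.0 / beta)

    # Illustrative numbers (not the study's): 8 h of annealing 15 K below Tg.
    phi = relaxation_extent(dH_relax=1.2, dCp=0.6, Tg=114.0, Ta=99.0)
    print(phi, kww_tau(phi, t_anneal=8 * 3600.0))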

  14. An investigation into the effects of residual water on the glass transition temperature of polylactide microspheres using modulated temperature DSC.

    Science.gov (United States)

    Passerini, N; Craig, D Q

    2001-05-18

    The objective of the study was to ascertain residual water levels in polylactide and polylactide-co-glycolide microspheres prepared using the solvent evaporation technique and to investigate the effects of that water on the glass transition behaviour of the microspheres. Microspheres were prepared from polylactic acid (PLA) and polylactide-co-glycolide (PLGA) 50:50 and 75:25 using a standard solvent evaporation technique. The glass transition was measured as a function of drying conditions using modulated temperature DSC. The microspheres were found to contain very low levels of dichloromethane, while residual water levels of up to circa 3% w/w were noted after freeze or oven drying, these levels being higher for microspheres containing higher glycolic acid levels. The residual water was found to lower the T(g) following the Gordon-Taylor relationship. The data indicate that the microparticles may retain significant water levels following standard preparation and drying protocols and that this residual water may markedly lower the T(g) of the spheres.
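
    The Gordon-Taylor relationship invoked above predicts the Tg of a polymer-water mixture from the Tgs of the pure components. The sketch below shows the arithmetic; the polymer Tg, the Tg of water and the constant K are assumed round numbers for illustration, not values from this study.

    def gordon_taylor(w_water, tg_polymer=318.0, tg_water=136.0, k=0.2):
        """Gordon-Taylor Tg (K) of a polymer/water mixture (weight fractions)."""
        w_p = 1.0 - w_water
        return (w_p * tg_polymer + k * w_water * tg_water) / (w_p + k * w_water)

    for w in (0.00, 0.01, 0.03):   # up to the circa 3% w/w residual water noted above
        print(f"{w:.0%} water -> Tg = {gordon_taylor(w) - 273.15:.1f} deg C")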

  15. Melting and crystallization of in-situ polymerized cyclic butylene terephthalates with and without organoclay: a modulated DSC study

    Directory of Open Access Journals (Sweden)

    2007-02-01

    Full Text Available The polymerization of cyclic butylene terephthalate oligomers (CBT) was studied in the presence (5 wt.%) and absence of an organoclay (Cloisite® 30B) by modulated DSC (MDSC). The organoclay-containing samples were produced by dry and melt blending, respectively. The first heating, causing the polymerization of the CBT catalyzed by an organotin compound, was followed by cooling prior to the second heating. The MDSC scans covered the temperature interval between 0 and 260°C. The aim of this protocol was to study the crystallization and melting behavior of the resulting polybutylene terephthalate (pCBT) and its organoclay-modified nanocomposites. It was found that the thermal behaviors of the polymerizing and polymerized CBT (pCBT) were strongly affected by the sample preparation. The organoclay suppressed the crystallization of the pCBT produced during the first heating. However, results from the second heating suggest that more perfect crystallites were formed in the organoclay-modified pCBT variants. The organoclay also affected the conversion and mean molecular mass of the resulting pCBT, which were slightly lower than those of the plain pCBT polymerized under identical conditions.

  16. The detection of amorphous material in a nominally crystalline drug using modulated temperature DSC--a case study.

    Science.gov (United States)

    Saklatvala, R; Royall, P G; Craig, D Q

    1999-12-01

    Two batches (1 and 2) of an experimental drug (L7) which have shown marked differences in their chemical stability profiles were examined with a view to identifying the presence of small quantities of amorphous material using modulated temperature DSC (MTDSC). The external morphological characteristics of the two batches were similar although marked differences were seen in the moisture uptake profiles. MTDSC studies indicated that while no evidence for a glass transition could be seen for Batch 1, a T(g) and accompanying relaxation endotherm were observed for Batch 2. Comparison with a glassy form of the drug indicated that the amorphous content was in the region of 5-6% w/w in Batch 2. Dynamic moisture sorption studies indicated that while Batch 2 showed a higher uptake profile than Batch 1, addition of 5% w/w amorphous material to Batch 1 led to the establishment of a very similar profile to that seen for Batch 2. It was concluded that Batch 2 contains amorphous material which is responsible for the greater moisture uptake (and by implication poor chemical stability) of this sample and that the glass transition of this fraction may be characterised using MTDSC.
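
    The amorphous-content estimate described above amounts to ratioing a Tg-related signal of the sample against that of a fully glassy reference; the sketch below shows that arithmetic with invented heat-capacity steps (the abstract does not give the actual comparison values).

    # Hypothetical heat-capacity steps at Tg, J/(g K); illustration only.
    delta_cp_batch2 = 0.031    # step measured for Batch 2
    delta_cp_glass  = 0.580    # step for the 100% amorphous reference

    amorphous_fraction = delta_cp_batch2 / delta_cp_glass
    print(f"estimated amorphous content: {amorphous_fraction:.1%} w/w")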

  17. Formation of nanotubes in poly (vinylidene fluoride): Application as solid polymer electrolyte in DSC fabricated using carbon counter electrode

    Energy Technology Data Exchange (ETDEWEB)

    Muthuraaman, B. [Department of Energy, University of Madras, Guindy campus, Chennai 600025 (India); Maruthamuthu, P., E-mail: pmaruthu@yahoo.com [Department of Energy, University of Madras, Guindy campus, Chennai 600025 (India)

    2011-09-01

    Highlights: > Incorporation of a π-electron donor compound as dopant in poly(vinylidene fluoride) along with the redox couple (I⁻/I₃⁻), which forms brush-like nanotubes. > Investigation of the use of conducting carbon-coated FTO as a durable counter electrode and its effects in DSC. > High charge separation and the channelized flow of electrons in the nanotubes in the electrolyte favors stable performance. - Abstract: In the present work, we report the incorporation of 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt (ABTS) in poly(vinylidene fluoride) (PVDF) along with the redox couple (I⁻/I₃⁻). When ABTS, a π-electron donor, is used to dope PVDF, the polymer composite forms brush-like nanotubes and has been successfully used as a solid polymer electrolyte in dye-sensitized solar cells. With this new electrolyte, a dye-sensitized solar cell was fabricated using N3 dye adsorbed over TiO₂ nanoparticles as the photoanode and conducting carbon-cement-coated FTO as the counter electrode.

  18. Examination of fluorination effect on physical properties of saturated long-chain alcohols by DSC and Langmuir monolayer.

    Science.gov (United States)

    Nakahara, Hiromichi; Nakamura, Shohei; Okahashi, Yoshinori; Kitaguchi, Daisuke; Kawabata, Noritake; Sakamoto, Seiichi; Shibata, Osamu

    2013-02-01

    Partially fluorinated long-chain alcohols have been newly synthesized via a radical reaction followed by a reductive reaction. The fluorinated alcohols have been investigated by differential scanning calorimetry (DSC) and compression isotherms in a Langmuir monolayer state. Their melting points increase with increasing chain length due to the elongation of the methylene groups. However, the melting points for the alcohols containing shorter fluorinated moieties are lower than those for the typical hydrogenated fatty alcohols. Using the Langmuir monolayer technique, surface pressure (π)-molecular area (A) and surface potential (ΔV)-A isotherms of monolayers of the fluorinated alcohols have been measured in the temperature range from 281.2 to 303.2 K. In addition, the compressibility modulus (Cs⁻¹) is calculated from the π-A isotherms. Four kinds of the alcohol monolayers show a phase transition (π(eq)) from a disordered to an ordered state upon lateral compression. The π(eq) values increase linearly with increasing temperature. The slope of π(eq) against temperature for the alcohols with shorter fluorocarbons is unexpectedly larger than that for the corresponding fatty alcohols. Generally, fluorinated amphiphiles have a greater thermal stability (or resistance), which is a characteristic of highly fluorinated or perfluorinated compounds. Herein, however, the alcohols containing perfluorobutylated and perfluorohexylated chains show irregular thermal behavior in both the solid and monolayer states. Copyright © 2012 Elsevier B.V. All rights reserved.
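
    The compressibility modulus mentioned above is conventionally Cs⁻¹ = -A (dπ/dA), evaluated along the π-A isotherm. The sketch below applies that definition numerically to a synthetic isotherm; the functional form and the numbers are assumptions for illustration.

    import numpy as np

    A  = np.linspace(0.60, 0.20, 100)          # area per molecule, nm^2 (compression)
    pi = 50.0 * np.exp(-(A - 0.20) / 0.12)     # synthetic surface pressure, mN/m

    cs_inv = -A * np.gradient(pi, A)           # Cs^-1 = -A * dpi/dA, pointwise
    print(f"maximum Cs^-1 = {cs_inv.max():.1f} mN/m")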

  19. Visualizing Debugging Activity in Source Code Repositories

    OpenAIRE

    Voinea, Lucian; Telea, Alexandru

    2007-01-01

    We present the use of the CVSgrab visualization tool for understanding the debugging activity in the Mozilla project. We show how to display the distribution of different bug types over the project structure, locate project components which undergo heavy debugging activity, and get insight in the bug evolution in time.

  20. Visualizing Debugging Activity in Source Code Repositories

    NARCIS (Netherlands)

    Voinea, Lucian; Telea, Alexandru

    2007-01-01

    We present the use of the CVSgrab visualization tool for understanding the debugging activity in the Mozilla project. We show how to display the distribution of different bug types over the project structure, locate project components which undergo heavy debugging activity, and get insight in the bug evolution in time.

  1. Solving Semantic Searches for Source Code

    Science.gov (United States)

    2012-11-01

    but of input and expected output pairs. In this domain, those inputs take the form of strings and outputs could be one of several datatypes ...for some relaxation of CPi that yields C′Pi. Encoding weakening is performed by systematically making the constraints on a particular datatype ...the datatypes that can hold concrete or symbolic values: integers, characters, booleans, and strings. The Java implementation uses all the data types

  2. System Data Model (SDM) Source Code

    Science.gov (United States)

    2012-08-23

    [The record's text consists of source-code fragments rather than an abstract: comments from PCRE's dftables utility noting that character tables are generated in the current locale (which may default to a different encoding, e.g. ISO-8859-1), followed by rows of generated character-table data.]

  3. Recovering management information from source code

    NARCIS (Netherlands)

    Kwiatkowski, L.; Verhoef, C.

    2013-01-01

    IT has become a production means for many organizations and an important element of business strategy. Even though its effective management is a must, reality shows that this area still remains in its infancy. IT management relies profoundly on relevant information which enables risk mitigation or

  4. Code portability and data management considerations in the SAS3D LMFBR accident-analysis code

    International Nuclear Information System (INIS)

    Dunn, F.E.

    1981-01-01

    The SAS3D code was produced from a predecessor in order to reduce or eliminate interrelated problems in the areas of code portability, the large size of the code, inflexibility in the use of memory and the size of cases that can be run, code maintenance, and running speed. Many conventional solutions, such as variable dimensioning, disk storage, virtual memory, and existing code-maintenance utilities were not feasible or did not help in this case. A new data management scheme was developed, coding standards and procedures were adopted, special machine-dependent routines were written, and a portable source code processing code was written. The resulting code is quite portable, quite flexible in the use of memory and the size of cases that can be run, much easier to maintain, and faster running. SAS3D is still a large, long running code that only runs well if sufficient main memory is available

  5. Development of a certified reference material for calibration of DSC and DTA below room temperature: NMIJ CRM 5401-a, Cyclohexane for Thermal Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Shimizu, Yoshitaka, E-mail: y-shimizu@aist.go.jp; Ohte, Yoko; Kato, Kenji

    2013-09-20

    Highlights: • We developed a new CRM for quality assurance of DSC and DTA below room temperature. • Certified values are temperatures and enthalpies of two phase transitions. • Certified values agree with literature values. • Certified values are determined by adiabatic calorimetry and are traceable to the SI. • The purity of this CRM was confirmed to be more than 0.9999. - Abstract: For the quality assurance of the performance of differential scanning calorimeters (DSC) and differential thermal analyzers (DTA) below room temperature, we have developed “NMIJ CRM 5401-a, Cyclohexane for Thermal Analysis”, applicable to the calibration of DSC and DTA at low temperatures. Adiabatic calorimetry was used to measure the temperatures and enthalpies of the solid–solid phase transition and fusion as certified values, and to determine the purity in amount-of-substance fraction as information. The certified values are consistent with their corresponding literature values within expanded uncertainties and have traceability to the SI. The purity in amount-of-substance fraction was measured by the fractional melting method based on the freezing point depression method and was confirmed to be more than 0.9999. NMIJ CRM 5401-a was produced under a quality system in compliance with ISO Guide 34: 2000. We demonstrate the usefulness of NMIJ CRM 5401-a in the calibration, quality control, and validation aspects of DSC and DTA.
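
    As a sketch of how such a CRM is used in practice, the script below applies a two-point linear temperature correction to an instrument. The certified temperatures are approximate literature values for cyclohexane's solid-solid transition and fusion, and the observed onsets are invented.

    import numpy as np

    T_cert = np.array([186.1, 279.8])   # K, reference transition and fusion temps
    T_obs  = np.array([186.9, 280.9])   # K, onsets measured on the instrument (made up)

    slope, intercept = np.polyfit(T_obs, T_cert, 1)   # linear calibration curve
    print(f"a 250.00 K reading corrects to {slope * 250.0 + intercept:.2f} K")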

  6. Code-Mixing and Code Switching in The Process of Learning

    Directory of Open Access Journals (Sweden)

    Diyah Atiek Mustikawati

    2016-09-01

    Full Text Available This study aimed to describe the forms of code switching and code mixing found in teaching and learning activities in the classroom, as well as the determining factors behind the prominent forms of code switching and code mixing in question. The research is a descriptive qualitative case study which took place in Al Mawaddah Boarding School, Ponorogo. Based on the analysis and discussion stated in the previous chapter, the forms of code mixing and code switching in learning activities at Al Mawaddah Boarding School lie in the alternation among Javanese, Arabic, English and Indonesian, through the insertion of words, phrases, idioms, nouns, adjectives, clauses, and sentences. The deciding factors for code mixing in the learning process include: identification of the role, the desire to explain and interpret, sourcing from the original language and its variations, and sourcing from a foreign language. The deciding factors for code switching in the learning process include: the speaker (O1), the interlocutor (O2), the presence of a third person (O3), the topic of conversation, evoking a sense of humour, and prestige. The significance of this study is to allow readers to see the use of language in a multilingual society, especially in Al Mawaddah boarding school, regarding the rules and characteristic variation in the language of teaching and learning activities in the classroom. Furthermore, the results of this research will provide input to the ustadz/ustadzah and students in developing oral communication skills and effective teaching and learning strategies in boarding schools.

  7. Homological stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: • We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. • We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. • We find and classify all 2D homological stabilizer codes. • We find optimal codes among the homological stabilizer codes.
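
    To make the graph picture above concrete, the following self-contained check builds the star (X-type) and plaquette (Z-type) stabilizer supports of the standard toric code on an L x L torus and verifies that every star/plaquette pair overlaps on an even number of edges, hence commutes. This is the textbook construction, not code from the paper.

    L = 4
    H = lambda x, y: (x % L, y % L, 'h')          # horizontal edge of the lattice
    V = lambda x, y: (x % L, y % L, 'v')          # vertical edge of the lattice

    def star(x, y):                               # four edges meeting vertex (x, y)
        return {H(x, y), H(x - 1, y), V(x, y), V(x, y - 1)}

    def plaquette(x, y):                          # four edges bounding face (x, y)
        return {H(x, y), H(x, y + 1), V(x, y), V(x + 1, y)}

    assert all(len(star(a, b) & plaquette(c, d)) % 2 == 0
               for a in range(L) for b in range(L)
               for c in range(L) for d in range(L))
    print(f"all star/plaquette pairs commute on the {L} x {L} torus")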

  8. Real-Time Motion Capture Toolbox (RTMocap): an open-source code for recording 3-D motion kinematics to study action-effect anticipations during motor and social interactions.

    Science.gov (United States)

    Lewkowicz, Daniel; Delevoye-Turrell, Yvonne

    2016-03-01

    We present here a toolbox for the real-time motion capture of biological movements that runs in the cross-platform MATLAB environment (The MathWorks, Inc., Natick, MA). It provides instantaneous processing of the 3-D movement coordinates of up to 20 markers at a single instant. Available functions include (1) the setting of reference positions, areas, and trajectories of interest; (2) recording of the 3-D coordinates for each marker over the trial duration; and (3) the detection of events to use as triggers for external reinforcers (e.g., lights, sounds, or odors). Through fast online communication between the hardware controller and RTMocap, automatic trial selection is possible by means of either a preset or an adaptive criterion. Rapid preprocessing of signals is also provided, which includes artifact rejection, filtering, spline interpolation, and averaging. A key example is detailed, and three typical variations are developed (1) to provide a clear understanding of the importance of real-time control for 3-D motion in cognitive sciences and (2) to present users with simple lines of code that can be used as starting points for customizing experiments using the simple MATLAB syntax. RTMocap is freely available (http://sites.google.com/site/RTMocap/) under the GNU public license for noncommercial use and open-source development, together with sample data and extensive documentation.

  9. The intercomparison of aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.; Fermandjian, J.; Gauvain, J.

    1988-01-01

    The behavior of aerosols in a reactor containment vessel following a severe accident could be an important determinant of the accident source term to the environment. Various processes result in the deposition of the aerosol onto surfaces within the containment, from where they are much less likely to be released. Some of these processes are very sensitive to particle size, so it is important to model the aerosol growth processes: agglomeration and condensation. A number of computer codes have been written to model growth and deposition processes. They have been tested against each other in a series of code comparison exercises. These exercises have investigated sensitivities to physical and numerical assumptions and have also proved a useful means of quality control for the codes. Various exercises in which code predictions are compared with experimental results are now under way

  10. LFSC - Linac Feedback Simulation Code

    International Nuclear Information System (INIS)

    Ivanov, Valentin; Fermilab

    2008-01-01

    The computer program LFSC (Linac Feedback Simulation Code) is a numerical tool for simulating beam-based feedback in high performance linacs. The code LFSC is based on the earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulation of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on timescales corresponding to 5-100 Hz and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. A set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output

  11. The Multidimensional Influence of Acculturation on Digit Symbol-Coding and Wisconsin Card Sorting Test in Hispanics.

    Science.gov (United States)

    Krch, Denise; Lequerica, Anthony; Arango-Lasprilla, Juan Carlos; Rogers, Heather L; DeLuca, John; Chiaravalloti, Nancy D

    2015-01-01

    The purpose of the current study was to evaluate the relative contribution of acculturation to performance on two nonverbal tests in Hispanics. This study compared 40 Hispanics and 20 non-Hispanic whites on Digit Symbol-Coding (DSC) and the Wisconsin Card Sorting Test (WCST) and evaluated the relative contribution of the various acculturation components to cognitive test performance in the Hispanic group. Hispanics performed significantly worse on DSC and WCST relative to non-Hispanic whites. Multiple regressions conducted within the Hispanic group revealed that language use uniquely accounted for 11.0% of the variance on the DSC, 18.8% of the variance on WCST categories completed, and 13.0% of the variance in perseverative errors on the WCST. Additionally, years of education in the United States uniquely accounted for 14.9% of the variance in DSC. The significant impact of acculturation on DSC and WCST lends support to the view that nonverbal cognitive tests are not necessarily culture free. The differential contribution of acculturation proxies highlights the importance of considering these separate components when interpreting performance on neuropsychological tests in clinical and research settings. Factors such as the country where education was received may in fact be more meaningful than the years of education attained. Thus, acculturation should be considered an important factor in any cognitive evaluation of culturally diverse individuals.

  12. Diagnostic Coding for Epilepsy.

    Science.gov (United States)

    Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R

    2016-02-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  13. Coding of Neuroinfectious Diseases.

    Science.gov (United States)

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  14. X-ray image coding

    International Nuclear Information System (INIS)

    1974-01-01

    The invention aims at decreasing the effect of stray radiation in X-ray images. This is achieved by putting a plate between source and object with parallel zones of alternating high and low absorption coefficients for X-radiation. The image is scanned with the help of electronic circuits which decode the signal space-coded by the plate, thus removing the stray radiation

  15. CVSscan : Visualization of Code Evolution

    NARCIS (Netherlands)

    Voinea, Lucian; Telea, Alex; Wijk, Jarke J. van

    2005-01-01

    During the life cycle of a software system, the source code is changed many times. We study how developers can be enabled to get insight in these changes, in order to understand the status, history and structure better, as well as for instance the roles played by various contributors. We present

  16. Computer code abstract: NESTLE

    International Nuclear Information System (INIS)

    Turinsky, P.J.; Al-Chalabi, R.M.K.; Engrand, P.; Sarsour, H.N.; Faure, F.X.; Guo, W.

    1995-01-01

    NESTLE is a few-group neutron diffusion equation solver utilizing the nodal expansion method (NEM) for eigenvalue, adjoint, and fixed-source steady-state and transient problems. The NESTLE code solves the eigenvalue (criticality), eigenvalue adjoint, external fixed-source steady-state, and external fixed-source or eigenvalue initiated transient problems. The eigenvalue problem allows criticality searches to be completed, and the external fixed-source steady-state problem can search to achieve a specified power level. Transient problems model delayed neutrons via precursor groups. Several core properties can be input as time dependent. Two or four energy groups can be utilized, with all energy groups being thermal groups (i.e., with upscatter) if desired. Core geometries modeled include Cartesian and hexagonal. Three-, two-, and one-dimensional models can be utilized with various symmetries. The thermal conditions predicted by the thermal-hydraulic model of the core are used to correct cross sections for temperature and density effects. Cross sections are parameterized by color, control rod state (i.e., in or out), and burnup, allowing fuel depletion to be modeled. Either a macroscopic or microscopic model may be employed
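
    A greatly simplified sketch of the class of problem such a solver addresses: a one-group, one-dimensional finite-difference diffusion eigenvalue calculation by power iteration (NESTLE itself uses the multigroup nodal expansion method). All cross sections below are invented round numbers, and zero flux is imposed at the core boundary.

    import numpy as np

    N, h = 50, 1.0                            # nodes, node width (cm)
    D, sig_a, nu_sig_f = 1.0, 0.02, 0.025     # cm, 1/cm, 1/cm (made-up values)

    # Discretize -D d2(phi)/dx2 + sig_a*phi = (1/k) * nu_sig_f * phi.
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = 2.0 * D / h**2 + sig_a
        if i > 0:
            A[i, i - 1] = -D / h**2
        if i < N - 1:
            A[i, i + 1] = -D / h**2

    phi, k = np.ones(N), 1.0
    for _ in range(200):                      # power iteration on the fission source
        src = nu_sig_f * phi
        phi = np.linalg.solve(A, src / k)
        k *= (nu_sig_f * phi).sum() / src.sum()
    print(f"k_eff = {k:.4f}")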

  17. Characterisation of 1,3-diammonium propylselenate monohydrate by XRD, FT-IR, FT-Raman, DSC and DFT studies

    Science.gov (United States)

    Thirunarayanan, S.; Arjunan, V.; Marchewka, M. K.; Mohan, S.; Atalay, Yusuf

    2016-03-01

    The crystals of 1,3-diammonium propylselenate monohydrate (DAPS) were prepared and characterised by X-ray diffraction (XRD), FT-IR and FT-Raman spectroscopy, and DFT/B3LYP methods. The compound comprises protonated propylammonium moieties (diammonium propyl cations), selenate anions and a water molecule, which are held together by a number of hydrogen bonds and form infinite chains. The XRD data confirm the transfer of two protons from selenic acid to the 1,3-diaminopropane molecule. The DAPS complex is stabilised by the presence of O-H···O and N-H···O hydrogen bonds as well as by electrostatic interactions. The N···O and O···O bond distances are 2.82-2.91 and 2.77 Å, respectively. The FT-IR and FT-Raman spectra of 1,3-diammonium propylselenate monohydrate were recorded and the complete vibrational assignments are discussed. The geometry was optimised by the B3LYP method using 6-311G, 6-311+G and 6-311+G* basis sets, and the energy, structural parameters, vibrational frequencies, and IR and Raman intensities were determined. Differential scanning calorimetry (DSC) data are also presented to analyse the possibility of a phase transition. A complete natural bond orbital (NBO) analysis was carried out to analyse the intramolecular electronic interactions and their stabilisation energies. The electrostatic potential of the complex lies in the range +1.902 × 10⁻² e to -1.902 × 10⁻² e, and the total electron density lies between +8.43 × 10⁻² and -8.43 × 10⁻².

  18. Variable code gamma ray imaging system

    International Nuclear Information System (INIS)

    Macovski, A.; Rosenfeld, D.

    1979-01-01

    A gamma-ray source distribution in the body is imaged onto a detector using an array of apertures. The transmission of each aperture is modulated using a code such that the individual views of the source through each aperture can be decoded and separated. The codes are chosen to maximize the signal to noise ratio for each source distribution. These codes determine the photon collection efficiency of the aperture array. Planar arrays are used for volumetric reconstructions and circular arrays for cross-sectional reconstructions. 14 claims
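
    A one-dimensional toy version of the principle in this record: a cyclic mask built from the quadratic residues mod 7 (a (7,3,1) cyclic difference set) encodes a source distribution, and correlating the detector data with the same mask recovers it exactly. The mask and source values are illustrative, not from the patent.

    import numpy as np

    p = 7
    mask = np.zeros(p); mask[[1, 2, 4]] = 1           # quadratic residues mod 7
    obj  = np.array([0., 5., 0., 1., 0., 0., 3.])     # unknown source distribution

    # Each detector bin sees the source through a cyclically shifted mask.
    shift = lambda a, i: np.roll(a, -i)
    det = np.array([obj @ shift(mask, i) for i in range(p)])

    # This mask's cyclic autocorrelation is 3 at lag 0 and 1 elsewhere, so
    # correlating again gives r[i] = 2*obj[i] + sum(obj), which inverts easily.
    r = np.array([det @ shift(mask, i) for i in range(p)])
    total = det.sum() / mask.sum()                    # = sum(obj)
    print(np.round((r - total) / 2.0, 6))             # reproduces obj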

  19. Entropy Coding in HEVC

    OpenAIRE

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...

  20. Generalized concatenated quantum codes

    International Nuclear Information System (INIS)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematic way of constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  1. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
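
    For context, a minimal LT-style encoder, the rateless building block that the feedback redesign above starts from; the degree distribution here is a toy one, not one of the feedback-aware distributions proposed in the paper.

    import random

    def lt_encode(source_symbols, rng=random.Random(1)):
        """Yield an endless stream of (neighbor indices, XOR value) symbols."""
        k = len(source_symbols)
        degrees, weights = [1, 2, 3, 4], [0.1, 0.5, 0.3, 0.1]   # toy distribution
        while True:
            d = rng.choices(degrees, weights)[0]
            idx = rng.sample(range(k), d)
            val = 0
            for i in idx:
                val ^= source_symbols[i]
            yield idx, val

    gen = lt_encode([0x11, 0x22, 0x33, 0x44, 0x55])
    for _ in range(3):
        print(next(gen))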

  2. The computer code system for reactor radiation shielding in design of nuclear power plant

    International Nuclear Information System (INIS)

    Li Chunhuai; Fu Shouxin; Liu Guilian

    1995-01-01

    The computer code system used in reactor radiation shielding design of nuclear power plants includes source term codes, discrete ordinates transport codes, Monte Carlo and albedo Monte Carlo codes, kernel integration codes, an optimization code, a temperature field code, a skyshine code, coupling calculation codes and some processing codes for data libraries. This computer code system offers a satisfactory variety of codes and complete sets of data libraries. It is widely used in reactor radiation shielding design and safety analysis of nuclear power plants and other nuclear facilities

  3. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AVS2

  4. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code; this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skills

  5. Simultaneous formation and detection of the reaction product of solid-state aspartame sweetener by FT-IR/DSC microscopic system.

    Science.gov (United States)

    Lin, S Y; Cheng, Y D

    2000-10-01

    The solid-state stability of aspartame hemihydrate (APM) sweetener during thermal treatment is important information for the food industry. The present study uses the novel technique of Fourier transform infrared microspectroscopy equipped with differential scanning calorimetry (FT-IR/DSC microscopic system) to accelerate and simultaneously determine the temperature-dependent impurity formation of solid-state APM. The results indicate a dramatic change in the IR spectra from 50, 110 or 153 degrees C, attributed respectively to the onset temperatures of water evaporation, dehydration and cyclization. It is suggested that the processes of dehydration and intramolecular cyclization occurred in the solid-state APM during the heating process. The impurity 3-carboxymethyl-6-benzyl-2,5-diketopiperazine (DKP) formed from solid-state APM via intramolecular cyclization and liberation of methanol. This was evidenced by this novel FT-IR/DSC microscopic system in a one-step procedure.

  6. Thermal dehydration of cobalt and zinc formate dihydrates by controlled-rate thermogravimetry (CRTG) and simultaneous X-ray diffractometry-differential scanning calorimetry (XRD-DSC)

    International Nuclear Information System (INIS)

    Arii, T.; Kishi, A.

    1999-01-01

    The thermal dehydration of the similar hydrated salts cobalt and zinc formate dihydrates has been studied successfully by means of simultaneous X-ray diffractometry-differential scanning calorimetry (XRD-DSC) and controlled-rate thermogravimetry (CRTG). X-ray diffraction analysis recorded simultaneously indicates that the resulting anhydrous product Zn(HCO₂)₂ was crystalline, while Co(HCO₂)₂ was amorphous. The XRD-DSC data proved invaluable in verifying the interpretation of overlapping processes in thermal events. In addition, the differences in the resulting anhydrous products can be explained from the kinetic analysis results based on the CRTG data. The kinetic mechanism governing the dehydration of zinc formate dihydrate is a nucleation and growth process, while in the case of cobalt formate dihydrate a phase-boundary-controlled reaction is the governing mechanism. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  7. Study of the developed precipitates in Al-0.63Mg-0.37Si-0.5Cu (wt.%) alloy by using DSC and TEM techniques

    Energy Technology Data Exchange (ETDEWEB)

    Gaber, A. [Physics Department, Faculty of Science, Assiut University (Egypt)]. E-mail: gaberaf@acc.aun.edu.eg; Ali, A. Mossad [Physics Department, Faculty of Science, Assiut University (Egypt); Matsuda, K. [Faculty of Engineering, University of Toyama (Japan); Kawabata, T. [Faculty of Engineering, University of Toyama (Japan); Yamazaki, T. [Faculty of Engineering, University of Toyama (Japan); Ikeno, S. [Faculty of Engineering, University of Toyama (Japan)

    2007-04-25

    Heat-treatable Al-Mg-Si alloys containing Cu can be strengthened by the precipitation of nano-scale metastable precipitates. In order to follow the precipitation sequence in balanced Al-1 mass% Mg₂Si containing 0.5 mass% Cu during continuous heating, differential scanning calorimetry (DSC) was performed. Analysis of non-isothermal DSC scans at various heating rates was carried out to evaluate the overall activation energies associated with the precipitation processes, and the mechanism of the developed precipitates has therefore been characterized. The most important developed precipitates contributing to the strength of the alloy are the random, Q' and β' precipitates. According to the obtained activation energies, the kinetics of the evolved Q' precipitates could be controlled by the diffusion of Mg, Si and Cu in the crystal lattice of the alloy. Both conventional and high-resolution transmission electron microscopy (HRTEM) were utilized to confirm the obtained results.
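
    The activation-energy evaluation described above can follow the Kissinger form ln(beta/Tp^2) = -Ea/(R*Tp) + const, fitted across heating rates. The sketch below applies it to invented (heating rate, peak temperature) pairs, not the paper's measurements.

    import numpy as np

    R = 8.314                                      # J/(mol K)
    beta = np.array([5.0, 10.0, 20.0, 40.0])       # heating rates, K/min
    Tp   = np.array([520.0, 529.0, 538.0, 548.0])  # exothermic peak temps, K

    slope, _ = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)   # slope = -Ea/R
    print(f"apparent activation energy Ea = {-slope * R / 1000:.0f} kJ/mol")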

  8. Short chain lead (II) alkanoates as ionic liquids and glass formers: A d.s.c., X-ray diffraction and FTIR spectroscopy study

    International Nuclear Information System (INIS)

    Martinez Casado, F.J.; Sanchez Arenas, A.; Garcia Perez, M.V.; Redondo Yelamos, M.I.; Lopez de Andres, S.; Cheda, J.A.R.

    2007-01-01

    Three members of the lead (II) n-alkanoates (from ethanoate to n-butanoate) have been synthesized, purified and studied by d.s.c., X-ray diffraction, and FTIR spectroscopy. Lead (II) acetate, propanoate, and butanoate present only a melting transition, at T = (452.6, 398.2, and 346.5) K, with ΔfusH = (16.0, 13.1, and 15.6) kJ·mol⁻¹ and ΔfusS = (35.3, 32.8, and 45.1) J·mol⁻¹·K⁻¹, respectively. These temperature data correct to a great extent the historical values reported in the literature. These three members readily quench into a glass state. Their corresponding Tg values are (314.4, 289.0, and 274.9) K, respectively, measured by d.s.c. at a heating rate of 5 K·min⁻¹

  9. The RETRAN-03 computer code

    International Nuclear Information System (INIS)

    Paulsen, M.P.; McFadden, J.H.; Peterson, C.E.; McClure, J.A.; Gose, G.C.; Jensen, P.J.

    1991-01-01

    The RETRAN-03 code development effort is designed to overcome the major theoretical and practical limitations associated with the RETRAN-02 computer code. The major objectives of the development program are to extend the range of analyses that can be performed with RETRAN, to make the code more dependable and faster running, and to have a more transportable code. The first two objectives are accomplished by developing new models and adding other models to the RETRAN-02 base code. The major model additions for RETRAN-03 are as follows: implicit solution methods for the steady-state and transient forms of the field equations; additional options for the velocity difference equation; a new steady-state initialization option for computing low-power steam generator initial conditions; models for nonequilibrium thermodynamic conditions; and several special-purpose models. The source code and the environmental library for RETRAN-03 are written in standard FORTRAN 77, which allows the last objective to be fulfilled. Some models in RETRAN-02 have been deleted in RETRAN-03. In this paper the changes between RETRAN-02 and RETRAN-03 are reviewed

  10. Characterization of Two Different Clay Materials by Thermogravimetry (TG), Differential Scanning Calorimetry (DSC), Dilatometry (DIL) and Mass Spectrometry (MS) - 12215

    Energy Technology Data Exchange (ETDEWEB)

    Post, Ekkehard [NETZSCH Geraetebau GmbH, Wittelsbacherstrasse 42, 95100 Selb (Germany); Henderson, Jack B. [NETZSCH Instruments North America, LLC, 129 Middlesex Turnpike, Burlington, MA 01803 (United States)

    2012-07-01

    An illitic clay containing a higher amount of organic material was investigated by dilatometry, thermogravimetry and differential scanning calorimetry. The evolved gases were studied during simultaneous TG-DSC (STA) and dilatometer measurements with simultaneous mass spectrometry in inert-gas and oxidizing atmospheres. The dilatometer results were compared with the STA-MS results, which confirmed and explained the reactions found during heating of the clay, such as dehydration, dehydroxylation, shrinkage, sintering, the quartz phase transition, combustion or pyrolysis of organics, and the solid-state reactions forming meta-kaolinite and mullite. In inert-gas atmosphere, the high amount of organic material most probably effects a reduction of the oxides, which leads to a higher mass loss than in oxidizing atmosphere. Due to this reduction, an additional CO₂ emission at around 1000 deg. C was detected which did not occur in oxidizing atmosphere. Furthermore, TG-MS results of a clay containing alkali nitrates show that during heating, in addition to water and CO₂, NO and NO₂ are also evolved, leading to additional mass-loss steps. These types of clays showed water loss starting around 100 deg. C or even earlier; this relatively small mass loss is accompanied by only slight shrinkage during the expansion of the sample. The dehydroxylation and the high crystalline quartz content result in considerable shrinkage and expansion of the clay. During the usual solid-state reaction, where the clay structure collapses, the remaining material finally shrinks down to a so-called clinker. With the help of MS the TG steps can be better interpreted, as the evolved gases are identified. The MS also clearly shows that mass number 44 is found during the TG step of the illitic clay at about 900 deg. C in inert gas, which was interpreted

  11. Optimization of DSC MRI Echo Times for CBV Measurements Using Error Analysis in a Pilot Study of High-Grade Gliomas.

    Science.gov (United States)

    Bell, L C; Does, M D; Stokes, A M; Baxter, L C; Schmainda, K M; Dueck, A C; Quarles, C C

    2017-09-01

    The optimal TE must be calculated to minimize the variance in CBV measurements made with DSC MR imaging. Simulations can be used to determine the influence of the TE on CBV, but they may not adequately recapitulate the in vivo heterogeneity of precontrast T2*, contrast agent kinetics, and the biophysical basis of contrast-agent-induced T2* changes. The purpose of this study was to combine quantitative multiecho DSC MRI T2* time curves with error analysis in order to compute the optimal TE for a traditional single-echo acquisition. Eleven subjects with high-grade gliomas were scanned at 3T with a dual-echo DSC MR imaging sequence to quantify contrast-agent-induced T2* changes in this retrospective study. Optimized TEs were calculated with propagation-of-error analysis for high-grade glial tumors, normal-appearing white matter, and arterial input function estimation. The optimal TE is a weighted average of the T2* values that occur as a contrast agent bolus traverses a voxel. The mean optimal TEs were 30.0 ± 7.4 ms for high-grade glial tumors, 36.3 ± 4.6 ms for normal-appearing white matter, and 11.8 ± 1.4 ms for arterial input function estimation; the differences among the mean optimal TE values of these 3 ROIs were statistically significant (repeated-measures ANOVA). The optimal TE for the arterial input function estimation is much shorter; this finding implies that quantitative DSC MR imaging acquisitions would benefit from multiecho acquisitions. In the case of a single-echo acquisition, the optimal TE prescribed should be 30-35 ms (without a preload) and 20-30 ms (with a standard full-dose preload). © 2017 by American Journal of Neuroradiology.
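
    A hedged illustration of why multiecho acquisitions help: with two echoes, R2*(t) follows directly from the echo signal ratio, independent of T1-related amplitude changes. The echo times and signal curves below are synthetic stand-ins for a dual-echo DSC acquisition.

    import numpy as np

    TE1, TE2 = 0.007, 0.030                      # echo times, s (assumed values)
    t = np.arange(120)                           # dynamic frames
    r2s_true = 10.0 + 8.0 * np.exp(-((t - 60.0) / 8.0) ** 2)   # bolus-like R2*, 1/s

    S1 = np.exp(-TE1 * r2s_true)                 # echo-1 signal, arbitrary units
    S2 = np.exp(-TE2 * r2s_true)                 # echo-2 signal

    r2s = np.log(S1 / S2) / (TE2 - TE1)          # per-frame R2* estimate
    delta_r2s = r2s - r2s[:30].mean()            # subtract pre-bolus baseline
    print(f"peak delta-R2* = {delta_r2s.max():.2f} 1/s")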

  12. Supervised Transfer Sparse Coding

    KAUST Repository

    Al-Shedivat, Maruan

    2014-07-27

    A combination of the sparse coding and transfer learning techniques was shown to be accurate and robust in classification tasks where training and testing objects have a shared feature space but are sampled from different underlying distributions, i.e., belong to different domains. The key assumption in such a case is that in spite of the domain disparity, samples from different domains share some common hidden factors. Previous methods often assumed that all the objects in the target domain are unlabeled, and thus the training set solely comprised objects from the source domain. However, in real world applications, the target domain often has some labeled objects, or one can always manually label a small number of them. In this paper, we explore such a possibility and show how a small number of labeled data in the target domain can significantly leverage classification accuracy of the state-of-the-art transfer sparse coding methods. We further propose a unified framework named supervised transfer sparse coding (STSC) which simultaneously optimizes sparse representation, domain transfer and classification. Experimental results on three applications demonstrate that a little manual labeling and then learning the model in a supervised fashion can significantly improve classification accuracy.
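
    A sketch of the plain (non-transfer, unsupervised) sparse-coding step that STSC builds on: learn a dictionary and codes that reconstruct the data under an L1 penalty. scikit-learn's DictionaryLearning stands in for the paper's solver, and random data replace the domain samples.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    X = np.random.default_rng(0).normal(size=(200, 64))   # 200 samples, 64 features

    dl = DictionaryLearning(n_components=16, alpha=1.0, max_iter=20, random_state=0)
    codes = dl.fit_transform(X)                           # sparse representations
    print("dictionary shape:", dl.components_.shape,
          "| mean nonzeros per code:", (codes != 0).sum(axis=1).mean())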

  13. Performance evaluation based on data from code reviews

    OpenAIRE

    Andrej, Sekáč

    2016-01-01

    Context. Modern code review tools such as Gerrit have made available great amounts of code review data from different open source projects as well as other commercial projects. Code reviews are used to keep the quality of produced source code under control but the stored data could also be used for evaluation of the software development process. Objectives. This thesis uses machine learning methods for an approximation of review expert’s performance evaluation function. Due to limitations in ...

  14. Multivariate analysis of DSC-XRD simultaneous measurement data: a study of multistage crystalline structure changes in a linear poly(ethylene imine) thin film.

    Science.gov (United States)

    Kakuda, Hiroyuki; Okada, Tetsuo; Otsuka, Makoto; Katsumoto, Yukiteru; Hasegawa, Takeshi

    2009-01-01

    A multivariate analytical technique has been applied to the analysis of simultaneous measurement data from differential scanning calorimetry (DSC) and X-ray diffraction (XRD) in order to study thermal changes in crystalline structure of a linear poly(ethylene imine) (LPEI) film. A large number of XRD patterns generated from the simultaneous measurements were subjected to an augmented alternative least-squares (ALS) regression analysis, and the XRD patterns were readily decomposed into chemically independent XRD patterns and their thermal profiles were also obtained at the same time. The decomposed XRD patterns and the profiles were useful in discussing the minute peaks in the DSC. The analytical results revealed the following changes of polymorphisms in detail: An LPEI film prepared by casting an aqueous solution was composed of sesquihydrate and hemihydrate crystals. The sesquihydrate one was lost at an early stage of heating, and the film changed into an amorphous state. Once the sesquihydrate was lost by heating, it was not recovered even when it was cooled back to room temperature. When the sample was heated again, structural changes were found between the hemihydrate and the amorphous components. In this manner, the simultaneous DSC-XRD measurements combined with ALS analysis proved to be powerful for obtaining a better understanding of the thermally induced changes of the crystalline structure in a polymer film.
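
    The core ALS step described above can be sketched in a few lines: alternately solve two least-squares problems for the thermal profiles C and the component patterns S so that X ~ C S, clipping to non-negativity after each solve. Synthetic patterns replace the measured XRD data.

    import numpy as np

    rng = np.random.default_rng(0)
    S_true = rng.random((2, 300))                          # two component patterns
    C_true = np.column_stack([np.linspace(1, 0, 80),       # their thermal profiles
                              np.linspace(0, 1, 80)])
    X = C_true @ S_true + 0.01 * rng.random((80, 300))     # scans x diffraction angles

    C = rng.random((80, 2))                                # random starting profiles
    for _ in range(100):
        S = np.clip(np.linalg.lstsq(C, X, rcond=None)[0], 0, None)
        C = np.clip(np.linalg.lstsq(S.T, X.T, rcond=None)[0].T, 0, None)
    print("relative residual:", np.linalg.norm(X - C @ S) / np.linalg.norm(X))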

  15. Determination of the thermo-mechanical properties in starch and starch/gluten systems at low moisture content - a comparison of DSC and TMA.

    Science.gov (United States)

    Homer, Stephen; Kelly, Michael; Day, Li

    2014-08-08

    The impact of heating rate on the glass transition (Tg) and melting transitions observed by differential scanning calorimetry (DSC) on starch and a starch/gluten blend (80:20 ratio) at low moisture content was examined. The results were compared to those determined by thermo-mechanical analysis (TMA). Comparison with dynamic mechanical thermal analysis (DMTA) and phase transition analysis (PTA) is also discussed. Higher heating rates increased the determined Tg as well as the melting peak temperatures in both starch and the starch/gluten blend. A heating rate of 5°C/min gave the most precise value of Tg while still being clearly observed above the baseline. Tg values determined from the first and second DSC scans were found to differ significantly and retrogradation of starch biopolymers may be responsible. Tg values of starch determined by TMA showed good agreement with DSC results where the Tg was below 80°C. However, moisture loss led to inaccurate Tg determination for TMA analyses at temperatures above 80°C. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. GOC: General Orbit Code

    International Nuclear Information System (INIS)

    Maddox, L.B.; McNeilly, G.S.

    1979-08-01

    GOC (General Orbit Code) is a versatile program which will perform a variety of calculations relevant to isochronous cyclotron design studies. In addition to the usual calculations of interest (e.g., equilibrium and accelerated orbits, focusing frequencies, field isochronization, etc.), GOC has a number of options to calculate injections with a charge change. GOC provides both printed and plotted output, and will follow groups of particles to allow determination of finite-beam properties. An interactive PDP-10 program called GIP, which prepares input data for GOC, is available. GIP is a very easy and convenient way to prepare complicated input data for GOC. Enclosed with this report are several microfiche containing source listings of GOC and other related routines and the printed output from a multiple-option GOC run

  17. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  18. Locally orderless registration code

    DEFF Research Database (Denmark)

    2012-01-01

    This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks to be installed and is provided for 64 bit on Mac, Linux and Windows.

  19. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    Shannon limit of the channel. Among the earliest discovered codes that approach the Shannon limit were the low density parity check (LDPC) codes. The term low density arises from the property of the parity check matrix defining the code. We will now define this matrix and the role that it plays in decoding.
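
    To illustrate the parity-check matrix the article goes on to define: the codewords of a linear code are exactly the vectors c with H c = 0 (mod 2), and a nonzero syndrome flags failed checks. The small 3 x 6 matrix below is a toy example, not one from the article.

    import numpy as np

    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])         # each check touches few bits ("low density")

    c = np.array([1, 0, 1, 1, 1, 0])           # a valid codeword: all checks satisfied
    print("syndrome of codeword:", H @ c % 2)

    c[2] ^= 1                                   # flip one bit
    print("syndrome after error: ", H @ c % 2)  # nonzero entries mark failed checks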

  20. Manually operated coded switch

    International Nuclear Information System (INIS)

    Barnette, J.H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made